This is one of a series of posts looking at the four key dimensions of data and information that must be addressed in a data strategy – reach, richness, agility and assurance.
In previous posts, I looked at Reach, which is about the range of data sources and destinations, and Richness, which is about the complexity of data. Now let me turn to Agility – the speed and flexibility of response to new opportunities and changing requirements.
Not surprisingly, lots of people are talking about data agility, including some who want to persuade you that their products and technologies will help you to achieve it. Here are a few of them.
"Data agility is when your data can move at the speed of your business. For companies to achieve true data agility, they need to be able to access the data they need, when and where they need it." (Pinckney)
"Collecting first-party data across the customer lifecycle at speed and scale." (Jones)
"Keep up with an explosion of data. … For many enterprises, their ability to collect data has surpassed their ability to organize it quickly enough for analysis and action." (Scott)
"How quickly and efficiently you can turn data into accurate insights." (Tuchen)
But before we look at technological solutions for data agility, we need to understand the requirements. The first requirement is to empower, enable and encourage people and teams to work with data and intelligence at a good tempo, with fast feedback and learning loops.
Under a trimodal approach, for example, pioneers are expected to operate at a faster tempo, setting up quick experiments, so they should not be put under the same kind of governance as settlers and town planners. Data scientists often operate in pioneer mode, experimenting with algorithms that might turn out to help the business, but often don’t. Obviously that doesn’t mean zero governance, but appropriate governance. People need to understand what kinds of risk-taking are accepted or even encouraged, and what should be avoided. In some organizations, this will mean a shift in culture.
Beyond trimodal, there is a push towards self-service (“citizen”) data and intelligence. This means encouraging and enabling active participation from people who are not doing this on a full-time basis, and may have lower levels of specialist knowledge and skill.
Besides knowledge and skills, there are other important enablers that people need to work with data. They need to be able to navigate and interpret, and this calls for meaningful metadata, such as data dictionaries and catalogues. They also need proper tools and platforms. Above all, they need an awareness of what is possible, and how it might be useful.
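As a minimal sketch of what "meaningful metadata" might look like in practice, here is one possible shape for a data catalogue entry. The field names and example values are purely illustrative, not drawn from any particular catalogue product.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    """An illustrative data catalogue record: enough metadata for a
    self-service user to find, interpret and trust a data element."""
    name: str            # how the element is known to users
    description: str     # business meaning, not just the column name
    owner: str           # who to ask when something looks wrong
    source_system: str   # where the data originates
    tags: list[str] = field(default_factory=list)  # e.g. sensitivity, domain

# A hypothetical entry for a customer email field
customer_email = CatalogueEntry(
    name="customer_email",
    description="Primary contact email, verified at signup",
    owner="CRM team",
    source_system="crm_core",
    tags=["PII", "contact"],
)
```

Even a lightweight record like this answers the questions a "citizen" data user needs answered before they can work confidently: what the data means, where it comes from, and who owns it.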
Meanwhile, enabling people to work quickly and effectively with data is not just about giving them relevant information, along with decent tools and training. It’s also about removing the obstacles.
Obstacles? What obstacles?
In most large organizations, there is some degree of duplication and fragmentation of data across enterprise systems. There are many reasons why this happens, and the effects may be felt in various areas of the business, degrading the performance and efficiency of various business functions, as well as compromising the quality and consistency of management information. System interoperability may be inadequate, resulting in complicated workflows and error-prone operations.
But perhaps the most important effect is the inhibition of innovation. Any new IT initiative will need either to plug into the available data stores or to create new ones. If this is to be done without adding further to technical debt, the data engineering (including integration and migration) is often more laborious than building the new functionality the business wants.
Depending on whom you talk to, this challenge can be framed in various ways – data engineering, data integration and integrity, data quality, master data management. The MDM vendors will suggest one approach, the iPaaS vendors will suggest another approach, and so on. Before you get lured along a particular path, it might be as well to understand what your requirements actually are, and how these fit into your overall data strategy.
And of course your data strategy needs to allow for future growth and discovery. It's no good implementing a single source of truth or a universal API to meet your current view of CUSTOMER or PRODUCT unless this solution is capable of evolving as your data requirements evolve, with ever-increasing reach and richness. As I've often discussed on this blog, one approach to building in flexibility is to use appropriate architectural patterns, such as loose coupling and layering. These patterns give you some protection against future variation and changing requirements, and should probably feature somewhere in your data strategy.
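To make the layering idea concrete, here is a minimal sketch of a data access layer. All names (`CustomerStore`, `LegacyCrmStore`, the column names) are hypothetical; the point is only that consumers depend on a stable interface and canonical shape, not on the physical schema of any one store, so the store can be replaced without rippling changes through the business logic.

```python
from abc import ABC, abstractmethod

class CustomerStore(ABC):
    """The layer consumers depend on: a stable, canonical view of CUSTOMER."""
    @abstractmethod
    def get_customer(self, customer_id: str) -> dict: ...

class LegacyCrmStore(CustomerStore):
    """One possible backing store, with its own legacy column names.
    Swapping this class for a new implementation leaves consumers untouched."""
    def __init__(self):
        # Stand-in for a real database; rows use the legacy schema
        self._rows = {"c1": {"cust_id": "c1", "email_addr": "a@example.com"}}

    def get_customer(self, customer_id: str) -> dict:
        row = self._rows[customer_id]
        # Map legacy column names to the canonical shape at the boundary
        return {"id": row["cust_id"], "email": row["email_addr"]}

def send_welcome(store: CustomerStore, customer_id: str) -> str:
    """Business logic written against the interface, not the schema."""
    customer = store.get_customer(customer_id)
    return f"Welcome, {customer['email']}"
```

When the underlying CRM is eventually migrated, only the mapping inside the new store implementation changes; `send_welcome` and everything like it stays as it is.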
Next post – Assurance
Richard Jones, Agility and Data: The Heart of a Digital Experience Strategy (WayIn, 22 November 2018)
Tom Pinckney, What’s Data Agility Anyway (Braze Magazine, 25 March 2019)
Jim Scott, Why Data Agility is a Key Driver of Big Data Technology Development (24 March 2015)
Mike Tuchen, Do You Have the Data Agility Your Business Needs? (Talend, 14 June 2017)