Anyone who has wrestled with data architecture and information management on an enterprise scale understands the problem: things take a long time!
Any time a new data shape or source appears, changes will be required. Any time there is a new interpretation of a particular data element, changes will be required. Any time an action is required because of an interpretation, changes will be required.
No one change is inherently difficult. Coordinating many changes across many touchpoints is very difficult.
Not to sound trite, but in the digital era, constant change is the new normal. New data, new interpretations, and new actions all demand agile responses. And while many organizations might be agile in some respects, few are agile around facts and what they mean.
So, what does good look like? It’s being agile with data and its meaning.
Data agility is the ability to make simple, powerful, and immediate changes to any aspect of how information is interpreted and acted on.
Data agility is important in any digital business model. Much as a physical business wants to be agile with regard to its supply chain, manufacturing, and distribution, a digital business wants to be agile with its data.
Why Data Agility Is Becoming More Important
The reason is simple: facts and what they mean can change, often very quickly. Examples abound of competent organizations that were caught flat-footed when they did.
Anytime you are managing risk, you want data agility.
Life sciences organizations may need to pivot their focus quickly in light of new information and health care concerns, such as a global pandemic.
Intelligence agencies may need to quickly reposition their information security posture after a new development, such as a public leak.
Once you get past the headlines, there is a vast universe of more pragmatic concerns: getting new products to market faster, taking exceptional care of customers, managing risk in better ways, and so on. All demand data agility.
Without a better way to store, share, manage, and protect data, along with everything we know about it, data agility will remain an elusive goal for many.
How Is Data Agility Created?
Most organizations attack the problem by investing in three areas.
The first is a data layer, usually a mix of internal systems. This is how things get done on a granular level.
The second is an integration layer that tries to make the disparate pieces work more as a whole, from both a workflow perspective and a reporting perspective.
The third is an interpretive layer of knowledge, guidelines, dictionaries, ontologies, knowledge graphs, and other artifacts that help people interpret what the data might mean in context.
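To see why this arrangement resists change, consider a deliberately simplified sketch in Python; every name and structure here is hypothetical, standing in for real systems:

```python
# A toy model of the three disconnected layers. All names are
# hypothetical; in reality these are databases, ETL jobs, and wikis.

# 1. Data layer: an internal system stores the raw fact.
crm_record = {"cust_rev": 5_000_000}

# 2. Integration layer: a pipeline maps system fields to report fields.
field_mapping = {"cust_rev": "annual_revenue"}

# 3. Interpretive layer: a glossary records what the element means.
glossary = {"annual_revenue": "Gross revenue, USD, trailing 12 months"}

# Reinterpreting the element (net instead of gross revenue, say, or a
# currency change) requires coordinated edits in all three places,
# usually owned by three different teams, for every affected element.
```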
Built this way, it will never, ever be agile: every change must be coordinated across all three layers, and across the teams that own them.
Data agility is created by deeply integrating data (digital facts) with what we know about the data (encoded metadata) and with what the facts mean (semantic interpretations).
Put simply, data agility comes from connecting active data, active metadata, and active meaning.
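To make “connecting” concrete, here is a minimal sketch using Python and the open-source rdflib library. The ex: namespace, names, and properties are all hypothetical, and a production semantic data platform operates at a very different scale, but the shape of the idea is the same: the fact, its metadata, and its meaning live in one graph, so one query – and one change – can span all three.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.com/")  # hypothetical namespace
g = Graph()

# Active data: a digital fact.
g.add((EX.acmeCorp, EX.revenue, Literal(5_000_000)))

# Active metadata: what we know about the data element.
g.add((EX.revenue, EX.source, Literal("Q3 finance feed")))
g.add((EX.revenue, EX.sensitivity, Literal("confidential")))

# Active meaning: the semantic interpretation of the element.
g.add((EX.revenue, RDF.type, EX.FinancialMetric))
g.add((EX.revenue, RDFS.label, Literal("Gross revenue, USD")))

# One query spans fact, metadata, and meaning together. Changing the
# interpretation is one triple, visible to every consumer at once.
results = g.query("""
    PREFIX ex: <http://example.com/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?entity ?value ?label ?sensitivity WHERE {
        ?entity ?metric ?value .
        ?metric a ex:FinancialMetric ;
                rdfs:label ?label ;
                ex:sensitivity ?sensitivity .
    }
""")
for row in results:
    print(row.entity, row.value, row.label, row.sensitivity)
```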
The Implications Are Magical
Depending on which aspects of the enterprise data challenge you have personally wrestled with, the transformative results are – well – magical. It would be hard to describe them otherwise, unless you prefer words like transformational, revolutionary, or game-changing.
The ideal candidates are use cases that involve (a) smart people making decisions of consequence by interpreting complex data, (b) important organizational knowledge that needs to be used everywhere, kept current, and kept compliant, or (c) a combination of the two.
If one were to take horizontal slices – that is, problems that everyone has – one could start with any aspect of information security.
With data agility, any infosec policy you can conceive can be implemented – immediately and verifiably. Thinking more broadly, good infosec demands being able to interpret the meaning of facts quickly and authoritatively, and to act on that interpretation immediately.
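As a minimal sketch of that idea, again in Python with rdflib (the classification scheme and all names are hypothetical): because the policy is written against meaning rather than against individual records, a single edit to the meaning layer changes behavior everywhere the policy is evaluated, and rerunning the same query verifies it.

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.com/")  # hypothetical namespace
g = Graph()

# Facts, each typed by the concept it instantiates.
g.add((EX.doc1, RDF.type, EX.SalaryRecord))
g.add((EX.doc2, RDF.type, EX.PressRelease))

# Meaning: a classification per concept, not per document.
g.add((EX.SalaryRecord, EX.classification, EX.Restricted))
g.add((EX.PressRelease, EX.classification, EX.Public))

def publicly_visible(graph):
    """Policy: a fact is visible only if its concept is classified Public."""
    return [str(row.doc) for row in graph.query("""
        PREFIX ex: <http://example.com/>
        SELECT ?doc WHERE {
            ?doc a ?concept .
            ?concept ex:classification ex:Public .
        }
    """)]

print(publicly_visible(g))   # ['http://example.com/doc2']

# New policy: press releases are embargoed. One edit to the meaning
# layer takes effect immediately, everywhere the policy query runs.
g.set((EX.PressRelease, EX.classification, EX.Restricted))
print(publicly_visible(g))   # []
```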
Next up: the vast landscape of data warehouses, marts, shares, reporting systems, and the like. The facts are there, but what do they mean? Data agility creates the capability to quickly construct individualized lenses on shared facts – what those facts mean to each user.
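A sketch of such a lens, using the same hypothetical vocabulary as above: two audiences read the same shared fact, each through its own labels, without copying the data.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.com/")  # hypothetical namespace
g = Graph()

# One shared fact.
g.add((EX.acmeCorp, EX.netAdds, Literal(1200)))

# Two hypothetical lenses: the same element, labeled per audience.
g.add((EX.netAdds, EX.financeLabel, Literal("Net new billed accounts")))
g.add((EX.netAdds, EX.marketingLabel, Literal("New customers won")))

def through_lens(graph, label_property):
    """Read every fact through one audience's labels."""
    return graph.query("""
        SELECT ?entity ?label ?value WHERE {
            ?entity ?metric ?value .
            ?metric ?labelProp ?label .
        }
    """, initBindings={"labelProp": label_property})

for row in through_lens(g, EX.financeLabel):
    print(f"{row.entity}: {row.label} = {row.value}")
```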
Perhaps the next best target is the burgeoning investment in analytical tools and platforms. Again, there is no shortage of facts for people to analyze – but is there a shared and trusted understanding of what those facts mean in context?
Shifting to specific industries, once again we have a deep well of fascinating use cases.
Any case where organizational knowledge itself underpins the product or service is also a strong candidate.
And we often see both together.
Think of intelligence agencies, security, fraud, and other threat-detection functions; or of financial services, life sciences, aerospace, and other areas where innovation and risk must be balanced.
There is no shortage of organizations, in the private and public sectors alike, that want to get much better at interpreting facts and what they mean. They want to be able to make simple and powerful changes to how information is interpreted and acted on.
They want data agility.
The Reality in Larger Organizations
The reality is familiar: someone sees that there might be a better way to do things, but enormous friction pushes it into the category of “can’t get done” – at least for now.
Perhaps someone will try to integrate a “solution” around the existing pillars, and that fails to reduce the friction as well.
The historical evidence is that the new model – a semantic data platform – is almost always introduced into a compelling use case where other approaches have been tried and have failed. By keeping active data, metadata, and meaning together, it delivers outsized results in a surprisingly short amount of time.
People notice and are impressed. Another use case follows, and then another. After a while, a footprint emerges: different functions and groups assembling and reusing organizational knowledge to create new business processes, easily sharing digital facts – and what they mean – among themselves.
New patterns of information management and knowledge sharing start quickly replacing old ones. New things are now possible, as substantial friction has been removed.
Some might think this is a technology argument – that there is now a better way to manage both data and its various interpretations – and that would be correct. But it is also a leadership argument: any organization needs a better way to manage facts and what they mean.
What Next?
If you are on the front lines of helping to shape modern digital business models, you will want to know that there are new ways of doing things.
We’ve talked about the importance of facts and what they mean. Connecting active data, active metadata, and active meaning creates data agility. Data agility is the ability to make simple, powerful, and immediate changes to any aspect of how information is interpreted and acted on.
As a result, the early adopters now manage information in a way that is new and very different from that of their peers.
This new way could be described as knowledge-centric vs. data-centric. In their eyes, what is known about the information becomes perhaps more important than the information itself.
Going forward, we’ll spend some time looking at the inner workings of a semantic database platform, and how it is substantially different from familiar data, metadata, and semantic tools. You’ll see familiar concepts, just used in a new way.
We also want to spend some time on organizational impact, and lessons learned. Anytime a new way of doing things is introduced into an organization, heavy lifting is required.
Finally, there are wonderful implications for current and future digital business models, which are very much worth exploring.
Learn More
Download our white paper, Data Agility with a Semantic Knowledge Graph
Jeremy Bentley
Jeremy Bentley is the founder of Semaphore, creators of the Semaphore semantic AI platform, and joined MarkLogic as part of the Semaphore acquisition. He is an engineer by training and has spent much of his career solving enterprise information management problems. His clients are innovators who build new products using Semaphore’s modeling, auto-classification, text analytics, and visualization capabilities. They span many industries, including banking and finance, publishing, oil and gas, government, and life sciences, and have in common a dependence on their information assets and a need to monetize, manage, and unify them. Prior to Semaphore, Jeremy was Managing Director of Microbank Software, a New York-based fintech firm acquired by SunGard Data Systems. Jeremy has a BSc with honors in mechanical engineering from Edinburgh University.