Why you need data guardrails, not guidelines [Q&A]

Often described as the lifeblood of an organization, data drives business operations and decision-making. But while the raw data itself is valuable, it’s the intelligence and insights that can be gleaned from it that truly fuel innovation and growth. This vital intelligence is the foundation on which organizations build long-term strategies, optimize processes, and identify new opportunities.

However, with IoT and AI generating data at an unprecedented rate, many large enterprises now find their data lakes and warehouses overflowing with untapped potential.

We spoke to Ugo Ciracì, Data Strategist & BU Lead at Agile Lab, about the main challenges preventing enterprises from managing this data, and how data guardrails and data mesh can help them use it more effectively to gather meaningful insights.

BN: What challenges are enterprises facing when trying to leverage their most valuable asset: their data?

UC: Driving value from data is a key objective for most modern businesses, but in the rush to leverage this valuable asset, many organizations struggle to enforce effective governance frameworks. Without these capabilities in place, ensuring data is produced and consumed in accordance with internal standards for quality, integrity, architecture, compliance, and security can prove impossible.

Digging deeper into the underlying issues often reveals that datasets are hosted in disparate systems, formats, and locations. As a result, consolidating, processing, and applying consistent governance standards becomes an even more complex and laborious task, often reliant on manual intervention, which is both inefficient and expensive.

In attempting to overcome these hurdles, many organizations have invested heavily in expensive data management systems and analytics platforms. However, this can bring additional problems, ranging from increased data duplication and inconsistent technical standards to a lack of clarity over the integrity and location of data assets.

This means that extracting meaningful intelligence from data often requires an expensive combination of sophisticated tools and highly skilled staff. Add to this growing data silos and rising costs, and it becomes clear that organizations need a new approach to take control of their data.

BN: What is computational governance?

UC: What’s required instead is a reliable approach to identifying and implementing enterprise-wide governance rules that remain in place throughout the data lifecycle, irrespective of where the data resides.

Known as ‘computational governance’, this approach provides organizations with a set of data ‘guardrails’ so they can work with a consistent governance framework. Instead of the traditional data management approach, which creates, copies, and moves data around, computational governance enforces internal standards and security controls while also empowering users by expediting data discovery and project development.

By overseeing existing data tools and technologies instead of replacing them, it allows organizations to leverage their existing data silos without additional consolidation. The use of customizable guardrails ensures that every data-related project complies with the relevant standards at all times, so data can’t reach production environments until predefined policies are satisfied.
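
To make the idea concrete, here is a minimal, hypothetical sketch of such a policy gate in Python. The product fields, policy list, and deployment_gate function are illustrative assumptions rather than any particular platform’s API; the point is simply that a dataset cannot be promoted to production until every predefined check passes.

```python
# Hypothetical guardrail gate: names and fields are illustrative, not a real platform API.
from dataclasses import dataclass


@dataclass
class DataProduct:
    name: str
    owner: str                # accountable domain team
    pii_encrypted: bool       # personally identifiable data encrypted at rest
    schema_registered: bool   # schema published to the enterprise catalogue
    quality_score: float      # share of records passing validation checks


# Each guardrail pairs a human-readable policy with a predicate over the product.
POLICIES = [
    ("An accountable owner must be assigned", lambda p: bool(p.owner)),
    ("PII must be encrypted at rest", lambda p: p.pii_encrypted),
    ("Schema must be registered in the catalogue", lambda p: p.schema_registered),
    ("Quality score must be at least 0.95", lambda p: p.quality_score >= 0.95),
]


def deployment_gate(product: DataProduct) -> list[str]:
    """Return the list of policy violations; deployment proceeds only if it is empty."""
    return [policy for policy, check in POLICIES if not check(product)]


if __name__ == "__main__":
    candidate = DataProduct("orders_daily", owner="sales-domain",
                            pii_encrypted=True, schema_registered=False,
                            quality_score=0.97)
    violations = deployment_gate(candidate)
    if violations:
        print("Blocked from production:")
        for violation in violations:
            print(" -", violation)
    else:
        print("All guardrails satisfied; promoting to production.")
```

In this toy run the missing schema registration blocks the release; once that check passes, promotion can proceed without manual review.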

BN: What are the main benefits for enterprises?

UC: Computational governance delivers a range of important benefits for today’s data-centric organizations. With legacy approaches, employees typically spend a significant amount of time on non-revenue-generating activities, such as locating data and validating its integrity, before they can proceed with more productive work.

In other situations, organizations try to build their own governance platforms -- a process that can be extremely expensive and take years to complete. Instead, with the right support, today’s most effective computational governance approaches can be implemented quickly and effectively with minimal disruption to existing operations.

By creating data governance specifications, policies, and standards, teams can start new projects or incorporate existing ones with much greater speed and effectiveness than with traditional processes. In addition, intelligent templates help automate data-related technologies and processes to dramatically improve the user experience -- reducing project delivery timescales from years to months.

In the modern business environment, these capabilities are key to agility, radically speeding up responsiveness to new opportunities or changes in the economic outlook. Computational governance also aligns closely with the concept of data mesh, which aims to further enhance agility by distributing data ownership to the relevant business domains.

BN: What is data mesh?

UC: Data mesh is a model that advocates a self-service approach to accelerate decision-making, based on the view that domain experts are best placed to understand their own data. It is founded on four key principles: Domain-Oriented Ownership, Self-Service Data Infrastructure as a Platform, Federated Computational Governance, and Data-as-a-Product.

The objective of data mesh is that data should work for the user, rather than the user having to wrestle with technical complexity. As such, it is both revolutionary for the results it provides and evolutionary, as it leverages existing technologies and is not bound to a specific underlying approach.
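
As a rough illustration of the data-as-a-product principle, the sketch below shows what a self-describing data product descriptor might look like. The structure and field names are assumptions for illustration only, not a standard data mesh schema; the common thread is that each domain publishes its data with explicit ownership, documentation, and consumer-facing service levels that a self-service platform can then act on.

```python
# Hypothetical data product descriptor; field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class OutputPort:
    name: str        # how consumers address the dataset
    format: str      # e.g. "parquet", "delta", "kafka-topic"
    schema_ref: str  # pointer to the published schema or contract


@dataclass
class DataProductDescriptor:
    domain: str                    # owning business domain (domain-oriented ownership)
    name: str
    description: str
    owner_team: str                # accountable domain team, not a central data group
    sla_freshness_hours: int       # part of the consumer-facing contract
    output_ports: list[OutputPort]


orders = DataProductDescriptor(
    domain="sales",
    name="orders_daily",
    description="Daily order facts, cleansed and deduplicated",
    owner_team="sales-analytics",
    sla_freshness_hours=24,
    output_ports=[OutputPort("orders_daily_v1", "parquet",
                             "catalog://sales/orders_daily/v1")],
)
```

A federated computational governance layer could then evaluate guardrails like those in the earlier sketch against every descriptor, while the owning domain remains free to evolve the product itself.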

BN: How does computational governance support a data mesh approach?

UC: For organizations adopting this approach, a computational governance platform will address the consistency and control issues that can impede the adoption of a mesh model.

Computational governance also ensures that the compliance and security standards required to implement the data mesh approach meet relevant regulations and legislation while mitigating legal, financial, and reputational risks. It does this while maintaining the organization’s architecture framework, facilitating system interoperability and integration.

Ultimately, computational governance delivers the guardrails that organizations need to break down data silos and gives domain experts the autonomy to unlock the full potential of their data assets. Applied in conjunction with a self-service approach, it enables users to drive innovation and agility more effectively, and to adapt quickly to evolving business needs.
