
10 recommendations for a successful enterprise data mesh implementation

Thoughtworks has been implementing data mesh since it was first introduced by Zhamak Dehghani, a Thoughtworker at the time, in 2019. Since then, we’ve implemented data mesh with clients globally.

 

The following is a set of 10 recommendations based on insights gained from our experience. For each, we remark upon observed anti-patterns, the approach we recommend instead, and why. The recommendations are ordered by levels in an organization, from top to bottom.

 

Recommendation #1: Bottom-up-only approaches don’t work — get top-down buy-in early

 

Management and C-level buy-in for data mesh can be challenging to secure. This often leads data mesh evangelists to attempt to implement data mesh bottom-up, building data products within a single domain that specifically wants data mesh.

 

We have, however, seen that these domains and data products encounter an impassable barrier when attempting to expand data mesh to other departments, who are skeptical about the entire approach. Crossing departmental boundaries and implementing shifts which change priorities, funding, roles, and responsibilities is very difficult to do in a bottom-up manner. It can sometimes elicit slightly awkward moments when someone asks “who are you, and why are you telling me what to do?” 

 

Platform teams sometimes face the same issue. They should be the enablers and coaches on best practice: without top-down support, they cannot change data product teams’ ways of working, team setups or even implement a unified set of data governance policies. Data product teams encounter a similar barrier when they attempt to persuade a non-data mesh team to give them access to data or assistance with interpreting data. Scaling data mesh requires a top-down mandate in order to create agreement and alignment among parties with different agendas and interests.

 

Both top-down and bottom-up buy-in is required: excited teams that are willing to change their way of working from the bottom-up and leadership at the top that supports that change.

 

Recommendation #2: Start with the operating model

 

Despite data mesh requiring changes in both operating model and technology, the operating model is often sidelined because it’s too difficult to change. As a result, organizations often attempt a technology-first approach to data mesh. This technology-first approach, while improving technological practices, often results in failure within the first year. This occurs because the structures required to support the scaling of a data mesh haven’t been adequately changed to accommodate new ways of working. 

 

While it’s true that changing the operating model can be difficult, it’s also critical. It should be done on day one of a data mesh initiative.

 

Data mesh isn’t a project; it’s an enterprise program. It requires support from others in the organization because it affects how teams collaborate with other teams. This means high-level sponsorship and buy-in from the top is needed to ensure organization-wide alignment. It also requires a certain level of change management, such as creating various governance bodies, redefining roles and responsibilities and upskilling the organization. A transformation office can help here; by bringing people with knowledge and experience with the operating model and the technology together it can ensure organizational alignment.

 

In summary, tackling an operating model change can be daunting but it is an essential aspect of successfully implementing data mesh within an organization. Do it early.

 

Recommendation #3: Define domains in a way that represents organizational domain objects and optimizes for efficiency and communication 

 

Domain ownership is a key principle in data mesh. It’s important because it ensures that each data product in the data mesh is owned by someone who has expertise in that specific area of the organization. The benefit is that it makes data products more useful to those who might want to use them — it removes potential confusion about what certain terms mean in given data fields and helps mitigate inconsistencies. In other words, if something doesn’t make sense, the domain owner is best placed to amend it or provide the necessary context.

 

Of course, this isn’t without challenges; defining the boundaries of organizational domains — and who owns what — is difficult. This difficulty is often caused by predefined budget and reporting lines, or by underlying political undercurrents. Because of this, at the beginning it might be easiest to define domains along the boundaries of existing functions. You can then split domain boundaries that become too complex, or reassign them as the project progresses and new information is surfaced. Better yet, start with one domain and explore outwards over time as an iterative process. Organizations already working with domain-driven design in their operating model have an easier time making this operational shift.

 

There are, in the end, multiple ways to define a domain. What’s important is that these domains make sense in the context of the organization, are documented, and make communication and processes faster and more efficient. Some examples could be:

 

  • Along existing business units or functions

  • Along business outcomes (goals such as “increase profits”, “increase customer satisfaction”)

  • Along value streams (initiatives which deliver value or outcomes to customers)

  • Use the Inverse Conway Maneuver (this is where teams are structured according to the desired architecture rather than letting existing communication paths and structures shape it) 

  • Use domain-driven design

 

In summary, domain definitions allow organizations to identify the owners and experts of data and to optimize for efficient lines of communication and collaboration. Domain definitions are unique to every organization, and while the first attempt can be difficult, start with the method that is easiest to adopt in your organization and evolve it as familiarity with the ways of working and responsibilities grows.

 

Recommendation #4: Develop the operating model, data governance and platform together

 

The platform brings the roles, responsibilities and ways of working defined by the operating model — and the policies defined by data governance — to life.

 

One anti-pattern that we often see is that the operating model and data governance are separate, isolated projects, each of which define a collection of documentation which is thrown over the wall to an IT department to implement. Here it may be helpful to think of “data mesh” as a product, where the operating model, data governance and platform have the same underlying goals, hypotheses, initiatives and backlog. With this unified approach, the implementation of the operating model and data governance concepts are tested and improved iteratively through data products, which are enabled by the platform. 

 

We recommend that the first chosen use cases should define operating model and data governance policies as part of the requirements. An example requirement:

 

“When a data product team creates a data product, it is automatically registered in the data catalog along with a description of its input ports, output ports, schemas, SLOs, domain and domain owners, in order to increase transparency of the data product within the organization.”

 

The platform can then implement such a requirement after which the relevant operating model and data governance offices can monitor the impact and performance of such a policy in the data mesh ecosystem (and whether its performance lives up to its defined measures of success).
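To make the idea of such a platform-implemented policy concrete, here is a minimal Python sketch of an automatic registration hook. All names — the descriptor fields, `register_in_catalog`, the example ports and SLOs — are illustrative assumptions, not the API of any particular catalog tool:

```python
from dataclasses import dataclass, field, asdict

# Hypothetical descriptor for a data product; the fields mirror the
# example requirement (ports, schemas, SLOs, domain, owner) rather
# than any specific catalog product's schema.
@dataclass
class DataProductDescriptor:
    name: str
    domain: str
    domain_owner: str
    description: str
    input_ports: list = field(default_factory=list)
    output_ports: list = field(default_factory=list)
    slos: dict = field(default_factory=dict)

def register_in_catalog(descriptor: DataProductDescriptor, catalog: dict) -> None:
    """Simulate the platform hook that registers a newly created data
    product in the data catalog automatically, as a side effect of
    creation rather than a manual step performed later by the team."""
    catalog[descriptor.name] = asdict(descriptor)

# In-memory stand-in for the catalog; a real platform would call the
# catalog tool's API here.
catalog = {}
register_in_catalog(
    DataProductDescriptor(
        name="customer-orders",
        domain="sales",
        domain_owner="sales-data-lead@example.com",
        description="Curated order events for analytics",
        input_ports=["kafka://orders-raw"],
        output_ports=["s3://mesh/sales/customer-orders"],
        slos={"freshness_minutes": 60, "completeness_pct": 99.5},
    ),
    catalog,
)
```

The point of the sketch is the shape of the policy, not the code: registration is wired into the creation path, so governance can later measure whether every product in the mesh carries this metadata.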

 

Recommendation #5: Establish a good platform early

 

The notion of a self-serve data platform is one of the four principles of data mesh; it is the technical implementation of the decisions made about baseline data product capabilities, domain definitions and governance policies. The platform offers capabilities to data product teams in a self-service way so they don’t need to wait for the platform team to create resources and integrations on an ad-hoc basis.

 

With the data mesh approach, the platform team is no longer responsible for maintaining and transforming data for analysts to consume, but instead responsible for shaping and building data product offerings (self-contained deployable packages) that data product teams can request and use to maintain their data.  

 

These data product offerings are what is called an “architectural quantum” (as defined in Building Evolutionary Architectures). They contain everything a data product team needs to build their data product: base capabilities for data ingestion, storage, distributed compute for data transformation, integration points with the data catalog tool, monitors in the data quality tool and governance policies. Sometimes there are multiple offerings for different types of data products. These guardrails and templates make it as easy as possible for teams to deliver data products.
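As an illustration, the sketch below models a data product offering as a self-contained bundle of capabilities that the platform provisions in one go. The capability names, templates and `provision` function are hypothetical, standing in for whatever automation a real platform would provide:

```python
# Hypothetical "data product offerings": each one bundles every
# baseline capability a data product team needs, so requesting an
# offering yields a complete, self-contained unit (the quantum).
BATCH_OFFERING = {
    "ingestion": "object-store-landing-zone",
    "storage": "lakehouse-table",
    "compute": "spark-job-template",
    "catalog": "auto-registration-hook",
    "quality": "default-freshness-and-volume-monitors",
    "governance": "pii-masking-policy",
}

# A second offering for a different type of data product reuses the
# bundle but swaps the ingestion and compute templates.
STREAMING_OFFERING = {
    **BATCH_OFFERING,
    "ingestion": "event-stream-topic",
    "compute": "stream-processor-template",
}

def provision(offering: dict, product_name: str) -> list:
    """Return the resources the platform would create for a new data
    product — one per bundled capability, never a partial subset."""
    return [
        f"{product_name}/{capability}:{template}"
        for capability, template in offering.items()
    ]

resources = provision(BATCH_OFFERING, "customer-orders")
```

The design point is that a team requests one offering and receives all six capabilities together, rather than filing separate ad-hoc requests for storage, compute, catalog integration and so on.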

 

As self-serve data platforms are the main enabler of data mesh, they need to be established early. A common mistake is for organizations to start data product teams without a basic platform. These initial data product teams are stuck for months at a time, waiting for the platform to develop baseline capabilities, which puts a strain on ongoing initiatives. At the other end of the spectrum, some organizations spend years attempting to build the perfect platform which takes too long to deliver value and is too difficult to maintain and operate.

 

The right answer is somewhere in the middle: a “good platform” is one built upon requirements that were determined by researching what data product teams actually need. A well-researched self-service platform plays a significant role in reducing the friction that can occur when creating data products. 

 

To avoid spending too much time building the perfect platform, we recommend starting by defining a set of core MVP capabilities that are “just good enough” to get data product teams moving quickly and delivering value to the organization, then continuing to build and scale iteratively with learnings from new data product teams and domains.

 

Recommendation #6: Reuse your existing technologies in the new data mesh platform

 

Many clients at the early stage of data mesh are demoralized by the sheer number of technologies and tools required to power it. The truth is that while there are some tools that might fit the requirements of data mesh better than others, it’s best to leverage your existing stack, licenses and expertise where it makes sense. You can then add custom layers to improve developer experience, and use new tools to fill any remaining capability gaps.

 

Note that “reusing existing technologies” does not mean “reuse an existing or older platform”. Existing or older platforms may not be compatible with the data mesh approach because data mesh requires a critical paradigm shift in the way that the platform is built. Additionally, reusing older platforms that don’t conform to the data mesh approach can lead to additional complexity, increasing costs and slowing you down.

 

We suggest taking an inventory of existing technologies and comparing them to the required capabilities of a data mesh self-serve data platform and your data product requirements. Reuse the tools that fit those requirements and are mature enough to be used via self-service (for example, APIs are available, or resources can be defined declaratively for Infrastructure as Code).
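One way to run this inventory exercise can be sketched as follows; the tool names, capability list and maturity criteria below are made up for illustration, not a prescribed checklist:

```python
# Illustrative inventory: compare existing tools against the
# capabilities a self-serve platform needs, keeping only those mature
# enough for self-service use.
REQUIRED_CAPABILITIES = {"data catalog", "data quality", "orchestration", "storage"}

# Hypothetical current stack, with the two self-service signals the
# text mentions: an available API, or declarative definition for IaC.
existing_tools = [
    {"name": "legacy-catalog", "capability": "data catalog",
     "has_api": False, "declarative": False},
    {"name": "quality-suite", "capability": "data quality",
     "has_api": True, "declarative": True},
    {"name": "scheduler-x", "capability": "orchestration",
     "has_api": True, "declarative": False},
]

def self_service_ready(tool: dict) -> bool:
    # A tool qualifies for reuse if it can be driven via API or
    # defined declaratively for Infrastructure as Code.
    return tool["has_api"] or tool["declarative"]

reusable = [t["name"] for t in existing_tools if self_service_ready(t)]
covered = {t["capability"] for t in existing_tools if self_service_ready(t)}
gaps = REQUIRED_CAPABILITIES - covered
```

Here the quality and orchestration tools would be reused, while the catalog (no API, no declarative support) and the missing storage capability surface as gaps to fill with custom layers or new tools.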

 

Recommendation #7: Move slowly and start small with use cases

 

The number one reason data mesh projects fail is trying to scale too fast. Instead, give your organization the time to learn and adjust to the change. This initial patience will really pay off in the long run.

 

We recommend starting with a use case comprising two to three data products, developed across three dimensions — operating model, product and tech — over the course of six months.

 

Understandably, some organizations believe this approach to be “too conservative”. They may want to adopt a more aggressive approach to onboard several use cases at once at the beginning. We have found that it typically takes six months to bootstrap the first data products under the above three dimensions. Once that period is complete, data product owners need to be trained and a platform team needs to be assembled to begin to build out new capabilities for the platform based on learnings. New templates and ways of working are also needed to shift the organization from a centralized to decentralized model. A data mesh governance board will also need to be established with a roadmap to onboard additional domains.

 

Simply put: Don’t run before you can walk. There will be plenty of learnings to take away on the journey. Don’t miss them.

 

Recommendation #8: Be deliberate about your first use cases 

 

Choosing your first set of use cases can be daunting, but it’s important to remember there is no single right answer. Every organization is different. The decisions you make will depend on what you want to optimize for and your organization’s risk appetite.

 

For example, some clients choose highly urgent, complex use cases. They’re often eager to address inefficiency and internal politics that are causing pain in the organization. Others have gone the more conservative route by picking a simple, isolated use case to test the organizational appetite for data mesh.

 

Other clients, meanwhile, choose to optimize for building out a diverse set of platform capabilities. A mature data mesh platform provides capabilities for batch data processing, [near] real-time data processing, analytics, AI/machine learning (ML) and data governance. Choosing use cases that address each of these capabilities sets the precedent (or priority) to build the groundwork for all platform capabilities in parallel.

 

Another benefit to this approach is that use cases that require more than one capability (such as ML and batch capabilities, a common dependency) become candidates for the first use cases. This approach, however, requires a large and strong platform team that can handle product thinking and the complexity of integrating capabilities in a compatible and interoperable way. 

 

While there are many approaches and ways to optimize, our recommendation is that the first chosen use case should be:

 

  • Manageable given the organization’s existing capabilities

  • Tied to business goals with clear metrics of success

  • Attainable

 

It’s easy to become consumed by the choice of use cases due to internal politics and over-optimization, but there is often not one perfect use case. A common pitfall to avoid is paralysis by analysis: timebox the exercise and start with “good enough”.

 

Recommendation #9: Onboard data products onto the platform AND operating model’s governance structures

 

An anti-pattern that we see in IT-driven organizations is that the onboarding of data product teams stops at onboarding them onto the platform. In reality, it is also critical to onboard them to the organizational processes set up by the operating model.

 

Proper onboarding to the operating model allows data product teams to be represented in various forums to influence important activities, such as feature prioritization. Onboarding also involves adding them to the right communication channels so that they don’t miss out on important information about new features, releases, and learning opportunities. 

 

In the long run, teams that are not onboarded to the operating model might not be aligned with the principles of the wider data mesh ecosystem. This could lead to discontent and frustration from all sides.

 

Recommendation #10: Be committed

 

Data mesh implementations within an organization sometimes require large changes over time that affect many people, existing departments and decision-making processes. This can prove difficult when some parts of an organization are resistant to change. 

 

It might be argued that the problems faced by organizations that change quickly — like scale-ups, where experimentation is valued and there is a more relaxed attitude to data access policies — and by more established, mature enterprises — with greater centralization and more stubborn legacy silos — are different, and so should be treated differently. While there is an element of truth to this, the reality is that either way, if an organization wants to be successful, it needs to be willing to commit to the change in terms of implementation and resources.

 

Our most successful data mesh adopters have done the following:

 

  • Obtained dedicated top-level sponsorship early on

  • Made data mesh part of the organization’s identity, with everyone willing to adopt the practices and mindset of data mesh via a data-first, company-wide initiative

  • Dedicated the time to learning the new approach and changing their processes

  • Quickly moved the right people to the right places and quickly invested where there were gaps

  • Accepted and implemented a decentralized model

  • Aligned the organization with domains (if it wasn’t already)

 

Organizations that have made up their mind to go all-in achieve value with data mesh faster (and more cheaply). That’s why a full commitment to data mesh is required for efficiency and success in the shift.

 

Summary

 

A data mesh initiative could bring innovation and positive impact to your organization but it requires dedication and commitment to implementing it properly. Change can be challenging but with the right approach and the right process it’s possible to overcome growing pains to make a success of data mesh.

 

Do you want to learn more about how to bring data mesh to your organization or evaluate whether you are ready? Get in touch with us.