
The Model Context Protocol: Getting beneath the hype

The Model Context Protocol (MCP) has caught the attention of the AI and software development communities in recent months. In this blog post, Karrtik Iyer explains what it is, why it matters and some of the drawbacks that are often overlooked by those peddling hype...

A major challenge in AI development is connecting sources of data to LLMs. While, as we wrote recently, organizations are becoming more aware of the value of unstructured data — documentation, content repositories, among many other things in the corporate “junk drawer” — building a bridge between those data sources and AI models can prove difficult. This is the context that has led to significant industry interest in something called the Model Context Protocol (MCP).

 

But what is MCP? How does it work? And are there alternatives? In this blog post we offer a perspective on a topic that’s been dominating conversation in AI and software development circles in 2025.

What is the Model Context Protocol and how does it work? 

 

Released in November 2024 by Anthropic, the Model Context Protocol is an open standard that standardizes the way developers build interactions between AI models (such as Claude Sonnet, DeepSeek and GPT-4) and external data sources and tools.

 

One metaphor that seems to have stuck, used by Anthropic in its documentation, is that MCP is “the USB-C for AI”. While the metaphor is imperfect, it nevertheless gives a broad sense of the role MCP can play in AI development.

 

The protocol follows a client-server architecture, using JSON-RPC 2.0 messages to establish communication between AI systems and data sources. Anthropic originally developed it as an internal tool to enhance its Claude models’ ability to interact with external systems. However, the organization opened it up to encourage wider adoption and to standardize AI-to-tool communication.
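To make the wire format concrete, here is a sketch of what an MCP-style JSON-RPC 2.0 exchange looks like. The method name (`tools/call`) follows the MCP specification, but the tool name and arguments below are hypothetical, invented purely for illustration.

```python
import json

# A JSON-RPC 2.0 request asking an MCP server to invoke a tool.
# The method name ("tools/call") comes from the MCP spec; the tool
# name and arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_knowledge_base",   # hypothetical tool
        "arguments": {"query": "refund policy"},
    },
}

# A matching JSON-RPC 2.0 response, correlated with the request by "id".
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "Refunds are issued within 14 days."}],
    },
}

# Messages are serialized as JSON on the wire (stdio or HTTP transport).
wire = json.dumps(request)
print(json.loads(wire)["method"])  # → tools/call
```

The request/response correlation via `id` is standard JSON-RPC 2.0; MCP layers its own methods and payload shapes on top of that base protocol.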

 

Prior to MCP, developers would typically have to create custom connectors for each individual data source. That meant fragmented and redundant integrations were unfortunately not uncommon, making AI products and models more difficult and time-consuming to build.

The Model Context Protocol in practice

 

To understand the benefits of MCP and appreciate why it is receiving so much industry hype, it’s worth exploring how it actually works in an applied scenario. Imagine, for instance, a customer service AI assistant. To be most effective, it needs access to a range of data sources — the more data connected to the assistant, the more context and information it will have to respond to customer queries.

 

MCP helps here by making the process of connecting a company’s internal knowledge base, CRM system and, say, email system, much faster for the company’s development team. 

 

The range of applications is huge. At Thoughtworks, we’re particularly interested in how MCP can help accelerate and improve AI assistance in the software development lifecycle:

 

  • Document interaction. AI assistants can work directly with local files: creating, reading and editing them seamlessly.

  • Code development and review. MCP can power systems that fetch PR details from GitHub, analyze code changes, generate review summaries, and save reviews to platforms like Notion.

  • Cloud service integration. Specialized MCP servers for AWS services allow AI assistants to run Lambda functions, access documentation, implement CDK best practices and analyze costs.

What are the benefits of MCP?

 

The fundamental benefit of MCP is interoperability: it simplifies integration across diverse systems, replacing custom implementations with a standardized approach.

 

In turn, though, that leads to a number of downstream advantages.

 

  • It helps you scale AI projects faster and more easily. The unified connection model makes it easier to scale AI-powered applications across multiple data sources.

  • It gives AI systems enhanced contextual understanding. By enabling AI models to access relevant data, MCP dramatically improves the quality and relevance of responses.

The role of MCP servers

 

One of the most common ways MCP is implemented is using something called an MCP server. MCP servers are lightweight programs that expose specific capabilities through the standardized Model Context Protocol. If we think of MCP as, at its core, a standard or definition for how to do something, an MCP server is a kind of tool that helps you actually do it. 

 

There's a growing ecosystem of MCP servers for various systems, including AWS services, Azure DevOps and Atlassian products like Confluence and Jira, among many others.
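To illustrate what an MCP server actually does, here is a toy, in-process sketch in plain Python — deliberately not the official SDKs, which real servers should use, and with a hypothetical `read_file` tool. It only shows the shape of the protocol: the server advertises capabilities via `tools/list` and executes them via `tools/call`.

```python
import json

# A toy sketch of an MCP-style server. Real servers use the official
# SDKs and a stdio/HTTP transport; this only illustrates the dispatch.
TOOLS = {
    "read_file": {  # hypothetical tool
        "description": "Read a UTF-8 text file and return its contents.",
        "handler": lambda args: open(args["path"], encoding="utf-8").read(),
    },
}

def handle(raw: str) -> str:
    """Dispatch one JSON-RPC 2.0 request and return the JSON response."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        # Advertise available tools so the client can discover them.
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":
        # Execute the named tool with the supplied arguments.
        tool = TOOLS[req["params"]["name"]]
        text = tool["handler"](req["params"].get("arguments", {}))
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "Method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

The key idea is discoverability: a client doesn’t need to be hard-coded against each tool — it can ask the server what it offers and call tools generically.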

 

It’s worth noting, though, that you don’t necessarily need an MCP server. For simpler AI projects with clearly defined context (perhaps local-only or single-language deployments), an MCP server will likely be overkill; a direct API integration may be all you need. However, when things become more complicated — whether you’re adding heterogeneous sources of data, running multiple agents, or running them across a distributed system (e.g. across containers or cloud environments) — MCP servers come into their own.

With MCP we have patterns specifically designed for AI agents, with the potential to enable a whole new class of autonomous systems that can dynamically discover, learn about and interact with enterprise resources without human intervention.
Karrtik Iyer
Director of Data Science and AI, Thoughtworks

Are there any drawbacks and risks with the Model Context Protocol?

 

MCP has many advantages, and the attention it’s received across the industry is a signal that it offers developers something genuinely valuable in AI development. However, as already mentioned, it isn’t always the right option: for simple, straightforward projects, a direct API call might well be enough.

 

That’s not all though. There are some other risks and drawbacks it’s important to be aware of. 

 

  • Security vulnerabilities. MCP servers can be targets for attacks, potentially leading to data breaches. Community-built servers may be untested and should be used at your own risk.

  • The need for human oversight. MCP includes a sampling mechanism where "the server can request the host AI to generate a completion based on a prompt." Anthropic suggests sampling requests require human approval.

  • MCP isn’t a replacement for RAG (retrieval-augmented generation). It’s a common misconception that MCP eliminates retrieval challenges; it doesn’t. Implementing retrieval mechanisms, in the form of various RAG techniques, remains necessary.

  • It makes it very easy to offload application logic to an LLM. While getting an LLM to manage the functionality of an application might be tempting, there are risks: doing so can reduce your application's value and your ownership and control of it. While it’s great that MCP makes this easy, don’t let complacency lure you into a damaging antipattern.

  • You need to remain mindful of data governance and privacy. As above, MCP might make connecting data to an LLM easy, but just because you can doesn’t mean you should. It’s vital to remain mindful of which sources of data are being passed to a third-party LLM.

  • It’s only as open as Anthropic decides it should be. While having a major backer on the one hand strengthens MCP, it also means the AI ecosystem becomes dependent on Anthropic to continue to support it as an open standard. What if that changes?
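The human-oversight point above can be sketched in code. This is a toy illustration of the approval gate Anthropic suggests for sampling requests, not the MCP SDK; `approver` and `call_llm` are hypothetical stand-ins for a human review step and an LLM client.

```python
# A toy sketch of human-in-the-loop approval for MCP "sampling":
# the server asks the host to run an LLM completion, and the host
# only proceeds if a reviewer approves the prompt.
def handle_sampling_request(prompt, approver, call_llm):
    """Forward a server-initiated completion request only if approved."""
    if not approver(prompt):          # human (or policy) reviews the prompt first
        return {"error": "sampling request rejected by operator"}
    return {"completion": call_llm(prompt)}

# Illustrative wiring with stubs: a policy that rejects destructive
# prompts, and a fake LLM that echoes its input.
auto_approve = lambda prompt: "delete" not in prompt.lower()
fake_llm = lambda prompt: f"[completion for: {prompt}]"

print(handle_sampling_request("Summarize this support ticket", auto_approve, fake_llm))
print(handle_sampling_request("Delete all records", auto_approve, fake_llm))
```

In a real deployment the approver would surface the prompt to an operator (or apply an explicit policy) rather than string-matching, but the control point is the same: the host, not the server, decides whether a completion runs.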

 

Related to that final point, it’s important to remember that MCP is still a very new standard. Developers need to spend time familiarizing themselves with it and the implications of using it. Practices and implementations will evolve, of course; it’s essential developers remain on top of those changes.

The Model Context Protocol and the future of AI architecture patterns

 

One critical aspect of MCP that isn’t widely discussed is its potential to fundamentally reshape enterprise architecture patterns for AI integration. Most current discussions focus on the technical protocol details or immediate benefits. However, it’s important to recognize that MCP represents a significant shift toward “AI-native” architecture patterns, the implications of which are yet to be fully understood.

 

Traditional enterprise integration has followed patterns that are optimized for human-driven applications. However, with MCP we have patterns that are instead specifically designed for AI agents, with the potential to enable a whole new class of autonomous systems that can dynamically discover, learn about and interact with enterprise resources without human intervention. This shift will require engineers and others to rethink governance, security models and system boundaries.

A significant step in a rapidly-evolving field

 

As with any widely-discussed technology, in recent weeks there have been murmurs across the internet that MCP is over-hyped. Naysayers suggest that it’s not really that groundbreaking or sophisticated and that much of the noise about it stems from a successful Anthropic marketing campaign. While caution about hype is understandable, of course, MCP is proving useful — connecting data sources to AI is a challenge and simplifying and accelerating that process is undoubtedly valuable.

 

If it’s making an impact on how AI developers work now — making them more productive and more innovative — that has to count for something. But whether MCP comes to be the definitive standard for AI development remains to be seen; the field is evolving so rapidly that it would be complacent to think that this is it. Other changes will come for sure. 

 

What matters, then, is to explore the opportunities of MCP now, while remaining open to the innovations that are coming down the line in the months and years ahead.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
