Thanks to generative and agentic AI, it’s never been easier to generate code and build software solutions. True, anyone who remembers the low and no code trend of just a few years ago will know that claims of ease, access and ‘democratization’ rear their head fairly often in the tech industry. This time, though, things really are different.
Maybe that sounds like unwarranted optimism. But there are a number of things about this revolution that suggest the step we’re in the process of taking is unique.
It’s bringing us closer to code, not further away
For me, personally, one of the most interesting use cases is using AI to create design prototypes. Now, I could talk about how it accelerates the process of creating designs and MVP applications. More important, from my perspective at least, is that, despite years away from the coal face of coding, it’s now much easier for me to go ahead and actually build things. Sure, I might not be writing code for production, but, ironically, AI has brought me closer to code than I have been for quite some time.
This might sound strange but it’s absolutely true. It’s also crucial for understanding how AI is penetrating and infusing everything in software engineering. This is reflected in the latest volume of the Thoughtworks Technology Radar, a biannual publication that throws the spotlight on the technologies we’ve found interesting and important when working with clients around the world.
In the publication we not only note the usefulness of AI in prototyping, we also highlight other very different techniques, such as using generative AI to understand legacy codebases and to support forward engineering. While these are a long way from how I’ve been using the technology, it’s not a stretch to see how such techniques are altering the relationship between developers and code in a way far more nuanced than simply greater abstraction and distance. Yes, complacency with AI-generated code remains a challenge and an antipattern, but what comes after it is surely a greater sensitivity to code: how it’s made, what it can and can’t do and where it’s used.
It’s not about a single product; it’s a constellation of innovations
It’s precisely because AI is reaching into diverse domains and use cases that there’s also significant diversity in tools. Compare this to the low and no code landscape, which was deeply proprietary and offered little possibility of portability: today we’re in an environment where innovation can thrive.
Of course, major players like OpenAI, Google and Anthropic have enormous power and control, but what’s most exciting in the field are the innovations and experimentation happening outside or on the edges of what they’re doing. Claude Code, for instance, is an Anthropic product; many of our teams have found success with it (which is great, of course). But what’s been really interesting to see in recent months is the emergence of an ecosystem of tools that can support major vendor products. On the latest Technology Radar, for example, we note Context7 and Serena, two very different tools, both of which can be configured to work with Claude Code (and other AI coding assistants) to improve accuracy and reliability.
Arguably, the center of this ‘constellation of innovations’ is the Model Context Protocol (MCP). Although also introduced by Anthropic, this is an open standard that provides a defined way of connecting LLMs to sources of data and tools. But more than that, it’s being talked about as a driver of a wider ecosystem of tooling, from separate MCP servers built for different purposes (like domain-specific knowledge or code) to clients ranging from familiar chat applications to developer IDEs and agent frameworks.
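To make that a little more concrete, here’s a minimal sketch of what a purpose-built MCP server can look like, written against the FastMCP helper in the official Python SDK. The server name and the coding-conventions tool are hypothetical, invented purely for illustration; the point is simply to show how a narrow, well-defined capability can be exposed to a client like Claude Code over a standard transport.

```python
# A minimal sketch of a purpose-built MCP server, assuming the official
# Python SDK (installed with `pip install mcp`). The tool below is
# hypothetical: it illustrates how a client such as Claude Code could be
# given a small, well-scoped capability over stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("team-knowledge")  # server name shown to connecting clients


@mcp.tool()
def lookup_coding_convention(topic: str) -> str:
    """Return the team's convention for a given topic (hypothetical data)."""
    conventions = {
        "logging": "Use structured logging; never log secrets.",
        "errors": "Prefer explicit error types over generic exceptions.",
    }
    return conventions.get(topic, f"No convention recorded for '{topic}'.")


if __name__ == "__main__":
    # stdio is the transport most desktop clients and coding assistants expect
    mcp.run(transport="stdio")
```

Once registered with a client, the model can call a tool like this the same way it calls any other, which is exactly the kind of small, composable building block the wider ecosystem is starting to fill up with.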
To be clear, this has only really emerged this year. At the start of the year the conversation centered on vibe coding and making use of popular LLM products, an almost lo-fi version of low/no code; in the months since, there’s been a huge shift to something radically different.
Practices aren’t ossifying, they’re evolving
This now has a name: context engineering. Of course, names for new disciplines and practices are a dime a dozen in the tech industry, but this does, I think, point to a cohesive attempt to think through all the ways that software developers can make use of AI — generative and agentic — effectively and consistently.
It’s possible that the diversity and dynamism of the MCP ecosystem is driving change in practices. Far from ossifying the way we work, rendering it rote and boring, it’s forcing practitioners to adapt and change. At Thoughtworks, for instance, we’ve been thinking a lot about how AI should be a team technology, not just an individual enabler. One technique we’ve found particularly useful is curated shared instructions for software teams, where teams effectively maintain prompt libraries to ensure alignment and to accelerate productivity.
There may even be further changes ahead. Spec-driven development is an approach to software engineering whereby detailed specs are captured upfront in order to give LLMs or agents as much information as possible. It remains very new, and there are questions about what it means for agile software development and for the idea that fast feedback and consistent iteration are superior to comprehensive upfront planning. However, what the future actually looks like isn’t really the point: it’s more that the changes we’re seeing highlight an industry taking seriously both the opportunities and the inevitable trade-offs that come with them.
The future belongs to the curious
This is an exciting time to be in software engineering. And while there are fears that AI will take developer jobs, as my former colleague Mike Mason argued six months ago, developers are very much still in the driving seat. From my perspective, AI isn’t taking away opportunities; it’s actually opening them up and helping me get back to doing what I want to do. It’s making work more interesting, not less.
For other developers, it will no doubt reduce tedious manual work and extend the kind of system automation that has been occurring over the last couple of decades. For many, it even represents an interesting meta-challenge unlike anything they may have encountered before: like an enigmatic new colleague, it forces us to ask how we can work effectively together and, even more interestingly, what we might learn from one another.