This year we’ve seen a real-time experiment playing out across the technology industry: one in which AI’s software engineering capabilities have been put to the test against human technologists. And although 2025 may have started with AI looking strong, the shift from ‘vibe coding’ to what’s now being termed ‘context engineering’ highlights that, while the work of human developers is evolving, they remain absolutely critical.
This is captured in the latest volume of the Thoughtworks Technology Radar, a report on the technologies used by our teams on projects with clients. In it, we see the emergence of techniques and tooling designed to help teams better tackle the problem of managing context when working with LLMs and AI agents.
Taken together, these send a clear signal about the direction of travel in software engineering, and in AI more broadly. After years of the industry assuming progress in AI is all about scale and speed, we’re starting to see that what matters most is the ability to handle context effectively.
Vibes, antipatterns and new innovations
It was all the way back in February 2025 that Andrej Karpathy coined the term ‘vibe coding’. Although it might have been meant flippantly, it took the industry by storm. It certainly sparked debate at Thoughtworks; many of us were sceptical. On an April episode of our Technology Podcast, we talked about our concerns and were cautious about how it might evolve.
Unsurprisingly, given the imprecision vibe coding implies, antipatterns have been proliferating. In the latest volume of the Technology Radar we’ve once again noted, for instance, complacency with AI-generated code. Early ventures into vibe coding also exposed a degree of complacency about what AI models can actually handle: users demanded more and prompts grew larger, but model reliability started to falter.
Experimenting with generative AI
This is one of the drivers behind increasing interest in engineering context. We’re well aware of its importance: when working with coding assistants like Claude Code and Augment Code, providing the necessary context, or ‘knowledge priming’, is crucial. It makes outputs more consistent and reliable, which ultimately leads to better software that needs less rework, reducing rewrites and potentially driving productivity.
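To make ‘knowledge priming’ concrete, here’s a loose sketch of the idea in Python. The file names and the final model call are hypothetical assumptions; any assistant that accepts a preamble or system prompt could be primed in a similar way.

```python
from pathlib import Path

# Hypothetical curated context files a team might maintain for its assistant.
CONTEXT_FILES = ["ARCHITECTURE.md", "CONVENTIONS.md", "DOMAIN_GLOSSARY.md"]

def build_primed_prompt(task: str, root: str = ".") -> str:
    """Prepend curated project knowledge to a task prompt."""
    sections = []
    for name in CONTEXT_FILES:
        path = Path(root) / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    context = "\n\n".join(sections)
    return f"Project context:\n{context}\n\nTask:\n{task}"

# Usage: hand the primed prompt to whichever model client you use, e.g.
# response = ask_llm(build_primed_prompt("Refactor the billing module"))
```

The mechanism itself is trivial; the value lies in curating what goes into those files and keeping them current.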
When the context is effectively prepared, we’ve seen good results using generative AI to understand legacy codebases. Indeed, with the appropriate context, it can even help when we don’t have full access to the source code.
It’s important to remember that context isn’t just about more data and more detail. This is one of the lessons we’ve taken from using generative AI for forward engineering. It might sound counterintuitive, but in this scenario we’ve found AI to be more effective when it’s further abstracted from the underlying system — or, in other words, further removed from the specifics of the legacy code. This is because the solution space becomes much wider, allowing us to better leverage the generative and creative capabilities of the AI models we use.
Context is critical in the agentic era
The backdrop to the changes of recent months is the growth of agents and agentic systems, both as products organizations want to develop and as tools they want to leverage. This has forced the industry to reckon properly with context and move away from a purely vibes-based approach.
Indeed, far from simply getting on with tasks they’ve been programmed to do, agents require significant human intervention to ensure they are equipped to respond to complex and dynamic contexts.
There are a number of context-related technologies aimed at tackling this challenge, including agents.md, Context7 and Mem0. But it’s also a question of approach. For instance, we’ve found success with anchoring coding agents to a reference application — essentially providing agents with a contextual ground truth. We’re also experimenting with using teams of coding agents; while this might sound like it increases complexity, it actually removes some of the burden of having to give a single agent all the dense layers of context it needs to do its job successfully.
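As a rough illustration of the team-of-agents idea, the sketch below fans a task out to narrowly scoped agents. The roles, the context strings and the stand-in model call are all assumptions for illustration; the point is that each agent carries only the slice of context relevant to its role, rather than one agent holding everything.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    context: str  # only the slice of ground truth this role needs
    run: Callable[[str, str], str]  # (context, task) -> result

def fake_llm(context: str, task: str) -> str:
    # Stand-in for a real model call; returns a trace for demonstration.
    return f"[{len(context)} chars of context] -> {task}"

# Each agent is anchored to a narrow reference: the reference application
# for the implementer, team conventions for the reviewer, and so on.
team = [
    Agent("implementer", "Reference app: service layout, error handling...", fake_llm),
    Agent("reviewer", "Team conventions: naming, review checklist, lint rules...", fake_llm),
    Agent("tester", "Test harness: fixtures, golden files, CI commands...", fake_llm),
]

def run_team(task: str) -> dict[str, str]:
    """Fan a task out so no single agent needs every layer of context."""
    return {agent.name: agent.run(agent.context, task) for agent in team}

print(run_team("Add pagination to the orders endpoint"))
```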
Towards consensus
Hopefully the space will mature as practices and standards become embedded. It would be remiss not to mention the significance of the Model Context Protocol (MCP), which has emerged as the go-to protocol for connecting LLMs and agentic AI to sources of context. Relatedly, the Agent2Agent (A2A) protocol is leading the way in standardizing how agents interact with one another.
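For a sense of what MCP looks like in practice, here’s a minimal server sketch using the FastMCP class from the official MCP Python SDK. The server name, the URI scheme and the in-memory documents are illustrative assumptions standing in for a real knowledge source such as a wiki or codebase index.

```python
# Minimal MCP server sketch (the SDK is installed with `pip install mcp`).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("team-context")

# Illustrative stand-in for a real context source.
DOCS = {
    "onboarding": "How we set up local environments...",
    "conventions": "Naming, testing and review conventions...",
}

@mcp.resource("docs://{page}")
def get_doc(page: str) -> str:
    """Serve a curated document as context for a connected LLM or agent."""
    return DOCS.get(page, "No such page.")

@mcp.tool()
def search_docs(query: str) -> list[str]:
    """Return the names of documents mentioning the query string."""
    return [name for name, text in DOCS.items() if query.lower() in text.lower()]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, for local clients
```

Once a client such as a coding assistant connects, the model can pull these resources and tools on demand, rather than having everything stuffed into a single prompt.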
It remains to be seen whether these are the standards that win out. But in any case, it’s important to consider the day-to-day practices that allow us, as software engineers and technologists, to collaborate effectively even when dealing with highly complex and dynamic systems. Sure, AI needs context, but so do we. Techniques like curated shared instructions for software teams may not sound like the hottest innovation on the planet, but they can be remarkably powerful in helping teams work together.
There’s perhaps also a conversation to be had about what these changes mean for agile software development. Spec-driven development is one idea that appears to have some traction, but there are still questions about how we remain adaptable and flexible while also building robust contextual foundations and ground truths for AI systems.
Software engineers can solve the context challenge
Clearly, 2025 has been a huge year in the evolution of software engineering as a practice. There’s a lot the industry needs to monitor closely, but it’s also an exciting time. And while fears about AI-driven job automation may persist, the fact that the conversation has moved from questions of speed and scale to questions of context puts software engineers right at the heart of things.
Once again it will be down to them to experiment, collaborate and learn — the future depends on it.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.