
Macro trends in the tech industry | April 2026

The last few editions of the Technology Radar have captured relentless AI-accelerated change in the industry. However, while recent volumes have reflected the astounding energy of the field, from the proliferation of new tools to the almost monthly emergence of new terms and concepts, volume 34 is different: it highlights a new level of maturity, a move away from endless experimentation toward repeatability, stability and something cognitively manageable.

 

However, this isn’t to say things are stabilizing: the macro trends in the tech industry reflected in volume 34 all speak to unresolved tensions, between reliability and AI’s unpredictability, between AI acceleration and developer experience, and between past and future practices.

Searching for consistency and reliability

 

Consistency and reliability have always been significant concerns in AI. In the early part of 2026, however, they appear to have shifted from being one of many issues to among the most critical, perhaps driven by increasing adoption and the step change in capabilities we witnessed at the end of 2025. The best evidence of this is the emergence of the term ‘harness engineering’ in recent months.

 

Harness engineering

 

Broadly speaking, harness engineering refers to the infrastructure, constraints and feedback loops that wrap around AI agents to improve their reliability. Part of this is an extension or evolution of spec-driven development (SDD); one of the ways in which we can harness agents is by using SDD frameworks such as OpenSpec and GitHub SpecKit to provide guardrails and structured workflows. 

 

However, it also goes beyond this to consider the ways in which agents ‘learn’ and self-correct. In this edition we featured the ‘feedback flywheel’, which adds a further step, aimed at iteratively improving the coding agent, to the spec → plan → implement flow typical of SDD. A number of techniques are worth flagging here, including feedback sensors for coding agents, which reduce the manual review burden and give agentic systems the capacity to improve themselves.
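To make the idea concrete, here's a minimal sketch of how a feedback step might wrap an SDD-style loop. All the function names and the single lint-style sensor are hypothetical stand-ins; a real harness would run actual lint, test and build checks and call a coding agent rather than these stubs.

```python
# Hypothetical sketch of a 'feedback flywheel' around an SDD-style loop:
# spec -> plan -> implement -> feedback, where automated feedback sensors
# score the result and feed findings back into the next iteration.

from dataclasses import dataclass, field

@dataclass
class Attempt:
    code: str
    findings: list = field(default_factory=list)

def run_sensors(code: str) -> list[str]:
    """Stand-in feedback sensor: the kind of automated check a harness
    might run instead of relying on manual review."""
    findings = []
    if "TODO" in code:
        findings.append("unfinished TODO left in implementation")
    return findings

def implement(spec: str, prior_findings: list[str]) -> str:
    """Stand-in for the coding agent; a real system would call an LLM
    with the spec plus findings from previous iterations."""
    if prior_findings:
        return f"def handler():\n    return '{spec}'"  # revised draft
    return "def handler():\n    return None  # TODO"   # first, flawed draft

def flywheel(spec: str, max_iters: int = 3) -> Attempt:
    findings: list[str] = []
    for _ in range(max_iters):
        attempt = Attempt(code=implement(spec, findings))
        attempt.findings = run_sensors(attempt.code)
        if not attempt.findings:
            return attempt  # sensors satisfied; stop iterating
        findings = attempt.findings  # feed findings into the next pass
    return attempt

result = flywheel("process order")
```

The point of the extra step is that the loop terminates on sensor verdicts rather than on a human's patience: each pass either satisfies the sensors or produces findings that shape the next attempt.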

 

Sandboxing

 

This apparent desire for increased reliability arguably suggests a growing awareness of the many risks associated with AI and agent assistants in software engineering. However, while we welcome the expansion of risk-aware practices like sandboxing coding agents, demonstrated in blips on this Radar including Dev Containers and Sprites, it would be wrong to think there’s been an industry about-face. There’s certainly lots of high-risk experimentation happening, including agent coding swarm projects like Steve Yegge’s Gastown. While these are intriguing and may offer insight into the future of software engineering, as we note in this volume they need to be approached with caution.

 

It’s also worth noting the importance of agent durability in the context of reliability. We’ve noticed that ignoring agent durability is something of an antipattern: teams successfully develop agent workflows only to find they fail when deployed to production in complex distributed systems. Bringing durable computing approaches and tools such as Golem and Temporal to bear on these use cases can help minimize the risk of execution failures.
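The core idea behind durable execution can be sketched without any particular framework. The snippet below is not Temporal's or Golem's API; it's a toy illustration, with invented step names and a JSON file as the state store, of the underlying pattern: checkpoint each completed step so a restarted workflow resumes where it left off instead of losing or repeating earlier work.

```python
# Illustrative sketch of the durable-execution idea: persist each completed
# step so a crashed-and-restarted workflow replays past steps as no-ops and
# resumes from the first unfinished one. Step names and the JSON-file store
# are hypothetical; real tools use durable event histories and servers.

import json
import os

CHECKPOINT = "workflow_state.json"

def load_state() -> dict:
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"done": []}

def save_state(state: dict) -> None:
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

def run_step(name: str, state: dict) -> None:
    if name in state["done"]:
        return  # completed in a previous run; skip on replay
    print(f"running {name}")
    state["done"].append(name)
    save_state(state)  # checkpoint immediately after the step succeeds

def agent_workflow() -> dict:
    state = load_state()
    for step in ["fetch_context", "call_model", "apply_changes"]:
        run_step(step, state)
    return state

state = agent_workflow()
```

If the process dies after `call_model`, a rerun skips the first two steps and executes only `apply_changes`, which is exactly the behavior that's easy to forget about until an agent workflow meets a flaky production environment.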

Rethinking developer experience and productivity

 

Many of the practices that grapple with AI reliability are closely related to the question of the role of the developer in the software development process: where should the humans have control? What needs to be reviewed? What needs to be iterated manually and what can be automated? 

 

One of the things that’s becoming clear is that ‘agentic’ coding poses challenges for developer experience. This is something we’ve been thinking about a lot at Thoughtworks; indeed, even before we began putting this volume of the Radar together, the potential for AI workflows to degrade developer experiences, leading to a divergence between productivity and personal flow and satisfaction, was a significant topic of discussion at Martin Fowler’s Future of Software Development Retreat.

 

Measuring the right things

 

Undoubtedly some of the challenges are cultural, informed by misunderstandings of what AI can and cannot do. For instance, despite long-running discussion on this topic, we thought it was still important to caution against using coding throughput as a measure of productivity. As an alternative we suggest measuring collaboration quality with coding agents using metrics such as iteration cycles per task, post-merge rework and failed builds. This shifts the focus in a way that ensures developers are focusing on the right things and should ultimately lead to higher quality software being delivered.
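As a rough illustration of what measuring those alternatives might look like, here's a small sketch that derives the three suggested metrics from a team's event log. The event schema, event names and sample data are entirely invented for the example.

```python
# Hypothetical sketch: computing collaboration-quality metrics (iteration
# cycles per task, post-merge rework, failed builds) from a team event log,
# rather than counting raw coding throughput. The schema is invented.

from collections import Counter

events = [
    {"task": "T1", "type": "agent_iteration"},
    {"task": "T1", "type": "agent_iteration"},
    {"task": "T1", "type": "merge"},
    {"task": "T1", "type": "post_merge_rework"},
    {"task": "T2", "type": "agent_iteration"},
    {"task": "T2", "type": "build_failed"},
    {"task": "T2", "type": "merge"},
]

def collaboration_metrics(events: list[dict]) -> dict:
    counts = Counter(e["type"] for e in events)
    tasks = {e["task"] for e in events}
    return {
        # average prompt/response cycles needed to finish a task
        "iterations_per_task": counts["agent_iteration"] / len(tasks),
        # merges that later needed human rework
        "post_merge_rework": counts["post_merge_rework"],
        # CI failures, a proxy for unverified agent output
        "failed_builds": counts["build_failed"],
    }

metrics = collaboration_metrics(events)
```

The design choice worth noticing is that every metric here is about the quality of the human-agent loop, not the volume of code produced.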

 

MCP scepticism and the return to the command line

 

One of the major shifts in the experience of developing with AI is the shift away from MCP. While it was hailed as a game-changer 12 months ago, in many instances it isn’t necessary, which is why we’ve cautioned against MCP by default. This isn’t to say it shouldn’t ever be used but instead that there are often more appropriate approaches that avoid what Justin Poehnelt calls ‘the abstraction tax’.


Interestingly, it appears reservations around MCP have led to a return to the command line. One of the reasons for this is Agent Skills, an open standard that packages instructions, executable scripts and other associated resources to modularize and progressively disclose context to coding agents. This means that rather than interfacing with an MCP server, a given skill can be invoked from the command line. Alongside Agent Skills is the Claude Code plugin marketplace, which we’ve found significantly improves developer experience and collaboration; workflows and other resources can be easily synced with the CLI.
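To give a sense of how lightweight this is compared to standing up an MCP server: a skill is just a directory containing a SKILL.md whose frontmatter names and describes it, optionally alongside scripts and reference files the agent loads only when needed. The skill below is entirely hypothetical.

```markdown
---
name: release-notes
description: Draft release notes from merged pull requests. Use when asked
  to summarize changes for a release.
---

# Release notes

1. Run `scripts/collect_prs.sh` to gather PRs merged since the last tag.
2. Group the changes by area and draft notes in the CHANGELOG format
   described in `reference/changelog-style.md`.
```

The progressive disclosure comes from the layout: only the name and description are always in context, and the body, scripts and reference files are read on demand.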

The importance of older and established practices

 

The return to the command line points to another trend we noticed during Radar discussions: the persistence, or re-emergence, of older, established technologies and practices.

 

To a certain extent this is a corrective to the period of novelty and experimentation we’ve been living through; when things are constantly changing, robust and even familiar foundations take on even greater importance. Measuring the right things is something we’ve already discussed; we wanted to emphasize how critical this is by placing the DORA metrics on this edition of the Radar. Yes, they’re almost as old as the Technology Radar itself (introduced more than a decade ago), but they have a vital role to play in helping teams focus on what’s most critical even as practices and technologies continually evolve. We even noted in the write-up for that particular blip that using the DORA metrics effectively doesn’t require sophisticated and complex tracking and dashboards; if anything these can be a distraction, and, as we write, “simple mechanisms, such as check-ins during retrospectives” can be much more powerful.

 

Another established technique featured in this volume of the Radar is zero trust architecture. It first appeared on the Radar in May 2020 and moved to Adopt in October 2021; half a decade later, we wanted to bring attention to it again. “Principles such as ‘never trust, always verify,’ along with identity-based security and least-privilege access, should be treated as foundational for any agent deployment,” we write.
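Applied to agents, those principles reduce to something very simple: no tool call is trusted by default; every call is checked against what that specific agent identity is explicitly allowed to do. The sketch below illustrates the idea with invented agent identities and tool names; a real deployment would enforce this at an identity-aware gateway, not in application code.

```python
# Minimal sketch of least-privilege, identity-based access for agents:
# deny by default, allow only what each identity is explicitly granted.
# Identities and tool names are hypothetical.

ALLOWED_TOOLS: dict[str, set[str]] = {
    "docs-agent": {"read_file", "search"},
    "deploy-agent": {"read_file", "run_pipeline"},
}

def authorize(agent_id: str, tool: str) -> bool:
    """Never trust, always verify: an unknown identity or an
    unlisted tool is denied, with no default grants."""
    return tool in ALLOWED_TOOLS.get(agent_id, set())

can_search = authorize("docs-agent", "search")
can_deploy = authorize("docs-agent", "run_pipeline")   # outside its grant
unknown_ok = authorize("unknown-agent", "read_file")   # unknown identity
```

The important property is the default: an agent that was never granted a tool, or was never registered at all, gets nothing.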

 

We also talked a lot about testing, this time discussing mutation testing and building on last edition’s mention of fuzz testing by featuring WuppieFuzz. These techniques certainly aren’t new, but recent developments in AI are both lowering the barrier to entry and making it more important to test for a wider range of unpredictable behaviors.

Cognitive debt

 

What ties this all together is the issue of cognitive debt. Yes, AI can accelerate many parts of the software development process — far beyond just writing code — but in doing so it does two things: first, it creates greater distance between developers and the software they’re responsible for; second, it increases the range of tasks and problems they may be working on.

 

We caution against codebase cognitive debt on this edition of the Radar, but it’s also important to think beyond day-to-day work to recognize how we may individually incur cognitive debt as professionals. If you offload everything to a coding assistant, what are you avoiding learning? And what might the impact be in the future? Of course, this is always a question of trade-offs; an important part of a developer’s skillset is knowing what to pay attention to. However, given the speed of AI-accelerated change, consistent self-reflection is important as a reminder of our agency.

 

For all the novelty in the industry at the moment, much of the most interesting work in this area is exploring exactly how we can manage cognitive debt, whether that’s at an individual, project or organization level. We’re excited to continue monitoring this work in future volumes and contributing to ideas and practices that help technologists everywhere.
