
The cognitive demands of AI novelty

Too young to blip?

The Technology Radar is an opinionated snapshot in time of technologies and techniques we’ve used with our clients. It offers insights across many dimensions, but one that’s particularly interesting is that it surfaces new technologies people find useful. No one can know or experience everything happening in software development, but, thanks to the Radar, we get to explore some of the most exciting or important things our colleagues are working with.

 

However, when putting this edition together we encountered a particularly extreme level of newness, in ways we haven’t before. Forget ‘emerging technologies’; some of the things we discussed were barely a few weeks old. This presented a dilemma: yes, they were interesting, but they were typically far too new for us to say anything with confidence about them in a publication like the Technology Radar.

 

While the Radar has always brought us face to face with the novel, there was something unique about what we found in this edition. Yes, we’ve had periods like the JavaScript framework boom of the 2010s, but this is different. What’s more, it speaks to the nature of the AI-accelerated moment we’re working in right now.

AI-driven volume and velocity

 

Out of more than 300 proposed technologies and techniques put forward by Thoughtworkers, a significant proportion had a) only been around for a few weeks and b) very few GitHub contributors. Often, one of the contributors was a coding agent. 

 

It’s hard to see this as anything other than evidence of AI-accelerated proliferation. The fact that we received more proposals for this edition than for any since 2023 indicates we’re entering an era distinct from the one we were in 12 months ago, one in which the barrier to creating software has dropped drastically thanks to the improvements in AI since the end of 2025.

 

A solo developer with an idea and a few free hours can now quickly produce something of seemingly high quality, built to open source standards: well-documented READMEs, SBOMs, clean implementations, licensing, contribution models, CI badges, stars and a history of multiple releases.

 

Semantic diffusion

 

Parallel to the proliferation of software is the proliferation — and diffusion — of language. When new things are developed, they require concepts and descriptions to communicate what the software does and why. Given the ease with which such content can now be created, we are faced not only with lots of similar kinds of software, but with new words for subtly different things. Far from elucidating things, this often has the opposite effect: it adds to confusion.

 

To compound the complexity of the present moment, a lack of shared understanding means underlying practices are not only evolving quickly but also diverging in ways that aren’t easily articulated. Indeed, even when we’re building in good faith, it’s hard for practices to stabilize and mature without a language that has itself stabilized.

Waiting for the ecosystem to settle is itself a choice, and an increasingly expensive one. What’s needed isn’t a higher tolerance for ambiguity, but sharper judgment when operating inside it.
Alessio Ferri, Thoughtworks

Four implications for software developers

 

The implications for software developers are multifaceted, but there are a few key things to keep in mind. 

 

Intensifying the challenge of evaluating software

 

The different ways language is used to name and describe technologies and practices make evaluating those technologies more challenging. Without a clear shared understanding of what’s being discussed, how can we assess what’s in front of us?

 

This is something we encountered when putting the Radar together: to be able to judge and evaluate, we spent considerable time discussing and clarifying what was actually being referred to. There’s an obvious cognitive demand here, on top of the challenges of day-to-day software development.

 

Increasing cognitive debt

 

The second implication is cognitive debt. The rapid pace of change and endless novelty means developers may lack understanding and appropriate mental models of not only the things they’re using but also what their colleagues are actually doing. In short, there’s a risk of a kind of organizational atomization.

 

Distinguishing between disposable and durable code 

 

This cognitive burden is related to a third issue: the importance of distinguishing between disposable and durable code. As Charity Majors explained in an article written last year, disposable code is that which is created for prototypes, scripts and experiments, while durable code is that on which long-running systems are built. The former tolerates a shallow understanding of the code and requires relatively little governance and maintenance because its lifecycle is very short; the latter, though, demands developer understanding, appropriate documentation and, of course, ongoing evolution and maintenance.

 

This isn’t to say we need to avoid using AI. It’s more that we need to recognize which mode we’re working in and be intentional about what kind of software we’re creating and, consequently, how far in the loop we as developers want to remain.

 

Developers who understand this distinction and are able to intentionally move between these two modes will undoubtedly have an edge. Those who don’t will accumulate cognitive debt for the durable software they build, and, as Majors notes, costs will be much higher. 

 

As with technical debt, cognitive debt isn’t inherently bad; we may well reasonably choose to do something with low cognitive burden with the knowledge we’ll need to address the debt in the future. It’s really a question of awareness and intention.  

 

Securing and governing software 

 

Cognitive debt will inevitably weaken security posture, because developers lack the internal understanding and landscape knowledge needed to respond to incidents or perform effective threat modeling.

 

One threat in particular exacerbated by AI-accelerated cognitive debt lies in the software supply chain: the risk of malicious prompt injection. When we call on AI to develop software, it’s extremely easy to lose sight of our systems’ dependencies. There may well be vulnerabilities we’re unaware of — a vulnerability discovered at the end of March 2026 in the AI gateway LiteLLM could be exploited to steal credentials, compromising users’ applications.
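Keeping sight of dependencies is partly a tooling problem. As a minimal sketch (with made-up example data, not a real project's SBOM), this shows how components recorded in a CycloneDX-style SBOM can be enumerated so that dependencies pulled in by AI-generated code stay visible and can be fed into a vulnerability scanner:

```python
import json

# Illustrative, hand-written SBOM fragment in CycloneDX JSON style.
# A real SBOM would be generated by tooling, not written by hand.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "litellm", "version": "1.0.0"},
    {"type": "library", "name": "requests", "version": "2.31.0"}
  ]
}
"""

def list_components(sbom: dict) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every component in the SBOM."""
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]

deps = list_components(json.loads(sbom_json))
for name, version in deps:
    # These pairs can be checked against a vulnerability database.
    print(f"{name}=={version}")
```

The point isn’t the parsing itself, but making the dependency list an explicit artifact a team can review, rather than something that silently accumulates as AI generates code.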

 

At the heart of this is what Simon Willison calls the lethal trifecta: an agent’s access to private data, its exposure to untrusted content and its ability to communicate with external systems. In the context of coding agents, the risks are exacerbated when developers are managing significant cognitive load.
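The trifecta is a conjunction: any one or two of the capabilities may be acceptable, but all three together create the conditions for data exfiltration via prompt injection. A hypothetical sketch of that check (the capability names here are illustrative, not taken from any real agent framework):

```python
# The three elements of Simon Willison's "lethal trifecta".
# Capability names are hypothetical placeholders for illustration.
LETHAL_TRIFECTA = {
    "private_data_access",     # agent can read private data
    "untrusted_content",       # agent processes untrusted input
    "external_communication",  # agent can send data out
}

def has_lethal_trifecta(capabilities: set[str]) -> bool:
    """Return True when an agent holds all three risky capabilities at once."""
    return LETHAL_TRIFECTA <= capabilities

# Two of three: risky, but no exfiltration path on its own.
print(has_lethal_trifecta({"private_data_access", "untrusted_content"}))  # False

# All three present (plus extras): the dangerous combination.
print(has_lethal_trifecta(LETHAL_TRIFECTA | {"code_execution"}))  # True
```

Real agent deployments encode capabilities very differently, but the reviewing question is the same: which of the three does this agent hold, and can we drop at least one?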

Ambiguity requires sharper judgment

 

It’s not clear whether this is the new normal; however, the current cognitive demand isn’t sustainable and will arguably undermine the possible gains AI can deliver. Consequently, we may see an explosion of new tools, evaluation frameworks and trust signals to help us assess a much larger volume of these technologies.

 

Uncertainty won’t resolve before teams need to make decisions; waiting for the ecosystem to settle is itself a choice, and an increasingly expensive one. What’s needed isn’t a higher tolerance for ambiguity, but sharper judgment for operating inside it: knowing which signals matter, distinguishing “too early to assess” from “too early to adopt” and being willing to revisit previous decisions quickly.
