Volume 33 | November 2025

Technology Radar

An opinionated guide to today's technology landscape

Thoughtworks Technology Radar is a twice-yearly snapshot of tools, techniques, platforms, languages and frameworks. This knowledge-sharing tool is based on our global teams’ experience and highlights things you may want to explore on your projects. 

 


Each insight we share is represented by a blip. Blips may be new to the latest Radar volume, or they may move rings as our recommendations change.

 

The rings are:

  • Adopt. Blips that we think you should seriously consider using.

  • Trial. Things we think are ready for use, but not as completely proven as those in the Adopt ring. 

  • Assess. Things to look at closely, but not necessarily trial yet — unless you think they would be a particularly good fit for you.

  • Hold. Proceed with caution.

 

Explore the interactive version by quadrant, or download the PDF to read the Radar in full. If you want to learn more about the Radar, how to use it or how it’s built, check out the FAQ.

 


 

 

 



Themes for this volume

 

For each volume of the Technology Radar, we look for patterns emerging in the blips that we discuss. Those patterns form the basis of our themes. 

 

Infrastructure Orchestration Arrives for AI

AI workloads are driving organizations to orchestrate large fleets of GPUs for both training and inference. Teams increasingly work with model sizes that exceed a single accelerator’s capacity (even with 80 GB of HBM), pushing them toward distributed training and multi‑GPU inference. As a result, platform teams are building complex, multi‑stage pipelines and continuously tuning for throughput and latency. Discussions in this space included Nvidia DCGM Exporter for fleet telemetry and topology-aware scheduling to place jobs where interconnect bandwidth is highest.

Before this surge in GPU demand, Kubernetes had already solidified itself as the de facto container orchestrator — and it remains a strong substrate for managing AI workloads at scale, even as we also explored alternatives like micro and Uncloud. We’re tracking emerging GPU‑aware scheduling patterns — such as queueing and quota via Kueue, coupled with topology‑aware placement and gang scheduling — to co‑locate multi‑GPU jobs on fast GPU‑to‑GPU links (e.g., NVLink/NVSwitch) and within contiguous data center “islands” (e.g., racks or pods with RDMA). Recent multi-GPU and NUMA-aware API improvements in Kubernetes further strengthen these capabilities, improving cross‑device bandwidth, reducing tail latency and increasing effective utilization.
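As a rough sketch of what queueing, quota and gang admission can look like in practice, here is how a multi-GPU training Job might be routed through Kueue. All names (the Job, image and queue) are illustrative assumptions; the GPU count per pod would depend on the actual accelerator and interconnect topology.

```yaml
# Hypothetical batch Job admitted via Kueue; names are illustrative.
apiVersion: batch/v1
kind: Job
metadata:
  name: llm-finetune
  labels:
    kueue.x-k8s.io/queue-name: gpu-queue  # routes the Job through a Kueue LocalQueue
spec:
  parallelism: 4      # four workers that should be admitted together (gang-style)
  completions: 4
  suspend: true       # Kueue unsuspends the Job once quota is available
  template:
    spec:
      containers:
        - name: trainer
          image: example.com/trainer:latest  # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 8  # e.g., one NVLink-connected island per pod
      restartPolicy: Never
```

The Job starts suspended; Kueue admits it only when the target queue has capacity, which is what keeps partial, deadlock-prone placements of multi-GPU workloads off the cluster.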

We expect rapid innovation in AI infrastructure as platform teams race to support the growing demand for AI coding workflows and the rise of agents elevated by MCP. In our view, GPU‑aware orchestration is becoming table stakes — topology is now a first‑class scheduling concern.

The Rise of Agents Elevated by MCP

The dual rise of MCP and agents — and the expanding ecosystem of protocols and tools built around them — dominates this edition of the Radar. Virtually every major vendor is adding MCP awareness to their tools, which makes sense: In many ways, MCP has become the ultimate integration protocol for powering agents and enabling them to work efficiently and semi-autonomously. These capabilities are central to making agentic workflows productive.

We observed continued innovation in agentic workflows, where context engineering has proven critical to optimizing both behavior and resource consumption. New protocols such as A2A and AG-UI are reducing the boilerplate required to build and scale user-facing multi-agent applications. In the software development space, we compared different ways of supplying context to coding agents — from AGENTS.md files to patterns like anchoring coding agents to a reference application. As expected in the AI ecosystem, each Radar brings a new burst of innovation — last time it was RAG; this time, it’s agentic workflows and the growing constellation of tools, techniques and platforms that support them, along with a few emerging AI antipatterns worth watching.

AI Coding Workflows

It’s evident that AI is transforming how we build and maintain software, and it continues to dominate our recent conversations. As AI becomes strategically embedded across the software value chain — from using AI to understand legacy codebases to GenAI for forward engineering — we’re learning how to better supply knowledge to coding agents. Teams are experimenting with new practices such as defining custom instructions via AGENTS.md files and integrating with MCP servers like Context7 to fetch up-to-date dependency documentation.
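To make the practice concrete, an AGENTS.md file is simply a Markdown document at the repository root that coding agents read for project-specific instructions. A minimal hypothetical example might look like this (the commands and conventions are assumptions, not a prescribed format):

```markdown
# AGENTS.md

## Build and test
- Install dependencies with `npm ci`; run `npm test` before proposing any change.

## Conventions
- Use TypeScript strict mode; avoid `any`.
- Follow the existing module layout under `src/`.

## Boundaries
- Never edit files under `generated/`; regenerate them with `npm run codegen`.
```

Because the file lives alongside the code, the whole team shares one set of instructions rather than each developer maintaining private prompts.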

There’s also a growing realization that AI must amplify the entire team, not just individual contributors. Techniques like curated shared instructions and custom commands are emerging to ensure equitable knowledge diffusion. The tool landscape is vibrant: Designers explore UX Pilot and AI Design Reviewer, while developers prototype rapidly with v0 and Bolt for self-serve UI prototyping.

We also continue to debate spec-driven development — its scope, granularity and potential to serve as a single source of truth for incremental delivery. Yet amid the excitement, complacency with AI-generated code remains a shared concern, reminding us that while AI can accelerate engineering, human judgment is still indispensable.

 

Emerging AI Antipatterns

The accelerating adoption of AI across industries has surfaced both effective practices and emergent antipatterns. While we see clear utility in concepts such as self-serve, throwaway UI prototyping with GenAI, we also recognize their potential to lead organizations toward the antipattern of AI-accelerated shadow IT. Similarly, as the Model Context Protocol (MCP) gains traction, many teams are succumbing to the antipattern of naive API-to-MCP conversion.

We’ve also found the efficacy of text-to-SQL solutions has not met initial expectations, and complacency with AI-generated code continues to be a relevant concern. Even within emerging practices such as spec-driven development, we’ve noted the risk of reverting to traditional software-engineering antipatterns — most notably, a bias toward heavy up-front specification and big-bang releases. Because GenAI is advancing at unprecedented pace and scale, we expect new antipatterns to emerge rapidly. Teams should stay vigilant for patterns that appear effective at first but degrade over time and slow feedback, undermine adaptability or obscure accountability.

Contributors

 

The Technology Radar is prepared by the Thoughtworks Technology Advisory Board, composed of:

 

Rachel Laycock (CTO) • Martin Fowler (Chief Scientist) • Alessio Ferri • Bharani Subramaniam • Birgitta Böckeler • Bryan Oliver • Camilla Falconi Crispim • Chris Chakrit Riddhagni • Effy Elden • James Lewis • Kief Morris • Ken Mugrage • Maya Ormaza • Nati Rivera • Neal Ford • Ni Wang • Nimisha Asthagiri • Pawan Shah • Selvakumar Natesan • Shangqi Liu • Vanya Seth • Will Amaral

Subscribe. Stay informed.

Sign up to receive emails about future Technology Radar releases and bi-monthly tech insights from Thoughtworks.


Visit our archive to read previous volumes