
Macro trends in the tech industry | November 2025

The latest edition of the Thoughtworks Technology Radar is out, with 114 blips curated from our community proposals. This article expands on the macro trends that informed our discussions during the Radar meeting, taking a conversational tour through our observations of the broader technology landscape and offering additional insights beyond what made it into the current edition.

AI workloads demand a shift in infrastructure orchestration


Whether the much-debated “AI bubble” is popping or not, one thing is certain: AI isn’t just a buzzword anymore; it’s a huge driver for the tech industry at large. Infrastructure demand in particular has scaled significantly: training large language models or running inference for AI features often requires large fleets of GPUs, and companies find they must manage these at scales previously seen only in supercomputing.

It’s now common for teams to grapple with models that can’t even fit on a single GPU, forcing them to split work across multiple accelerators. Platform engineering teams are spinning up complex multi-stage pipelines and continuously tuning for throughput and latency to keep these AI workloads humming. A few tools and techniques can help in this endeavor, such as NVIDIA DCGM Exporter, an open-source tool for monitoring distributed GPUs, and topology-aware scheduling.

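To make that splitting concrete, here’s a minimal sketch of sharding a model that’s too big for one device across every visible GPU, using Hugging Face Transformers’ device_map="auto"; the checkpoint name is a placeholder, not a real model:

```python
# Minimal sketch: sharding a model too large for one GPU across all
# visible accelerators. Assumes torch, transformers and accelerate
# are installed; "some-org/large-model" is a placeholder checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-org/large-model"  # placeholder, not a real model
tokenizer = AutoTokenizer.from_pretrained(model_name)

# device_map="auto" lets accelerate split the layers across every
# visible GPU (spilling to CPU RAM if necessary), so the model no
# longer has to fit on a single accelerator.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
)

inputs = tokenizer("GPU clusters are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```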

One immediate challenge is cost. GPU cloud instances are expensive and often billed whether they’re busy or idle, so maximizing the utilization of those clusters is paramount. In response, organizations are adopting smarter orchestration strategies to squeeze every bit of performance and value out of their hardware. Kubernetes, which was born in the world of stateless web apps, has proven adaptable to these problems, especially when coupled with add-ons such as Kueue for job queueing. Recent releases of Kubernetes have also introduced enhancements like Dynamic Resource Allocation (DRA) for GPUs and better awareness of hardware topology.

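As a rough illustration of that kind of orchestration, the sketch below uses the official Kubernetes Python client to submit a GPU job to a Kueue queue. The queue name, image and namespace are placeholders, and it assumes a cluster with Kueue and the NVIDIA device plugin installed:

```python
# Minimal sketch: submitting a GPU training job to a Kueue queue via
# the official Kubernetes Python client. Assumes Kueue and the NVIDIA
# device plugin are installed; names below are placeholders.
from kubernetes import client, config

config.load_kube_config()

job = client.V1Job(
    metadata=client.V1ObjectMeta(
        name="train-shard-0",
        # Assumes a Kueue LocalQueue named "ml-team" exists.
        labels={"kueue.x-k8s.io/queue-name": "ml-team"},
    ),
    spec=client.V1JobSpec(
        suspend=True,  # Kueue admits the job (unsuspends it) once quota frees up
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="trainer",
                        image="example.com/trainer:latest",  # placeholder image
                        resources=client.V1ResourceRequirements(
                            limits={"nvidia.com/gpu": "4"}  # request four GPUs
                        ),
                    )
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```
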
The rise of context engineering, MCP and AI workflows

If, on the one hand, infrastructure engineers are seeking ways to optimize hardware utilization, on the other, developers are still searching for the best ways to integrate AI into their workflows effectively and efficiently. One notable trend is the shift from ad-hoc prompting to the more rigorous discipline of context engineering. In essence, this refers to the practice of carefully preparing and feeding structured background information to an AI model so that it can perform a task reliably. It goes far beyond phrasing a single clever prompt, involving a set of techniques and carefully planned steps to improve the reliability and accuracy of the model. In practice, context engineering seeks to mitigate the non-deterministic behavior of LLMs by feeding them the essential information they need to be precise.

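The sketch below shows what this can look like in code, assuming the OpenAI Python SDK and a placeholder retrieval step: instead of one clever prompt, the model receives stable instructions plus only the background documents relevant to the task at hand.

```python
# Minimal sketch of context engineering: assemble structured, curated
# context before calling the model, rather than one ad-hoc prompt.
# The retrieval function, its contents and the model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retrieve_relevant_docs(question: str) -> list[str]:
    """Placeholder: in practice, query a search index or vector store."""
    return ["Refunds are processed within 14 days of a return request."]

def answer(question: str) -> str:
    docs = retrieve_relevant_docs(question)
    messages = [
        # Stable instructions: role, rules and output expectations.
        {"role": "system", "content": (
            "You are a support assistant. Answer ONLY from the provided "
            "context. If the context is insufficient, say so."
        )},
        # Curated background: only the documents relevant to this task.
        {"role": "user", "content": (
            "Context:\n" + "\n".join(docs) + "\n\nQuestion: " + question
        )},
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content

print(answer("How long do refunds take?"))
```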

The Model Context Protocol (MCP) is the best example of a standard evolving as an effort to “control” context effectively. When we last mentioned MCP in volume 32, it was an interesting emerging standard; it’s now becoming ubiquitous. Originally published as an open standard by Anthropic in late 2024, it defines how an AI client (like an agent or coding assistant) can query an MCP server for information or actions. The server might sit in front of a company wiki, a database or an external SaaS API. The client asks the server for what it needs in a standardized way, and the server fetches data or executes actions on its behalf. This decoupling is powerful: it means AI developers can integrate new tools or data sources much faster, and it’s vendor-neutral by design. The industry has jumped on MCP remarkably fast. In less than a year, we saw thousands of MCP servers spin up, providing bridges to everything from GitHub to SAP systems. It’s safe to assume a major driver of MCP adoption is AI agents, which, beyond MCP, have amassed a plethora of protocols to support their workflows, such as Agent-to-Agent (A2A) and AG-UI.

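For a feel of how little ceremony this involves, here’s a minimal MCP server sketch using the official Python SDK’s FastMCP helper; the wiki lookup is a stub standing in for a real search API:

```python
# Minimal sketch of an MCP server exposing one tool, using the
# official MCP Python SDK's FastMCP helper. The wiki lookup is a stub.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("company-wiki")

@mcp.tool()
def search_wiki(query: str) -> str:
    """Search the internal wiki and return the best-matching snippet."""
    # Placeholder: a real server would call the wiki's search API here.
    return f"No results for {query!r} (stub)."

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio, so any MCP client can connect
```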

AI agents are at once the most exciting and the most hype-fueled development in AI. The shift from chatbots to agentic workflows represents a big step for organizations, which have now realized the limitations of static LLMs. A model can’t see past its training data cutoff, requiring careful use of context and prompting techniques to achieve a goal. By contrast, an agent can pull in real-time information and react to changes. Our concern about complacency with AI-generated code remains, and new practices are emerging to mitigate the potential for an agent to go haywire. These range from simple file conventions, such as having an AGENTS.md file, to more complex setups, such as anchoring coding agents to a reference application and spec-driven development.

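As an illustration of the simpler end of that spectrum, an AGENTS.md is just plain instructions checked into the repository for coding agents to read before acting; everything below is a hypothetical example for an imaginary project:

```markdown
# AGENTS.md (illustrative example; commands are placeholders)

## Build and test
- Install dependencies with `make setup`.
- Run `make test` before proposing any change; all tests must pass.

## Conventions
- Follow the existing module layout under `src/`; do not add new top-level folders.
- Never commit secrets or edit files under `vendor/`.

## Boundaries
- Ask before running destructive commands (migrations, deletions).
```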

These new techniques don’t just apply to agents, by the way. Full-blown AI workflows have risen to prominence, serving entire teams, whether functional or cross-functional. Most code editors now provide options for sharing instructions so they can be reused by multiple individuals. This essentially lets teams share best practices and utilities as a one-click extension for the AI, helping standardize AI usage: ensuring every code review runs through the same checklist, or giving every developer a quick command to fetch updated library docs, for instance.

Steering the boat away from the iceberg

As AI techniques spread through the industry, we’re starting to observe certain antipatterns emerge. Bad practices aren’t new in software development, but AI adoption is enabling new kinds of problems. While most of them can be mitigated by a strong grounding in fundamental practices, it’s valuable to recognize antipatterns early so we can course-correct.

We previously mentioned AI-accelerated shadow IT, which remains a concern. As the name implies, this is analogous to classic shadow IT, but turbocharged by AI’s ability to connect systems in unconventional ways. For example, some no-code automation platforms now let users integrate directly with OpenAI or Anthropic APIs, making it tempting to use AI as “duct tape” to join systems together in ways the IT department never sanctioned. While this can yield quick wins, it poses maintainability and security risks (e.g., data leaks or unmonitored processes).

Other examples include overly optimistic use of current AI tech. Text-to-SQL solutions, for instance, have not met initial expectations in practice. Simply trusting an LLM to reliably generate complex SQL queries can backfire: generated queries often need human validation, and the models struggle with edge cases. This doesn’t mean the idea is dead, but teams have learned to keep a human in the loop or use such tools in limited scopes.

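Here’s a minimal sketch of that human-in-the-loop pattern, with a stub standing in for the LLM’s text-to-SQL step and an in-memory SQLite database for illustration:

```python
# Minimal sketch: human-in-the-loop guard around text-to-SQL.
# generate_sql is a stub for an LLM call; schema and data are illustrative.
import sqlite3

def generate_sql(question: str) -> str:
    """Placeholder for an LLM-backed text-to-SQL call."""
    return "SELECT name, total FROM orders WHERE total > 100;"

def guarded_query(conn: sqlite3.Connection, question: str) -> list:
    sql = generate_sql(question)

    # Limited scope: allow only a single read-only statement.
    if not sql.lstrip().lower().startswith("select") or ";" in sql.rstrip(";"):
        raise ValueError(f"Refusing non-SELECT or multi-statement SQL: {sql}")

    # Human in the loop: show the query and require explicit approval.
    print("Proposed SQL:\n", sql)
    if input("Run this query? [y/N] ").strip().lower() != "y":
        raise RuntimeError("Query rejected by reviewer.")

    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (name TEXT, total REAL)")
conn.execute("INSERT INTO orders VALUES ('Ada', 150.0), ('Grace', 80.0)")
print(guarded_query(conn, "Which orders are over $100?"))
```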

By staying aware of emerging antipatterns, technology leaders can avoid the pitfalls that come with the hype. The overarching message is one of balance — enthusiastically leveraging AI’s accelerating capabilities, but coupling that with thoughtful engineering practices, open ecosystems and a healthy dose of human judgement. This ensures that AI truly elevates the team’s productivity and creativity, rather than leading it into a maze of quick fixes and hidden risks.


Facing challenges with a flexible and informed strategy

The tech industry is no stranger to revolutions. Cloud computing transformed the way we work and live, and multiple waves of change have followed since. With each new wave, we discover new pitfalls and realize that our processes and mindsets must evolve alongside the tech. It’s no different with AI. Beyond all the hype and doomposting, AI is just another part of our toolkit, albeit one that makes a lot of noise. The companies that successfully ride this wave will be those that invest in core fundamentals while taking a pragmatic view. By understanding the significance and implications of these trends, we can make better decisions today, rather than just waiting to react to the future.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.

Explore the latest Technology Radar