
Viral AI call to action is more nuanced than you think

What Matt Shumer got right and what he missed

Earlier this week (Feb. 9), Matt Shumer, CEO at HyperWrite and a number of other AI startups, published an article about AI's rapid advancement and its implications for the workforce. It went viral. 

 

Shumer’s message was clear: this is real, it's happening now and we need to stop pretending otherwise. He’s right, of course. However, there are important nuances around what an AI-enabled future will look like, where the technology will have the biggest impact and what it means for humans.

 

A new category of work

 

AI capabilities will undoubtedly continue to increase. But that may not mean we’re in the midst of a process whereby human labor becomes increasingly marginalized. 

 

A more hopeful assessment is that we’re going to see a new category of human work: one where the onus is on things like verification, evaluation and orchestration. To put it another way, this will be work that requires humans to remain in the loop, ensuring AI stays in its lane and redirecting it when it misunderstands context that only humans can fully grasp.

 

Shumer is correct when he advises readers to “push AI into their work”. To treat it like a search engine is to misunderstand its capabilities. But to push it into your work effectively, you need a thorough grasp of what you’re trying to achieve as well as of your constraints. That means reflecting on your own practices, goals and context is crucial to getting the most out of AI. Such reflection should inform not just what you use AI for but how you use it.

Greenfield vs. brownfield

 

Most of the examples presented in the article are greenfield projects — building new applications from scratch.

 

However, this misses a subtler, yet equally impressive, side of AI’s potential: brownfield work. In other words, tackling complex existing systems: legacy mainframes that have been patched and updated for decades, and codebases where nobody remembers why something works the way it does, or where we have lost the code completely. These are often systems governed by intricate regulations, compliance requirements and institutional knowledge that exists only in the heads of people who've been there for twenty years, or who no longer work there at all.

 

Experimentation is certainly happening in this area. At Thoughtworks we’ve seen significant success using AI to better understand legacy codebases, which, in turn, helps our clients accelerate the process of updating and evolving them. This may not be a viral use case; however, when you consider how extensive and expensive legacy systems are (70% of systems run on mainframes and 90% of initial mainframe rewrites fail), it has the potential to make a huge impact.

 

The point is that while we know AI can write new code, it’s just as useful for helping us understand existing code. It might struggle to refactor a mission-critical system that processes billions of dollars in transactions every day, but used as a code analysis tool rather than a code generation one, it’s still exceptionally powerful. Of course, AI may well be able to tackle highly complex refactoring tasks in the future; we don’t need to wait for that, though, because there are solid use cases out there already.
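
To make this concrete, below is a minimal sketch of what “AI as a code analysis tool” might look like: asking a model to explain an unfamiliar legacy fragment rather than to write new code. The openai client, the model name and the COBOL snippet are illustrative assumptions for the sketch, not a description of the tooling we use at Thoughtworks.

```python
# Minimal sketch: using an LLM to explain legacy code rather than generate new code.
# The model name, prompt and snippet below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

legacy_snippet = """
PERFORM VARYING WS-IDX FROM 1 BY 1 UNTIL WS-IDX > WS-MAX
    IF ACCT-STATUS(WS-IDX) = 'F'
        ADD ACCT-BAL(WS-IDX) TO WS-FROZEN-TOTAL
    END-IF
END-PERFORM
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are helping a team understand a legacy codebase. "
                "Explain what the code does, note any assumptions it makes "
                "and flag anything that looks risky. Do not rewrite it."
            ),
        },
        {"role": "user", "content": f"Explain this COBOL fragment:\n{legacy_snippet}"},
    ],
)

print(response.choices[0].message.content)
```

The same analysis-only framing can, in principle, be applied to whole modules rather than single fragments, which is where it starts to help with the kind of brownfield work described above.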

 

This isn’t to contradict or dismiss Shumer’s core argument: something big is indeed happening. It’s more that we need to recognize there remains a significant need for human ingenuity in seeking effective and impactful ways to deploy the technology. 

Security is critical

 

There’s also an important point to be made about security. While AI agents are getting smarter at an exponential rate, they’re doing so with few frameworks and systems for verification and auditing. Compounding this issue is the fact that malicious actors also have access to AI agents, which means attacks are evolving in ways we couldn’t have anticipated just a couple of years ago. 

 

Until we put serious effort into this area, many organizations, particularly those with complex systems and those in regulated industries like finance, healthcare and defense, will be uncomfortable moving forward.

 

The gap between what we can do with AI and our ability to do it securely is a chasm right now. This means expertise in governance and risk is currently extremely valuable, particularly the ability to automate those controls in a way that’s repeatable and ensures trust. And because security is never done, it demands continued learning and proactive defense; AI only underlines that fact.

 

What you should actually do

 

Shumer’s advice is good and worth your time. This certainly isn’t a time for ego and 2026 could well be the most important year of your career. However, there are a few other things I think are important:

 

Learn to orchestrate. Don't just learn to prompt; learn to manage multiple AI agents working together. Learn to verify their outputs and catch their mistakes before they cascade.

 

Become indispensable at the brownfield problems. If your organization has complex legacy systems, regulatory requirements or institutional knowledge that's hard to articulate, lean into that. Become the bridge between what AI can do and what your organization actually needs.

 

Build expertise in AI governance and security. These fields are still only in their infancy, but they're becoming critical. Understanding where change is most important and where risk may be most pronounced, and then acting accordingly, will be (and arguably already is) extremely valuable.

 

Develop your empathy and judgment. These are the skills AI cannot easily replicate. The ability to understand unstated needs, to navigate human politics and emotions, and to make calls when the data is incomplete or contradictory matters more now than ever.

 

Learn to vibe code, but know when not to. Yes, everyone should learn to use AI to build the tools they need quickly. But also learn to recognize when you're dealing with a problem where moving fast and breaking things isn't acceptable. Remember that few problems are greenfield problems.

Humans are extraordinarily good at adaptation, especially when we face reality head-on. We tackle new problems as situations and environments evolve and we’re surprisingly good at creating value in ways we didn't anticipate.
Rachel Laycock
CTO, Thoughtworks

We adapt best when we’re curious

 

Shumer’s essay is spot on: the next few years will be disorienting and will require adaptation. It’s important that industries of all kinds (technology or otherwise) are aware of this reality.

 

However, there are reasons for optimism. Humans are, after all, extraordinarily good at adaptation, especially when we face reality head-on. We discover new problems as situations and environments evolve and we’re also surprisingly good at creating value in ways we didn't anticipate.

 

The key is to prepare properly. We can do that by remaining curious, by balancing openness with scepticism and risk with opportunity, and by learning and collaborating with those around us.
