Brief summary
The Thoughtworks 2026 Looking Glass report was published in January. Designed to give business and technology leaders the tools to better understand and navigate future trends, this edition pays particularly close attention to what organizations need to do to reach a level of AI maturity that effectively unlocks an operational and commercial edge. Taking in everything from AI-assisted software delivery to AI-ready data, it bridges the gaps between what the world is doing today, what will be possible in the months to come and what may be on the horizon in the long term.
To discuss this year's Looking Glass, host Ken Mugrage is joined by Rickey Zachary and Thomas Squeo. Together, Rickey and Thomas provide both a technology and business perspective on the main insights from the report, exploring some of the key throughlines and issues Thoughtworks believes businesses need to contend with. With a complex and rapidly changing industry and economic picture, one thing emerges as critical: being brilliant at the basics.
Read the 2026 Looking Glass.
Ken: Hello, everybody. Welcome to another episode of the Thoughtworks Technology Podcast. Today, we're looking at our Looking Glass report that we published a couple of months ago, and the effects on the real world, our clients, and the folks at Thoughtworks. We're lucky to have a couple of guests with us today that you've heard from in the past. In fact, one of them is one of our new hosts, but they also have exactly the experience we're looking for. First off, if you wouldn't mind introducing yourself, it's Thomas Squeo.
Thomas Squeo: Perfect. Thank you, Ken. Thomas Squeo, I'm the Chief Technology Officer for the Americas here at Thoughtworks. I get to work very closely with Rickey and the rest of the team on our own agentic journey. We've not only been working in the AI space the entire time I've been at the company, but also it's been nice to see the-- I think February 2026 was a pretty significant inflection month for everything we've seen so far.
Ken: Rickey?
Rickey: As Ken mentioned, I'm one of the new hosts on the Thoughtworks Technology Podcast. My day job is working with Thomas on leading both our internal developer platform and platform engineering practices, as well as our external client engagements from a platform engineering perspective. I'm focusing a lot on the most effective ways we build platforms for both ourselves and our clients. I'm working with our AI/works team on how we build our developer platform, and on scaling the use of AI across our client enterprises and our clients' customers. It's very pertinent to look at the Looking Glass and see that there's a lot of platform engineering and platform focus there.
Ken: I invited the two of you specifically because you each look at this from a slightly different direction. Thomas, you talk to enterprise leaders at the CTO level every day from an executive mindset. Rickey, most of your background is in hands-on enterprise modernization, platforms, et cetera. I really thought it would be good to get both impressions, if you will. Thomas, as I mentioned, you talk to enterprise tech leaders every day. We hear about gaps between their ambition and what they're actually seeing. What does that look like? When you're talking to a CTO or CIO, what are the gaps they're worried about? What are you seeing on the ground?
Thomas: The biggest gap that I see for most folks is around AI fluency: having their teams able to interact, engage, and work with these tools, and able to understand where the perimeter really sits, and where guardrails, evaluation frameworks, data labeling, and things like that need to happen in order to bring things forward into production.
It's not just the inner and outer loops: in the context of software engineering, the inner loop being the creation of the software and the outer loop being the running of it, the CI/CD, observability, and so on. We're starting to see this orchestrator role become emergent. That role is very much key to taking a C-suite's ambition to be an AI enterprise and bringing it to a production state. I think that's one of the areas I hear about the most.
Ken: You've also, Thomas, been quoted as saying that organizations need to be brilliant at the basics. What does that mean?
Thomas: I always come back to the core mechanics around software engineering principles, especially for organizations of more than 50 people, really taking a platform approach to what they're doing. Whether that be CI/CD, DevSecOps, software bills of materials, or policy as code. Those things, as good first-principle practices at enterprise scale, are critical for success in an AI-driven landscape.
What ends up happening is that, given the pace of change and the degree of ambiguity, being brilliant at the basics is what lets you thrive inside your operational environment: you have the ability to control for the things that are controllable, and then work with a non-deterministic system in a production environment. That's really what I come back to. One of the reasons Rickey and I work so closely together is that we view the ambition of many of our customers as being enabled by platform.
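The "policy as code" basic Thomas mentions can be as small as encoding deployment rules in a checkable form. A minimal sketch in Python, where the rules, manifest fields, and registry name are all hypothetical rather than any specific tool's schema:

```python
# Minimal policy-as-code sketch: each policy is a function that
# inspects a deployment manifest and returns a violation message or None.

APPROVED_REGISTRIES = ("registry.internal.example.com",)

def image_from_approved_registry(manifest):
    image = manifest.get("image", "")
    if not image.startswith(APPROVED_REGISTRIES):
        return f"image '{image}' is not from an approved registry"
    return None

def resource_limits_present(manifest):
    if "resource_limits" not in manifest:
        return "no resource limits declared"
    return None

POLICIES = [image_from_approved_registry, resource_limits_present]

def evaluate(manifest):
    """Run every policy; return the list of violations (empty means compliant)."""
    return [v for policy in POLICIES if (v := policy(manifest)) is not None]

if __name__ == "__main__":
    bad = {"image": "docker.io/someone/app:latest"}
    good = {"image": "registry.internal.example.com/app:1.2",
            "resource_limits": {"cpu": "500m"}}
    print(evaluate(bad))   # two violations
    print(evaluate(good))  # []
```

Tools like OPA express the same idea in a dedicated policy language; the point is that the rules live in version control and run in the pipeline, not in a wiki.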
Ken: Which is a great segue. Rickey, you've spent years helping people build internal platforms. What does this change about what a platform needs to be?
Rickey: I don't think it changes the core engineering foundations of the platform. We still want to have platform abstractions. I think we still want to have basic platform engineering components around infrastructure as code or CI/CD pipelines. I don't think it changes those core foundational elements. What it does change is the interaction modes through which developers can now consume those platform capabilities.
It's this idea that before, it was going to be the IDE or the CLI or the terminal, and what AI adds is additional interaction modes, different operating modes through which the platform can expose capabilities. Now you can use MCP, you can use an AI agent, you can use all of these new modalities. That's what I think it does change: it changes the interface that developers are going to use to consume platform capabilities. That's the interaction model it changes.
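One way to picture Rickey's point is a single platform capability sitting behind several interaction modes. A sketch under assumed names (a real MCP server would be built with the MCP SDK rather than this hand-rolled tool schema, and `provision_database` is an invented capability):

```python
# One platform capability, exposed through two interaction modes:
# a CLI entry point for humans and a tool description an AI agent can call.

def provision_database(name: str, tier: str = "dev") -> dict:
    """The underlying platform capability (stubbed for illustration)."""
    return {"name": name, "tier": tier, "status": "provisioned"}

# Mode 1: CLI, for developers in a terminal.
def cli(argv):
    name = argv[0]
    tier = argv[1] if len(argv) > 1 else "dev"
    result = provision_database(name, tier)
    return f"{result['name']} ({result['tier']}): {result['status']}"

# Mode 2: a tool schema an agent framework could register, pointing
# at the same capability, so the golden path is identical either way.
PROVISION_TOOL = {
    "name": "provision_database",
    "description": "Provision a managed database through the platform.",
    "parameters": {"name": "string", "tier": "string (default 'dev')"},
    "handler": provision_database,
}

if __name__ == "__main__":
    print(cli(["orders-db", "prod"]))          # orders-db (prod): provisioned
    print(PROVISION_TOOL["handler"]("audit-db"))
```

The design choice is that both modes route through the same function, so guardrails applied there hold regardless of whether a human or an agent is driving.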
Ken: Thomas mentioned a second ago that February of 2026 was a big month for AI. Frankly, some stuff back in December, different models coming out with Opus, and so forth. If the bottleneck isn't the models anymore, if they can write decent code and we know what we have to do in a platform, it doesn't mean we've done it, but we know what we have to do, question to either or both. What is the bottleneck? How do these folks get AI from experiments to production?
Thomas: From my view, it's basically as follows. If you over-rotate and over-optimize one aspect of the value stream, then the other aspects of the value stream start to slow you down. We now see this in other areas, around being able to absorb the change. If you have these agents moving at speed and scale, now we have the opportunity to upskill the rest of the organization as it relates to next steps. When I say next steps, I'm talking about: what does it mean to be product-led? What's the evolution of the product manager's role? What's the evolution of an operator's role maintaining production systems, and so on?
Rickey: I agree with those statements. I think one additional thing that I'm seeing as I'm building more and more of these platforms with clients is that the bottleneck is no longer-- we're reaching a tipping point where it's not the models and those things. I think from a platform perspective, one of the bottlenecks is, we're not seeing the actual platform product thinking that we would expect to see within the platform. I think there's a parallel to when we started to do Agile as a process.
There was this idea that Agile exposed the organizational issues in an organization if you tried to adopt Agile without fixing those issues first. That happened repeatedly as I was growing up in the consulting world from an Agile perspective. I think we're now seeing the same thing in the platform space, where a platform is not just, here's a tool to do infrastructure as code, here's Terraform, or here's GitLab or CircleCI. It's the actual product thinking to interconnect those experiences.
What the addition of the LLMs is adding is, hey, now I can write way more code. Now I can produce way more code as an output, but none of those other parts of the platform are interconnected enough to be able to get that code into production in a consistent and seamless manner. I think as people start making investments in platforms, they still see it as just a collection of tools, not really applying that platform product thinking. I know that's similar to what Thomas said, but that's a tactical example of what I'm seeing as well.
Ken: In the last report, we talked a lot about what it means to rebuild platforms and ecosystems and all of that. There was a whole lens around AI for software delivery, for developer experience (sorry, I'm used to saying DevEx these days), AIOps, federated data platforms, et cetera. Rickey, you've talked a lot about developer experience as a key trend, a golden path, if you will. How does this evolution set the stage for AI-first delivery?
Rickey: I think repeatedly, I've been talking to clients about making the easy way the right way for developers. That's the key unlock for focusing on developer experience. It's the idea that if I create the golden paths, if I have the right developer experience, if I'm focusing in on that, even if we're talking about producing code with AI or we're doing agentic workflows or those things, if I make the easy way the right way, I'm going to have great adoption of the developer platform.
I'm going to have better outcomes in my business platforms, my data platforms, all of those elements. That's the focus: developer experience, operationalized through golden paths, with the outcome being that the right way is the easy way. Those are some of the outcomes we're looking at.
It's easier said than done, because there's a lot of operational pressure and organizational change; there's an operating model component to it, and a change management and adoption component to it, but that's what you want to do to get that outcome: focus on those things. It's less of a tech problem. I know some of the folks from Google say platform engineering is a sociotechnical problem. It's more socio than it is technical. Getting the elements associated with the socio part of the problem right is very important. The technology is fairly easy right now.
Ken: It's interesting that you brought that up because, Thomas, you were interviewed by, I think it was, InformationWeek recently, and made the comment or the observation that AI readiness is a business-aligned transformation, not a technical project. If you think about the culture and organizational aspects, what does that mean?
Thomas: The models are starting to become commensurate with each other. Obviously, Claude's got its strengths, OpenAI's got its strengths, Gemini's got its strengths. What we're seeing is that it's the context that lives above them that is really the opportunity. When I think about enterprises dealing with AI as part of their operating model, it needs to be understood. I come back to AI fluency. If your executive team understands how AI is going to be applied in their functional domains, they're going to have a greater desire to see it incorporated.
Our CMO is one of those people who really took AI on as part of how to accelerate the operating model for that part of the business. That was one of those things I saw as, okay, that's an example of how these things go downrange. If the COO of an organization is really looking at their business process and the art of the possible there, that's another aspect of how it could be incorporated. The thing is, the technology organization that would actually be building, delivering, managing, and scaling that? That's not the hardest part of the problem.
It's actually getting the adoption in these functional areas of the organization. One of the constructs I worked on for a long time was this notion of tiers: an inner loop, where software is developed; a middle tier, where you start to see operational business processes being affected; and an outer tier, where you see things facing stakeholders and customers and so on. That outermost tier of the onion is the riskiest, because it's the area you have the least amount of control over.
When you have an operational executive or somebody who's responsible and accountable for outcomes, they're going to be driving it and be able to understand it. This also brings up another issue, which is this notion of shadow AI and shadow IT, where you're starting to get systems being put into mission-critical environments that, in fact, might not have been vetted from a security dimension or a risk dimension. I think that, largely, many of these things are risk discussions. If you think about the opportunity to do things like legacy modernization, you start to approach that from risk as opposed to technical feasibility.
Ken: We talk a lot about platforms. The other angle about this, of course, is the data. We can change the code on top, but the data is really usually where a lot of the value is. How does that need to evolve? We had data lakes for those of us with gray hair, moving along to product-centric. How does the data platform evolve with this, either or both?
Rickey: I think about it from the bottom of the infrastructure up. In an AI world, I think that we are now going to be even more heterogeneous about the infrastructure that we need to deploy in the runtime to support the different agents. I think that now there's this mixed environment where, on the Kubernetes side, I may be deploying a lot of GPU-based clusters, and I need to be able to maintain that and utilize that for different ETL packages for different AI agents that are running purely on the data side.
Now we're moving from what we had, where there were a lot of data lakes and data meshes, to a world where some of those endpoints are going to be knowledge graphs or GraphRAG stores, all of these very AI-enabling data endpoints and datastores.
Now, I think what we are seeing is developer and business platforms supporting these mixed, very heterogeneous data infrastructure pieces, and then exposing them as building blocks for AI agents or for business outcomes. Infrastructure that was usually dedicated to pure data applications is converging with normal AI agent and engineering applications, particularly on the Kubernetes side, and now platform engineering teams are converging too and have to manage a lot more infrastructure.
From a bottom-to-the-top or bottom-up standpoint, it makes it very, very interesting for new platform engineering teams that were normally only managing just compute infrastructure or basic storage infrastructure to now having to start to manage these more exotic data storage solutions to support the AI agents.
Thomas: It's around this notion of context. If you think about the swampy nature of data lakes that occurred over time, we're going to have different levels of readiness for what is going to be powered by AI. I think that's not only an important but a critical aspect of whether or not you're going to be able to bring a system forward. One of the greatest inhibitors we've seen in going to production is data readiness.
People are very comfortable with it in a POC environment, but then all of a sudden they realize that the regulatory indemnity that goes along with putting a system in production is one that can actually introduce a lot of risk to the enterprise. Those are things that are going to be critical for enterprises to understand. I think that it's also important to understand that it's unlikely that your entire data estate is going to be at the same level of readiness, so you have a medallion architecture for your data readiness as well.
I think those are going to be portions of how you'd approach that. Again, it depends on the blast radius of the problem set you're going to solve for. Context is king. We've seen a lot of things happen for engineering teams and product development teams internally that far outstrip what you would be able to put in front of a customer in a regulated environment. We've all heard the funny horror stories; they're only funny if they didn't happen to you. That's really what I think about the data readiness problem.
Ken: You've both talked about the ROI of all this. I was talking to a project lead the other day that on their project, they're spending half a million dollars a year just on LLM access, on tokens and whatever. Then, of course, there's other costs. Rickey, you've talked about the DORA and SPACE metrics. Thomas, you've talked about just the need to define and track AI ROI. How do people measure value? How do they know this is working or not working?
Rickey: I'm going to give an example of what not to do as a good example of what to do. I've been talking to a lot of clients about this, and I advise them not to think about the traditional counting metrics, like hours spent doing a particular task or the number of lines of code produced by an LLM. Those types of metrics are traps, because they make the conversation purely about productivity and not about the reduction of friction, or the use of AI in maturing a particular practice.
A very tactical example about that is a testing cycle. Most organizations have a very manual testing cycle that's supported by a bunch of people going and clicking buttons on a UI to test or verify something. I can reduce that by a certain amount of time on each individual person's activities, or I can think about what is the actual toil in the system that's there. The toil in the system may be, "Well, I've got to manage a spreadsheet to manually test these things." Great.
The entry point there would be, "Okay, we want to extract that toil, we want to increase automation." We're not necessarily tracking how long an activity took, but we're tracking an AI metric that's a part of that. I think that in that particular example, the automation uplift would be the metric that I'm tracking.
Number of tests automated, number of tests using AI elements, those are more important elements of it because you can use those as leading indicators of is the AI experiment or my capability, is it actually providing value? Versus we're going to introduce AI and see how many hours we save. That could be because of other mechanisms. It doesn't give you an accurate reflection. We are moving towards these non-productivity leading measures so that we can actually accurately measure the impact of AI as it's being applied to our clients.
I still think DORA and SPACE are valuable metrics as part of that conversation. We're also doing a lot of research on what the next set of metrics will be for an AI-enabled or AI-native world: something similar to DORA and SPACE for that new world. We're thinking about token costs. How do they actually play into the day-to-day lives of developers? Is there something around token costs and their effectiveness? There are a bunch of different elements there, and we're really exploring them. I don't think there's one right answer yet. We're taking a very deliberate, layered approach to it from a developer and engineering perspective.
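Rickey's "automation uplift" example can be made concrete with a couple of ratios tracked over time. A sketch, where the field names and snapshot shape are illustrative rather than a standard metric definition:

```python
# Sketch of tracking automation uplift as a leading indicator:
# compare automation coverage before and after an AI intervention,
# instead of counting hours saved or lines of code produced.

def automation_coverage(automated: int, total: int) -> float:
    """Share of the test suite that runs without manual clicking."""
    return automated / total if total else 0.0

def automation_uplift(before: dict, after: dict) -> float:
    """Change in coverage between two snapshots of the suite."""
    return (automation_coverage(after["automated"], after["total"])
            - automation_coverage(before["automated"], before["total"]))

if __name__ == "__main__":
    before = {"automated": 120, "total": 400}  # 30% automated, the rest is toil
    after = {"automated": 280, "total": 400}   # after AI-assisted test generation
    print(f"uplift: {automation_uplift(before, after):.0%}")  # uplift: 40%
```

The value is in the trend line, not any single number: if uplift stalls while AI spend grows, that is the signal to investigate.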
Thomas: I'm just on the ROI side. I think it depends on what area you're looking at. If you think about the Jevons paradox, you're going to be filling in the use of that time. If you look at engineering as a manufacturing process with a certain number of lines of code per engineer, you're already looking at the wrong numbers. To Rickey's point about DORA and SPACE, those are critical elements for us to see that evolution. We see, for example, the DX folks evolving their own metrics; they're putting out their own state of AI report, very much looking at these kinds of elements.
When you think about the ROI for a call center, where you're introducing a call center agent, you have a completely different set of ROI metrics that are going to be satisfied. If you think about NPS scores and things like that, that's when you're facing a customer and the ability to have an avatar-based agent interacting with a human-based agent, seeing the controls and how you deal with those handoffs. You could be looking at that and how you deal with managed service L1, L2, L3.
All of those elements have different ROI measures than engineering. Engineering is going to very much be looking at everything from what's your stability, scalability, what's your security dimensions, whether or not you're introducing risk, whether or not you're able to go fast and deliver features that are delighting your customers or your stakeholders. Then are you doing that in an efficient fashion? I think that what we're seeing from an ROI standpoint is when you look at token costs, that's a new gravity well that hasn't existed before.
We view AI ops as part of FinOps, and that needs to be a first-class consideration in that dimension. I would argue that in a well-formed environment where you're actually building out your engineering teams to take advantage of this, the advantage goes far beyond just ROI, and it can't just be an efficiency play. We've had conversations, for example, with heads of tech and other CTOs, specifically around how that measure only makes sense for an engineering team if you've locked yourself in amber and are no longer evolving with where the market goes.
As your competitors work in this space and take advantage of these tools and techniques, they are going to be the thing that takes your ROI and eats it; their ROI is your margin in that scenario. That's the operating environment for so many of our customers: legacy companies with 100-plus years of life in their business, competing with an emergent AI-native startup that has come in unencumbered by all of this infrastructure. One has a huge install base of customers; the other has the agility to move nimbly inside a market.
Those are different considerations when it comes to ROI. If I was just going to say, "Hey, is your token cost high? Are you getting the value out of that?", I think we need to focus on the business outcome being driven from it rather than just a raw cost metric. If you think about Thoughtworks as an organization, when we look at what our token cost is, I would argue that's a false measure for understanding where the value actually comes from. Again, if your issue is token cost at this point, you're probably focused on the wrong aspect.
Ken: You're talking about pace of change. Two of the other lenses in Looking Glass: lens 2 was about agentic workflows and embedded governance and that sort of thing, and lens 5 was about responsible foundations, with human oversight and computational governance. Since that was published in January, we now have OpenClaw and lots of other things out there that have really made a splash. Agentic is a real thing now for sure; obviously no one's going to argue with that. Thomas, you've talked about the security implications of autonomous AI agents, that traditional security architectures fail when systems are making their own decisions. What does governance look like these days?
Thomas: Again, I'll come back to being brilliant at the basics as a key notion of being able to absorb risk. Understanding the risk you're absorbing is a key aspect of that. Google has published new agentic AI security standards. MITRE ATLAS has done that as well. MIT has done that. There is a growing body of understanding about where these dimensions are. I don't think one framework is going to be the only one you use. Whether ISO or other emergent standards are going to be part of this, what we need to do is be able to say, "Okay, security by design has got to be a component of that."
In a previous article that I either contributed to or wrote, I can't remember which, there was this notion of understanding when a guardrail strike happens so you can actually intervene and view it. It's not just an agent hitting the guardrail over and over again while you have no governance process to absorb or inspect that change. I view that notion of an agent as a product. It could be a product that has many agents in it delivering a capability.
The software design paradigm of a program, or an agent, that does one thing very well: give it that bounded scope so you can control for it from a risk perspective. That then gives you the ability to manage it as a team. That notion of continuous beta, that the only time something's out of beta is when it gets pulled out of production, is something I think needs to be considered a core software engineering mechanic as we go forward here. When you talk about OpenClaw and MoltBot, all these agentic systems that are going to be working in their own way, that's here; it's already here today.
We need to be able to understand that our governance frameworks need to be able to evolve with that, and our governance processes need to be able to evolve around that. Some of our customers are like, "Hey, we are only going to make AI available for our engineering teams, but not the larger organization." I think that that's a mistake because the larger organization, understanding that context, understands different risk profiles than just, say, a purely engineering perspective. That whole 360 view in that regard.
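Thomas's guardrail-strike idea, noticing repeated strikes and pulling in a human rather than letting an agent hammer the same guardrail unobserved, could be sketched as a simple counter with an escalation threshold. All names and the threshold here are illustrative:

```python
# Sketch: record guardrail strikes per agent and escalate for human
# review once an agent keeps hitting the same guardrail.

from collections import Counter

class GuardrailMonitor:
    def __init__(self, escalation_threshold: int = 3):
        self.threshold = escalation_threshold
        self.strikes = Counter()   # (agent_id, guardrail) -> strike count
        self.escalations = []      # what a governance process would review

    def record_strike(self, agent_id: str, guardrail: str) -> bool:
        """Log a strike; return True if this strike triggers escalation."""
        key = (agent_id, guardrail)
        self.strikes[key] += 1
        if self.strikes[key] == self.threshold:
            self.escalations.append(key)
            return True
        return False

if __name__ == "__main__":
    monitor = GuardrailMonitor(escalation_threshold=3)
    for _ in range(3):
        escalated = monitor.record_strike("billing-agent", "pii-egress")
    print(escalated)            # True: the third strike escalates
    print(monitor.escalations)  # [('billing-agent', 'pii-egress')]
```

In a real system the escalation would open a ticket or page an operator; the mechanic, counting strikes instead of only blocking them, is the governance point.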
Rickey: From a security standpoint, I agree. I think there's a line between what's realistic from a security and boundary perspective and what is hype, and that really comes down to what the agents can do and what we want them to do. That constrains the security boundaries we should have. I think about traditional security contexts, like NIST 800-53 or SOX compliance: how am I building AI into those very highly regulated environments?
I think it goes back to what you said, Thomas, which is getting the basics right: basic data protection, basic supply chain hygiene, basic AI workflow hygiene. Regardless of whether you're going to be giving models to 90% of your developers, if you've got the basics done correctly, that covers both AI and non-AI use cases. If you're getting your supply chain right, if you've got CI/CD right so you have a single point of entry for all of those changes, then you can layer on the different security concerns and your risk tolerance, so that you can take advantage of AI in a very meaningful way that doesn't open you up to additional attack and threat vectors.
The basics around risk modeling, getting the basics on the engineering side right, having the right data protection strategies at a tactical level: those things make it easy to start using AI. Where we see clients having breaches or vulnerabilities is when they weren't doing those basics right. I agree with your context there on that premise.
Thomas: The threat vectors are real, they're emerging, they're going to be constant, and they're exploding. When we think about our own CISO organization and work with our customers' CISO organizations, things like your DLP, your model management, your cloud architectures, your architectural standards, and so on: not having those as communicated practices is not a viable option for an enterprise. Those are critical as we carry these things forward.
Rickey: To reiterate that with a tactical example, whenever I'm seeing AI writing code or a new AI breach, it's very rarely a new exotic technique. It's, "Hey, the AI wrote some API keys in a GitHub repository, and we weren't doing any checks in our CI pipeline for API keys or secrets management." "Oh, it used a known CVE as an attack vector, and it introduced that into a supply chain, and we built that code, and now it's in production."
I think that there are some exotic attack vectors as a part of AI, but even things like the MCP call chains, it's, "Yes, it was a basic man-in-the-middle attack that we didn't actually have basic security hygiene over." I think getting those things correct really reduces the attack vector and the security concerns that you have, and then having basic risk management as a part of that process makes it really easy to reduce the attack vector, have very good security involved as a part of that, and then to effectively use AI where you really need to use it.
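Rickey's example of AI-written API keys slipping into a repository is exactly the kind of basic CI check that catches it. A deliberately simplified sketch; a real pipeline would use a dedicated secrets scanner, and these two regexes are illustrative, not exhaustive:

```python
# Sketch of a basic CI secrets check: scan changed files for
# patterns that look like credentials and fail the build if any are found.

import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key id
    re.compile(r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][^'\"]{16,}['\"]"),  # generic key assignment
]

def scan_text(path: str, text: str) -> list:
    """Return (path, line_number) for every line matching a secret pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((path, lineno))
    return hits

if __name__ == "__main__":
    changed = 'config = {\n    "api_key": "sk_live_0123456789abcdef0123"\n}\n'
    findings = scan_text("settings.py", changed)
    print(findings)  # [('settings.py', 2)]
    # In CI, a non-empty findings list would fail the build before merge.
```

The point is where it runs: at the single point of entry Rickey describes, so AI-generated and human-written changes get the same hygiene.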
Thomas: When we think about this landscape, we have AI models versus applications consuming inference. There is this notion of model poisoning. That's one of those things you need to understand: if somebody is going to inject new rules into your agent, those are areas that need to be considered and captured. Good practices around policy as code, software bills of materials, and secure by design matter that much more, because it's going to be one of these things that you built, left out there for a long time, and then all of a sudden, now it's-- The benefit previously was that agents had a small memory context.
Now, as you're starting to see that memory context get larger, and starting to see agents do things for longer periods of time, things that are more sophisticated or that interact with multiple agents, that exponentially increases your threat vector. We were recently at an event where we were talking about how you get agents to control their context. One of the things we thought was that this is one of the really appropriate uses of ledger technologies: agents writing to ledgers to hold their context.
When an agent gets removed from the environment and reinstantiated in an ephemeral manner, it can get that context back and understand what it actually means to run. You could make a rule that says, "Hey, you have to establish new context each time the agent is reinstantiated." These are all ways in which the controls can be done, and we learn these controls through other techniques and other practices.
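Under the assumption Thomas describes, an append-only log that an ephemeral agent replays on reinstantiation, the ledger idea might look like this sketch. A production version would use durable, tamper-evident storage; this one is in-memory with a simple hash chain:

```python
# Sketch: an append-only context ledger. An ephemeral agent writes
# entries as it works; when it is torn down and reinstantiated,
# the new instance replays the ledger to re-establish its context.

import hashlib
import json

class ContextLedger:
    def __init__(self):
        self.entries = []

    def append(self, agent_id: str, event: dict):
        """Append-only: each entry records a hash of the previous one."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"agent": agent_id, "event": event, "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def replay(self, agent_id: str) -> list:
        """What a freshly instantiated agent reads back as its context."""
        return [e["event"] for e in self.entries if e["agent"] == agent_id]

if __name__ == "__main__":
    ledger = ContextLedger()
    ledger.append("migration-agent", {"step": "schema analyzed"})
    ledger.append("migration-agent", {"step": "plan approved by human"})
    # The agent is removed, then reinstantiated; it restores context:
    print(ledger.replay("migration-agent"))
```

The hash chain is what makes the ledger inspectable: any tampering with an earlier entry breaks the chain, which is the control property being borrowed from ledger technologies.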
I think what it comes back to is this: all three of us were recently at an event where we saw that the engineering practices evangelized since object orientation, over the last 30 years, have become that much more important. The old ways still matter.
Ken: Looking Glass describes a shift where the human role goes from performing tasks to overseeing the behavior of intelligent systems. Rickey, you've talked about developers becoming the curators of intelligence. I'm curious again to both of you, how does that work? How do we get people that instead of thinking about their lines of code, they have to curate the intelligence of the system?
Rickey: Anecdotally, one of the things that I am really excited about that AI is doing within this particular space is that it's making software development look more like software engineering. What I mean by that is that now the developer is in the seat where the LLM will do anything. It will do anything that we tell it to do. It's now our job to impart the context of what we do and constrain the LLM. Now there's this idea about constraint engineering as the key practice in utilizing LLMs as a developer.
As I'm helping to build out AI/works platform pieces, as I'm using AI day-to-day, I really focus on my job as a software developer being constraining the LLM: understanding the nuanced context of the problem I want to solve, the architectural elements of it, all of the information that I need to impart to the LLM, and constraining that through prompt loops, through detailed specifications, through all of the metadata the LLM is going to need, constraining the problem enough that it's going to do exactly, or very close to exactly, what I need it to do. That, I think, is something that's making software engineering more exciting for me.
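The constraint-engineering loop Rickey describes can be made concrete with a small sketch. This is a hypothetical helper, not part of AI/works: the function name, section headings, and example task are all assumptions, but they show the shape of the practice, packing spec, hard constraints, and architectural metadata around the task before the LLM ever sees it.

```python
def build_constrained_prompt(task, spec, constraints, metadata):
    """Assemble a prompt that boxes the LLM in with an explicit spec,
    non-negotiable constraints, and the context metadata it needs."""
    sections = [
        f"## Task\n{task}",
        "## Specification\n" + "\n".join(f"- {s}" for s in spec),
        "## Constraints (must not be violated)\n"
        + "\n".join(f"- {c}" for c in constraints),
        "## Context metadata\n"
        + "\n".join(f"- {k}: {v}" for k, v in metadata.items()),
    ]
    return "\n\n".join(sections)


prompt = build_constrained_prompt(
    task="Add a retry wrapper around the payments client",
    spec=["Retry at most 3 times", "Back off exponentially starting at 200ms"],
    constraints=["Do not change the public interface", "No new dependencies"],
    metadata={"language": "Python", "module": "payments/client.py"},
)
print(prompt)
```

The point is not the string formatting; it is that the spec and constraints are first-class artifacts the developer authors and iterates on, which is where the engineering judgment now lives.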
I think it's going to put a lot of the burden on individual developers to understand the problem more, from a prompting standpoint, from writing good prompts, to understanding what I actually want the LLM to do. That's going to take a lot more knowledge work. Not that 90% of developers were doing this, but going to a site like Stack Overflow or googling "How do I write a basic loop?" or "How do I write a very specific algorithm?" and then copying and pasting that code, that was toil. This makes the work more engineering than toil. I think that for true software engineers, practitioners of the craft, that's a much more interesting outcome than, "All right, I'm going to go write a bunch of Python code to go do something."
Thomas: It is not uncommon for me to have a conversation where somebody's like, "Oh, I have something running in the background that's figuring out this other problem." They're orchestrating it today. It's like that could be in Cursor, that could be Claude, Gemini, OpenAI. It doesn't really matter which one they're actually using. The thing is that what we're starting to see is this notion of people orchestrating actions that they're now going to walk away, go do another thing.
We'll be in a meeting that's completely unrelated to that. They're like, "Oh, I'm going to check and see if my code is actually updated for X, Y, Z." I think that that's an example where you're now starting to see people overseeing action as opposed to necessarily being the individual contributor of it. It's not uncommon for me to have a conversation with somebody that's wanting to modernize the system where none of the SMEs that built that system are still in the organization.
That context around who wrote the code and why they wrote it a certain way has been lost in the mists of time in the enterprise. Now we're having to modernize that and develop that context over a set of components. Very rarely are we doing that in the old line-by-line manner. What we're doing is understanding the context of that system, understanding the capabilities it delivers, and understanding what it means to be behaviorally equivalent to the system that exists, a system the client might not understand, or might only understand in terms of what it's supposed to do for the enterprise.
Then, as we get to that equivalent state, we have the ability to reimagine those things. What you start to see is a team of engineers orchestrating step-function capabilities that didn't exist before. The reason I said February was a pretty big month for AI is that, all of a sudden, we saw context windows increase and the ability to do this orchestration become significantly more sophisticated.
We saw the notion of Moltbot and OpenClaw and agentic networks starting to emerge in ways that we envisioned from swarming technologies and nanotechnologies in the past. The thing is that now we're actually seeing it realized in systems that you can harness from an engineering standpoint. I would say don't sidecar OpenClaw into your enterprise, but you should definitely experiment with it, air-gapped or behind some kind of control plane where you can do this safely. I think there are very real benefits to understanding how these systems work, and I think they're going to be part of our operating norm in the future.
If you think about serverless architectures or functions as a service, a year ago I would joke that functions as a service and RPA were our first steps toward understanding what agents would do. Then we talked about single-agent systems, then agentic systems, and now we're talking about swarm-based agentic systems, where the agents might be writing their own agents to do sub-functions, and so on. There's even a rent-a-human site that agents can use to get people to do things for them now. What we're seeing is that this is now an operating reality as opposed to science fiction.
Ken: I think we'll end with the Monday morning question. The actionable takeaway, and I'll start with you, Rickey. Someone's leading a platform engineering team. What should they be investing in today to get ready for what's coming? What's their action Monday morning?
Rickey: Their action Monday morning is three things. One, the most critical thing that I'm seeing cause AI investment in platform engineering to fall down, time and time again, is very, very mundane: not having rock-solid CI/CD. It seems like it would be simple, but most of the organizations I'm working with are struggling with consistent CI/CD. CI/CD is the brains of the platform engineering effort on the developer platform side. What we're seeing is that a lack of consistent CI/CD means I can't apply agentic workflows as well.
I don't have consistent developer-centric golden paths. I can't test as well. I can't shift capabilities and complexity down into the platform. If I were running platform engineering teams, for clients or internally, I would do a very quick assessment of the state of my CI/CD. Are developers complaining about super-long build times? Does it support release confidence? Can I actually do release management as part of it? If not, I may get a mandate from my CTO that says, "Hey, are we doing AI stuff?"
It's going to fall over eventually. I think that's one of the things I would do. The second thing I would do is take stock of whether we're actually doing platform product thinking. Platform product thinking in particular provides the biggest uplift in going from AI as just a tools investment to an actual enterprise-wide differentiator on the engineering side. A lot of organizations start this AI journey with, "Hey, we did a bunch of GitHub Copilot experiments," or "We bought a bunch of Cursor licenses," or "We've given everybody Anthropic keys," and then expect the creation of these 10X teams.
They're not really thinking about what is going to be the developer interface for those. What are some of the key friction points that I'm not going to be able to solve just by throwing tools at the problem? Really, having a robust platform product thinking capability or platform product engineers or platform product managers as a part of your organization and treating that like it's a first-class capability, I think that that's extremely important.
Then the last thing I would do on that Monday morning is making sure I have the right runtime to match the business capabilities that are there. There's an extreme amount of focus across all the cloud providers on Kubernetes and all of these exotic new runtimes, but make sure you've got those runtimes operating so that you can set up some of the things Thomas talked about earlier on the AIOps side and on the security side. Having the right runtime components is extremely important.
I would have a catalog of those runtime components so that I can ensure composability for not only my traditional engineering workloads, but my AI agentic workloads as well. I would do all of those things throughout, and if there were one last, fourth thing, it would be making sure that I'm measuring the right things. Those are the big tactical focus areas where I see more and more clients falling down.
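Rickey's first Monday-morning check, assessing CI/CD health, can be sketched as a simple rubric. The thresholds and metric names below are illustrative assumptions for the example, not Thoughtworks guidance; the point is to turn "is our CI/CD rock-solid?" into a concrete, repeatable check.

```python
def assess_cicd(metrics):
    """Flag the pipeline problems that block agentic workflows.
    All thresholds here are illustrative, not prescriptive."""
    findings = []
    if metrics.get("median_build_minutes", 0) > 15:
        findings.append("builds too slow for fast agentic feedback loops")
    if metrics.get("pipeline_success_rate", 1.0) < 0.90:
        findings.append("flaky or failing pipelines undermine release confidence")
    if not metrics.get("automated_release_management", False):
        findings.append("no automated release management")
    if metrics.get("golden_path_coverage", 0) < 0.8:
        findings.append("golden paths not consistently adopted by teams")
    return findings


# A pipeline in the shape Rickey describes failing:
report = assess_cicd({
    "median_build_minutes": 42,
    "pipeline_success_rate": 0.78,
    "automated_release_management": False,
    "golden_path_coverage": 0.5,
})
print(len(report))  # 4 findings
```

Running this per team makes the "it's going to fall over eventually" risk visible before the AI mandate arrives, rather than after.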
Ken: Thomas, I'll give you a little bit bigger window, because if we're talking about a CTO or a VP of engineering-- By the way, feel free to respond to anything Rickey said as well. If we think about 2026, because CTOs are more forward-looking in general, what's the most important thing for them to get right in this year?
Thomas: I think one aspect is that a CTO can obviously think in their own domain, but the ideal scenario is that they have a cross-domain use case affecting one of their stakeholders, an operations officer, a marketing officer, and so on. Get to a measurable 12-to-16-week pilot, and actually understand what that means from a metrics standpoint and what your controls are: a contract-first data product, to touch on the earlier point, with this notion of policy as code baked into the system.
Then we can look at things like what it means to have automated policy enforcement, and what scaling they want from this work. What really comes out of it is reusing your quality standards, your way of managing and governing a program like this, and ultimately the evidence-based metrics that let you say, "All right, I now know what I'm going to put into an environment." If the CTO is looking only at their own domain, then they have the ability to control what their engineering teams are doing, what their product-led organization is doing, and so on.
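"Policy as code" and "automated policy enforcement," mentioned by both guests, can be illustrated with a toy gate. Everything here is a made-up example (the decorator, the two sample policies, the action shape); the real pattern is simply that policies live in the repository as reviewed, versioned code, and every proposed agent action is checked against all of them before it runs.

```python
# Policies are plain functions, versioned and reviewed like any other code.
# Each returns a violation message, or None if the action is allowed.
POLICIES = []


def policy(fn):
    POLICIES.append(fn)
    return fn


@policy
def no_external_network(action):
    if action.get("network") == "external":
        return "agents may not make external network calls"


@policy
def prod_changes_need_review(action):
    if action.get("target") == "prod" and not action.get("human_reviewed"):
        return "production changes require human review"


def enforce(action):
    """Return all policy violations for a proposed agent action."""
    return [v for p in POLICIES if (v := p(action)) is not None]


violations = enforce({"network": "external", "target": "prod"})
print(violations)
```

In practice this role is usually filled by a dedicated engine (Open Policy Agent is a common choice) rather than hand-rolled functions, but the enforcement point, every action passing through `enforce` before execution, is the part that matters.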
Depending on their context, that might well be the appropriate way. I think the biggest unlock comes as soon as you cross out of the CTO-only domain and start working with other stakeholders across the business, because that requires a certain fluency or literacy around AI and its capabilities. That means the CTO now needs to become an educator for the enterprise, which is one part of the role. If they don't have an opinion about AI, that's likely a perishable condition for their career.
Ken: On that note, that is a perfect close. I want to thank Rickey and Thomas very much for their time. I really appreciate the vision you give when you're right there talking to the clients and out at the events and so forth. Thank you very much.
Thomas: Perfect. Thank you.
Rickey: Thank you.