Brief summary
Join Thoughtworks CTO Rachel Laycock for a candid look at what AI-first software delivery really means, beyond the buzzwords. In this episode, she discusses how this emerging technology is reshaping software development life cycles, and how it can be used to tackle the most complex modernization challenges. If you’re navigating the fast-changing landscape of tech strategy, this one’s for you.
Episode highlights
AI-first software delivery emphasizes a shift in mindset where AI tools are leveraged from the start of projects, not as an afterthought. This approach requires rethinking roles and workflows across the software delivery lifecycle.
Organizations need a structured plan for experimentation, including clear guardrails around security, compliance, and data privacy, enabling teams to explore AI tools without creating undue risk or wasting resources.
The fast-paced evolution of AI tools necessitates a shift in procurement strategies, favoring short-term contracts to allow agile adoption and iteration as tools improve and new solutions emerge.
While faster software delivery grabs headlines, tackling long-standing challenges like technical debt and legacy modernization is where AI can bring truly transformative value.
AI-powered tools are making previously impossible modernization projects more feasible, enabling reverse engineering, streamlining workflows, and reducing risk and costs for large-scale legacy systems.
With AI-generated code becoming more prevalent, testing, guardrails, and verification processes gain heightened importance to ensure quality and prevent the accumulation of technical debt.
Long-term, rigid strategies are no longer viable. Leaders must adopt iterative, learning-focused approaches to strategy, continuously adapting based on experiments, industry changes, and new tools.
Amid overwhelming possibilities, leaders should rigorously prioritize fewer, high-impact initiatives, ensuring focus and alignment to deliver meaningful outcomes while making space for AI experimentation.
Organizations must integrate experimentation and learning cycles with ongoing execution to stay competitive. Leadership buy-in and flexibility are both critical.
Transcript
Kimberly Boyd: Welcome to Pragmatism in Practice, a podcast from Thoughtworks, where we share stories of practical approaches to becoming a modern digital business. I'm Kimberly Boyd, and I'm joined by Rachel Laycock, Chief Technology Officer at Thoughtworks. In our edition of Perspectives, we explored what it means to take an AI-first approach to software delivery. It's a topic generating a lot of buzz, but also a lot of confusion.
In this episode, we're going beyond the headlines to dig into what really matters. We'll talk about what AI-first means in practice, where organizations are seeing impact beyond just coding assistants, and what leaders need to do now to unlock the next wave of AI-enabled software development. Rachel, welcome back to Pragmatism in Practice. Nice to have you on the pod again. It's been a little while.
I think last time we were talking tech debt, and now we're talking about everyone's favorite topic, AI, and specifically AI-first software delivery and development. Maybe to kick things off, you could refresh listeners' memory and introduce yourself, and tell us a bit about your background and your role as CTO at Thoughtworks.
Rachel Laycock: Yes, Kim. Thanks for having me. I may still talk about tech debt today. We'll see. Yes, I'm the Global Chief Technology Officer for Thoughtworks. I have been at Thoughtworks for, I believe, 15 years in many technical leadership roles. I've been in the CTO role for over two years now. Prior to this, I ran the modernization platforms and cloud service line, which is probably why I was talking about tech debt last time.
Kimberly: Like you said, it might come up again. AI doesn't mean tech debt will no longer exist.
Rachel: In fact, it may be the opposite.
Kimberly: [laughs] Exactly. Exactly. We'll dig into the topic of the day. How do you define AI-first software delivery, and how do you think it differs from traditional software engineering approaches?
Rachel: It's a great question, Kim, because we had a lot of back and forth on what we wanted to call the AI version of software delivery, even within Thoughtworks. We debated words like "assisted" and "enabled." What I really wanted was for people to have that mindset shift of thinking about how to use an AI tool or a model right from first principles. Instead of thinking about it as adapting your current job, how does it change the role that you do? How does it change the whole software delivery lifecycle process?
Not just coding. There's a lot of focus on coding, but there are tools available for modernization too, and for what your business analysts or user experience designers might do, for all different kinds of roles. I really wanted everybody to start leveraging these tools and not be afraid of them, so they could use them in anger and see what the tools do better, where they help you work faster or be more productive or more effective, and where they're maybe not that great. That helps people start to see how their roles are going to change within the creation of software, and also within modernizing software and operating software.
We actually did this when we first started doing mobile apps years ago. I can't remember when that was, but we went in with this internal campaign of mobile first to get people to think about not just how do I take this web app and turn it into a mobile app, but how do I think about the features of mobile right from the get-go, and how I might design something differently. I wanted that same mindset with generative AI and large language models and the tools that were coming out around that. I wanted people thinking right from the start, how do I use this in my workflow, how does it change my workflow?
Kimberly: Just picking up on that, the word I heard you say a couple of times when you use the terminology AI-first is mindset. It really is so much about the mindset, because it's not just getting people to develop, build, and deliver software differently, it's getting them to think about it differently. To me, when you say "blank first", fill in the word, it means that's your first step. You've got to think about it at the beginning, not as an afterthought. That's more a change in behavior and change management than anything else. Makes a lot of sense for that approach.
Rachel: Yes, because people need to change their behaviors, and until they use these tools, they're not going to know what they do well and what they don't do well. I even just think about my role when I sit down to think about technology strategy or insights that we gather in our team. I realized I didn't really talk about what my job is at Thoughtworks, but I drive the technology strategy, which obviously, for the last couple of years, has been AI-focused. It's very focused on AI-first software delivery, but it's also on AI-accelerated modernization and agentic architecture.
The future of software is another area that we're focusing on. My team also drives a lot of what we call thought leadership: the books and articles that we put out, and podcasts like this one. What are the things we should be talking about so that we can continue to put out a voice in the industry that's pragmatic and built on real-world experience? We try really hard not to just jump on the bandwagon and go along with the hype, but to look at what's real within that. The only way to look at what's real is to get people to start using the technology as quickly as possible, have those early adopters provide early insights, and then obviously go through the adoption curve of getting more and more people bought into it.
A big part of my role is getting us from zero to one, getting us to almost that early majority phase, and then moving on to the next piece. Yes, the mindset is so important. We're talking about a change, and changing hearts and minds is not easy. Certainly, when we put out the phrase AI-first, there was a lot of reaction to it, where people were like, "No, the human in the loop is still so important." I'm like, "Absolutely. That's not what I'm saying. I just want people to start using it so that we can see where the human needs to be in the loop and where we can let the agents potentially do some of the work for us."
Kimberly: The nirvana future of agents. Maybe thinking about timeline: you were talking about how a big part of your job is taking technologies from zero to one, not only for Thoughtworks, but shaping that thinking for the broader tech community and our clients' organizations. Where do you see us right now on that AI-first software engineering adoption curve? How far along are we?
Rachel: I think as a consultancy, you have to pride yourself on being a step ahead of your clients. Otherwise, how can you provide useful advice? So I think at Thoughtworks, what I'm seeing now is we're probably in that early majority phase with some of the late majority starting to pay more attention. I've heard of some very tenured, well-known Thoughtworkers, I won't name any names, who have started to use these tools and started to go, "Okay, I see what you're getting at here. I see what this can do."
I would say clients are at different phases. Some of them were very quick to start with GitHub Copilot, which, interestingly, based on the research that we did internally, became a detractor. There was so much hype around Copilot, around this 50% productivity gain, which turned out to be around 10%, and there was so little acceptance of the code suggestions that people just went, "Oh, it's loads of hype." Then, when the second-generation tools came out, like Windsurf, Claude, and Cursor, people were like, "Oh, it's the same."
They're actually really different, and they're much more advanced. Getting that adoption up has helped us get past just the early adopters and get some of that early majority to buy into what these tools can do and how the life cycle of software is going to change. We're also working on that with our clients. For clients that have rolled out GitHub Copilot, can we help them identify opportunities to use some of the next-generation tools so that they can see that bigger impact and potentially take a bigger step forward?
We're also helping clients with the adoption of other tools in the design space, in the infrastructure space, you name it. We're looking across the whole life cycle. And this is something our clients have had a similar experience with: we really messed up our procurement department, who were used to "You want to sign up for this tool for, what, a year?" It's like, "No, just a month."
Kimberly: They need to change terms. This is a fast-moving space. [laughs]
Rachel: Our internal systems weren't really set up for that level of change of the landscape of the tools that we would use. I think our clients face similar challenges. We're not used to having to adopt technology this quickly and adapt to it changing so quickly, and very few organizations are set up to do it like that, especially in the enterprise. It does put a strain or at least require an adaptation to how you procure software while you're in this testing and learning phase, because you don't necessarily want to sign up for something for a year if, within that year, three more tools--
Kimberly: Yes, it's going to be obsolete in a month or two. Yes.
Kimberly: Still a phase of a lot of experimentation, it sounds like. I want to dig in there a little bit. You talked about there needs to be a big shift in procurement just because of the fast-moving space of all these tools. What else have you seen when it comes to driving more full-scale adoption of AI-first software engineering? What are the barriers, and what are some of the basics that need to be in place for Thoughtworks, as well as other organizations, to be able to pursue it?
Rachel: You have to have the basics around security and compliance. Very quickly, you need to engage with those parts of your organization and make sure that you're okay with how the data is being used, if the data is being used, and what you're signing up for. That was a really important one. I also mentioned procurement. To me, it's like, what is your plan around experimentation and learning? As I said, because it's changing so quickly, it's hard to choose. Certainly, things have not settled.
I think to me, the big important thing is like, what's your plan in terms of the test and learn cycle? You've got to get the basics right in terms of data and data privacy and security, and compliance. Once you get that, it's like you've got to set up the guardrails for your teams to experiment within that. We work quite closely with our internal infrastructure and security, and compliance teams on what are the experimentation sandboxes that our teams can work within, so they can really go to town on these things and use them in anger.
Then, obviously, we have to have all those legal checkboxes checked with our clients as well and make sure they're signing off, that they're okay from their perspective with us using these tools. There are some barriers to entry, but I do think it's worth it from a payoff perspective. There are certainly challenges we're seeing with copying and pasting code that has injection attacks in it, and things like that.
There are certainly risks associated with these tools and the whole software supply chain that need to be paid attention to: identifying what the potential risk is, how you're going to mitigate it, and how you're going to balance managing that risk with the ability to allow teams to experiment.
Kimberly: You mentioned a number of the different coding assistants, and I think that's where a lot of people's heads initially go when thinking and talking about this space. There's a lot of potential beyond just that. Where else in the software development life cycle should organizations start to experiment, just beyond coding assistants?
Rachel: It is interesting that the focus is on coding assistants, since writing code is actually just a small part of the entire life cycle. Very early on, we created a tool called Haiven, which we've actually open sourced, that essentially allows you to create prompt libraries so you can improve the context you're giving to your teams. It also helps to break down stories and things like that. I know that's something Atlassian has been working on, and there are tools, obviously, like Figma in the design space.
I think it still remains to be seen, when you look across the whole life cycle and start adopting all these tools, how different roles are going to change. Everybody's been so focused on productivity, but where does it leave us? My hypothesis is that specification and verification are going to become more and more important if we're allowing these tools and these models to write the code for us.
I expect a lot more focus on testing and guardrails in the next, let's say, 6 to 12 months, especially with some of the research that's come out showing that more code isn't necessarily good; it's actually increasing the technical debt for organizations. We know there are a lot of challenges with carrying a lot of technical debt.
Kimberly: See, I knew you would mention technical debt.
Rachel: I couldn't help it. It came up. [laughs] I think you just have to go into these things with your eyes wide open. One of our big hypotheses at the moment is that getting the specification right, and the verification, is going to be really, really critical. When you look at that, you can start to see that there will be more tools focused on it. In the meantime, we're building things as accelerators for our clients, which is usually a stopgap until products come out that do those things.
I'm expecting a lot more in that space because there's still a lot in the guardrail space that's unknown: do you generate your tests and code separately? Do you leverage the research on evals, and are you okay with some margin of error in what's produced? It does bring up a lot of questions. Then, of course, there are the quality metrics as well. I do think that if you haven't got really strong practices around continuous delivery, continuous integration, good test coverage, and the practices we would recommend, like test-driven development, it's actually going to be harder, and you're potentially going to create more issues.
I do think those core fundamentals matter even more. The other area that I'm super excited about is how it can help with some of the challenges of modernization. Obviously, we've talked about technical debt in the past, and one way to alleviate technical debt is to modernize the applications and improve them, or rewrite or refactor them. Especially for very large, old applications, maybe mainframe code, maybe a Java code base, something that's very old, not well maintained, and without good SMEs around it, a lot of organizations have resisted the big rewrite because of all the risks and challenges associated with it.
It's the cost, the risk of not getting the value at the end, of failing miserably. Nobody wants to lose their job over these kinds of things. We've been looking at how Gen AI can help with things like reverse engineering. It can understand masses of unstructured data. Can it understand masses of old mainframe code? Turns out, yes, by the way. We've also been partnering with an organization called Mechanical Orchard, which built a platform that, working together with us, allows you to understand the data flows as well as the code, identify the core parts of the system, and rewrite the system piece by piece with the verification built in.
These are just some of the early things in that space. I do think that the challenge of modernization, the challenge of technical debt, is way, way bigger than all the hype around how fast we can build software. If we're building more and more software that's potentially not great quality now, even if the models and tools get better at that in the future, then we're actually exacerbating that problem.
Kimberly: Yes, adding to it instead of taking away.
Rachel: I do think there needs to be a focus on that, and I'm hoping to see more happen in the industry around it. There's certainly a lot more interest from our clients in that space versus how much faster they can build software, although people are definitely interested in that too. It's also, "Hey, is it going to help with tackling these technical debt challenges? Does it remove some of the risk? Does it remove some of the costs? Does it help me go faster?" Some of our early work is demonstrating that it does, both from the reverse engineering and the forward engineering aspect. That's the part I'm actually really excited about: what's going to happen in that space.
Kimberly: Hearing you share all that, I think when people immediately hear the word risk, it understandably has a negative connotation, like, "Oh, what do we have to be prepared for, or worried about, or plan for?" Thinking about it in this context and also thinking about it as you were just sharing the reverse engineering concept, you can almost, with the advent of everything going on in AI and the software space, flip that on its head and say, "What were the things that we thought were too risky or too impossible? What was that modernization mountain that we never dared tackle?"
You can almost think about risk in an exciting, positive way and say, "Hey, let's look at this with fresh eyes. Maybe these things are back on the table." Yes, I'm with you, that's an incredibly exciting space to be able to think about and explore meaningfully, whereas before these things might have been completely off the table: "This could cost me my career. This is going to take too much money." Now it's definitely in the realm of the possible.
Rachel: It is. The early signs show that that is true. One of our hypotheses is that there are going to be a lot more of these old code bases and mainframes modernized over the next few years than there have been for 20 years. Obviously, it's a hypothesis; it remains to be proven. But I see a lot of focus on that now. As I said, these are the really big problems that large enterprises are dealing with, problems that create an incredible amount of risk and cost in their organizations. This isn't a silver bullet either. We're describing these as some bronze bullets that could help.
Kimberly: I like that. [chuckles]
Rachel: It's more appealing. We've always said, don't go for a big-bang rewrite. Try to break it apart piece by piece and take into account the Pareto principle, the 80-20 rule: 80% of the value is in 20% of the software, so let's figure out where that is. That's the piece we'll modernize and rebuild. Understanding it, finding an SME who can help you unpack it, getting time with them, making sure you ask all the right questions in that time, that is extremely difficult. We built a tool called CodeConcise that uses ASTs and a knowledge graph to basically build this huge graph of the code base, so that you can then interrogate it and ask it questions about what the software does.
You've actually just massively shortened the reverse engineering cycle, so now you understand the code base better, and you've reduced the cost of that piece of the cycle. Now we're thinking, "Okay, if you then treat that like an MCP server, you could plug it into some of the new tools to provide context to the people building the new versions or the new components of that application." I think we're just at the start of this, and I have this picture in my head of how it all fits together. It's probably just a giant infographic, because it definitely doesn't fit on a slide. I've tried.
It's a picture I need to somehow get out of my head, but I think at the moment, as an industry, we're tackling these problems in quite a myopic way: one set of people looking at the modernization problem, some people looking at coding, some looking at design, some looking at requirements analysis and task breakdown, or whatever. We're not yet looking at the whole life cycle and saying, "Let's assume we didn't build it the way that we did before, and we've got these new tools. How would we approach building software?" That's actually something we're starting to explore, but it's very early days.
Kimberly: I was going to ask, what's stopping us from doing that and really looking at the whole picture? Other than not having a slide big enough?
Rachel: It's not having a slide big enough, because the slide comes after you've actually done it. I just think we're in the early stage of the adoption cycle, and it's common at this point for people to look at very small problems and try to tackle them. We're seeing this with clients: whether they're leveraging AI internally for their processes or in the products and services they're building for customers, they're starting with small use cases, because then you get the wins of something working or not working, and you haven't spent too much money.
You also get the learnings. It's not until people really learn this stuff and use these things in anger that you can bring that cross-collaboration together: all these different roles using these new tools, coming together to look at the whole life cycle once they've got that experience. I don't think we've done that yet. We're certainly thinking about it and talking about it at Thoughtworks, and I'm sure others are too. We just haven't got there yet. We're still in the learning phase. Plus, it's not like the ecosystem has settled down. The models are changing. The tools are changing. New tools are coming out.
We're not in any settled state where we can say, "Okay, we stitch all these pieces together and this is the new world." The explorations we're doing in what we're calling the future of software, what software development could look like in the future, are a stitching together of existing tools, with a backlog of hypotheses: what if X, could Y be true? Then we test that, and our experiments are also broken down into smaller pieces. We just did an experiment on CRUD: create, read, update, delete. Could you just generate an application and have it be the quality that you would want it to be?
Kimberly: How'd that go?
Rachel: It was great. We did another one we called black-box reverse engineering: could we reverse engineer an application that you don't have access to the code for? Turns out the answer to that is also yes. [laughs] Then we're starting some around quality, testing, and guardrails, and what the new approach to testing looks like in the AI world. We're also looking at our sensible defaults. We have a bunch of sensible defaults around software development at Thoughtworks; continuous integration is an example of a sensible default for us, as is test-driven development.
We're asking ourselves, what are the new sensible defaults in software? Do the old ones still apply? Do we have new ones? Do we adjust the old ones? I mentioned that verification and testing are probably going to be even more important, so maybe we'll expand some of the sensible defaults in that area. In other areas, maybe they go away because they're not relevant anymore. I think you can tell how excited I am about all these different changes.
Kimberly: Yes. I'm already envisioning another episode where you come back on to tell us how some of these sensible defaults have changed. Because, like you said, some of them may sunset, and some of them may become more important than ever before. I think it's helpful for people to get that context and recalibrate around some of those things.
Rachel: I do think when people really start to learn what's going on and get into it, especially people who have a lot of experience that they can bring to the table, it does open up so many questions. It's such an interesting time to be in technology. I don't think I've ever seen a time, at least in my just over 20 years in technology, where a technology has come in and so much is impacted all at the same time: how you build software, how you might run it, how you modernize it, what software you build, what new target architecture you might have, and what happens to SaaS and off-the-shelf products.
Everybody's impacted all at the same time, and nobody's figured out what the future software architectures will be, what the new SaaS will be, or what software development will look like. I get frustrated by people positing what's essentially a hypothesis as "this is the new way," which I feel is buying into the hype instead of just being honest and saying, "Nobody knows the new way."
We have a lot of theories and hypotheses, and analogies that we're hanging on to, but nobody actually knows. I watched the Andrej Karpathy presentation—it's on YouTube—about the software 1.0, 2.0, 3.0. He's one of the people that's at the very forefront of this thing. He actually came up with the term vibe coding, and he didn't mean to, but he did.
Kimberly: Too late. It's out there now.
Rachel: I don't know if I'll forgive him for it. [laughs] Because it's a very scary, terrifying thing to do, and it could be an extremely expensive way to build software, but that's by the by. He did say in his tweet that this was for throwaway projects, but people just take the pieces they're interested in: "Oh, let's just vibe code this." It's like, "No, that's a terrible idea." Even he has a hypothesis, a perspective on what software building will look like. He doesn't know. Nobody does. I just think it's such an interesting time. Try to ignore the hype and see it for what it is: a hypothesis.
It could be true. There's probably some gold in there, probably some good insights, but it's not real until we have real-world applications that have been built like that, are running in production in that new target architecture, and have been used in anger for a year or two, or however long is appropriate, so that we can see that it stands up to all of the cross-functional concerns that we've cared about in software architecture for a long time. That's why I think it's such an interesting time. It's frustrating when people ask, "How much more productive are we going to be?" It's like, how long is a piece of string? Unknown.
We haven't really sat down and seen how this plays out. Even the way organizations baseline this whole productivity thing: if you're baselining on story points and people estimate assuming they have the AI tool, that's one baseline. If you start estimating assuming you don't have it, and then you assume you do have the AI tool, you're going to get a new baseline, which messes up your entire baselining approach. Baselining has always been challenging. It gets more challenging when you're asking people to baseline the old world and then the new world, and the new world is a moving target.
Kimberly: I don't have a frame of reference for this new world that makes it [unintelligible 00:28:21].
Rachel: [crosstalk] Yes. This is why I think it's so important to give teams the space and the guardrails to experiment and learn. I understand organizations. Of course, they want to get the product out. They want something valuable at the end of it, and that absolutely needs to continue to be the goal. But they have to overlay a learning goal across the products and projects they've got running: "What did we actually learn from that, and how does it change how we might do the next application?"
That's maybe not as inherent in a lot of organizations. And it's not just the learning from what we built, did it work, did it not work, would we have done it differently, but also what else has happened in the industry since we started building it. That might change the approach we would take as well.
Kimberly: I think that brings up a good point. You've talked about giving folks room to experiment, having guardrails in place, but also the fact that there's still a lot of hype and people are constantly developing hypotheses, tools come in, they come out. This isn't firm ground right now. It's definitely a constantly moving space.
Rachel: Worse than [unintelligible 00:29:35].
Kimberly: Yes, I know, and in a good way, and also, frankly, probably a super overwhelming way. For organizations that want to jump into the fray and also manage to keep up and keep their heads above water while they're experimenting and reacting to all this change, how do they do that? Is there anything else they should consider beyond what you've already shared today, having those guardrails and that room for experimentation? How can they get the buy-in and support they need organizationally to properly experiment with AI?
Rachel: Yes. Getting buy-in and support is always tricky, because you're back to business cases and influencing folks. I think about a client that we're working with now on a transformation, where they're saying, "We need to modernize these applications, and we want to make sure the new target architecture isn't basically building the new legacy." The challenge, of course, is that I've mentioned some of the tools we've started to develop and the partnerships we've formed around what we can do to improve how you modernize and reduce some of the costs and the risk.
Improve the speed and improve the value that you get out of it. Those are today's tools. I'm sure we will develop them further, and I'm sure new products will come out into the market. The way we're approaching it is a several-pronged strategy: look at the things you've got on your transformation backlog and look at where there's a good use case for trying a different approach. I'm just using modernization as an example. If you're talking about AI-first software delivery, you could similarly say, "Okay, we're going to have some teams running with these tools."
Maybe from a cost perspective, you don't want all the teams doing it; you just want a small subset that you can learn from. There are probably some no-brainers in there that you can quickly learn from and scale out. Similarly with modernization: after you've done a few in a different way, you learn from it and say, "Hey, this is better than the old way, let's scale that out," with the assumption that it's going to change. Which goes back to my point: when you have your quarterly governance on what you're going to do, there's a new layer to that governance of what did we learn and what's changed in the industry.
That needs to be part of the decision on what the next quarter looks like. Then, in terms of the target architecture, there aren't really any new architectural paradigms defined yet, at least that I've seen, for agentic architectures. There's not a new microservices out there where we can say, "Okay, a few people know what it is and how to do it, somebody's written a book about it, and now we can have a go at this new approach." We're not there yet. So, similarly, you're going to have to identify where to try things, and organizations will probably do this again with internal processes.
That's what we've been seeing: identify some of the areas on your current backlog where you could actually attempt a new target architecture in part, just to learn. You might choose something that's lower risk, where you can learn how to do it well and what good patterns are emerging. Again, knowing that, there will be organizations, hopefully Thoughtworks, that say, "Here's a great architectural pattern for agentic systems." There will be several architectural patterns that people recommend. Some of them will work, some of them won't.
It really comes down to this: from a strategy perspective, you think about the execution of big programs and big modernizations, then you think about experimentation, and you often treat them as two different teams, two different threads. We need to start bringing these things together: how do you overlay learning, experimentation, and external insights over the execution plan for your big programs? Which leads me to one thing I think is really important: if your big program is "we're going to be in X place in three years," then it's already a fallacy, because you don't know where things are going to be in three years.
If you just continue down that execution plan regardless of what's happening in the industry, then you're probably writing the legacy of the future while you're doing it. Instead, you need some way to adapt and learn, not adapting to every little change or every new tool that comes out, but some sensible way to review what's happening and decide whether to adapt your strategy. That's why I think we've really got to move away from thinking about strategy as large-scale execution and more as a learning cycle: what did you learn, what's going well in the execution itself, and what's going on outside that impacts how we might think about it.
Kimberly: That's a really fantastic point, and it ties in nicely. I know we don't have much time left, so just one last question for you today, on a theme that's been prevalent throughout the entire conversation. You've said that if you have a three-year transformation plan, it won't hold, because things are moving so quickly. Making space for AI-first software engineering and for this experimentation also means trade-offs; it means stopping some of the things you're doing. What should organizations stop doing to make room to put this capability and this experimentation into play?
Rachel: It's a really hard question to answer because it depends what they've got on their backlog. In my experience, there's always too much. Everybody always does too much. I think one of the main rules of strategy is to figure out what you're not going to do. We all fall foul of just doing it anyway and having too much running at the same time. To me, it's like, how are you going to be more rigorous around that so that you can make space for the things that are a real priority? Even when we look at our global priorities at Thoughtworks, it's like, but what are the three things that we must do?
I think adapting to this new environment, in how you build software, how you modernize it, how you run it, and what the target architecture is, has to be something you do. It doesn't have to be everything. Like I said, you can identify projects and services that you're building where you can take this approach and get some learnings, but you can't sit and wait. It needs to be on the must-do list, which means that some of the things that are not in that top three need to get kicked off the list.
I can't say what those things are because every organization has a different strategic priority list, but it's the rigor around actually not doing some of those other things and accepting that consequence, because we know that great organizations prioritize and get things done instead of trying to do 20 things in a not great way. They pick a few things that they want to do really well.
Kimberly: Stop not prioritizing with rigor is the--
Rachel: [crosstalk] Stop not prioritizing with rigor. That is a terrible double negative. That hurts my brain.
Kimberly: I know, I know. [laughter]
Rachel: Prioritize with rigor. There you go.
Kimberly: There you go. There you go. Rachel, it's been great to chat. Clearly, there's a lot of excitement. I'm a marketer by trade, and I find myself diving into more deeply technical content than I ever have, because it's just an exciting time and an exciting space. Not a silver bullet, as we discussed, but a bronze bullet: the power, potential, and capability of AI-first software delivery. My takeaways from today: the importance of guardrails, making room for experimentation, rigor in prioritization, and not falling prey to the hype cycle or treating anything as definite, while still keeping that hypothesis-driven mindset. All key things, I think, to be on the path to success and participation in this space.
Thank you so much. I'm going to ask you to come back when we have a cool, agentic architecture or an update on what we've learned about sensible defaults, since I'm sure there'll be a lot going on in both those spaces in the coming weeks and months. Thanks so much. It's been a really great conversation.
Rachel: It's great to be here. Thanks, Kim.
Kimberly: Thanks so much for joining us for this episode of Pragmatism in Practice. If you'd like to listen to similar podcasts, please visit us at thoughtworks.com/podcasts. If you enjoy the show, help spread the word by rating us on your preferred podcast platform.
[00:38:56] [END OF AUDIO]