
How developers can get the most from new AI coding workflows

Podcast host Ken Mugrage | Podcast guest Brandon Cook
November 13, 2025 | 29 min 13 sec


Brief summary

One of the biggest stories in software engineering in 2025 is the impact of generative AI on the software development lifecycle. From advances in coding assistance to the emergence of so-called agentic coding, there's undoubtedly a lot for software developers to process, learn and experiment with — not to mention rapid change to contend with.

 

On this episode of the Technology Podcast, host Ken Mugrage is joined by Brandon Cook to discuss not only how AI has been shaping the way software developers work but how developers can play an active role in ensuring the technology is leveraged safely and successfully. Taking in everything from sensible defaults and best practices to evaluating how much autonomy you should give up to an agent in any given problem, this episode offers both a snapshot of where we are today and the role we all have to play in deciding what the future will look like.

 

Explore the Thoughtworks Technology Radar.

 

Listen to Brandon's last appearance on the Technology Podcast from July 2024.

Ken Mugrage: Hello, everyone. Welcome to another edition of the Thoughtworks Technology Podcast. My name is Ken Mugrage. I'm one of your regular hosts. My guest today is Brandon Cook. Brandon, would you like to introduce yourself, please?

 

Brandon Cook: Yes. Brandon Cook. I've been at Thoughtworks for 13 years now. I'm a principal technologist at Thoughtworks, and I've also been playing the role of global practice lead for software development.

 

Ken: Great, thank you. I imagine you've seen a few things happening in the last couple of years. For those who follow Thoughtworks, you may know that we recently published the 33rd edition of our Technology Radar. One of the things that happened is our CTO, Rachel Laycock, posted, I think on Instagram, and behind her was a board that we call "too complex to blip." What that is: people suggest things that we should put in the Radar, but we really can't summarize them in one or two paragraphs; we can't do the topic justice. We use those as potentials for podcasts, articles and other things.

 

Brandon saw that post on Instagram and reached out to me and said, "Hey, I have things to say there." We chatted and we thought, gosh, we should share those with you, the listening public. What we're going to go through is a few different topics here. First off, Brandon, it's a loaded question, but the impact of generative AI on developer experience. Pretty much all we talk about is generative AI these days, it seems like, but the truth is it is touching every part of what we do. As a developer, how has the experience changed creating software over the last couple of years?

 

Brandon: I think everyone can feel the transformative shift in how we're approaching software development using things like coding agents and other applications of LLMs within the software delivery lifecycle. There is something there. I think a lot of people may have mixed opinions at the moment on what is actually feasible and what actually gives the best outcomes. I think a lot of people are familiar with the vibe coding term and the idea of being able to just prompt your way towards an application.

 

At the end of the day, a lot of us on the software engineering side of the world understand that the vibe coding aspect of it, just not caring about the code, will only get you so far. It's very good for rapid prototyping, for experimenting with something or for a small toy application, but for actually getting that into production, where you have hundreds of thousands or millions of users, it's probably not the best approach.

 

We're trying to find that good middle ground of being able to leverage coding agents to help accelerate the software development cycle, but also still instill all those good practices into the coding agents: still keeping all of the foundational practices that we would follow during software development. Things like TDD, test-driven development, are an important aspect of the Thoughtworks sensible defaults, and the idea is to use TDD to drive the design of the system and, when you're using coding agents, to drive and steer the agents towards the outcome that you're looking for.
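To make that concrete, here's a minimal sketch of what test-first steering of a coding agent might look like; the module, function and values are hypothetical, just to show the shape of the loop.

```python
# Hypothetical example of test-first work with a coding agent: the developer
# writes the failing tests by hand, then asks the agent to implement just
# enough code to make them pass without touching the tests.
import pytest

from pricing import apply_discount  # hypothetical module the agent is asked to create


def test_discount_is_applied_to_order_total():
    # Written by the developer first, before any implementation exists.
    assert apply_discount(total=100.00, percent=10) == pytest.approx(90.00)


def test_discount_cannot_exceed_one_hundred_percent():
    with pytest.raises(ValueError):
        apply_discount(total=100.00, percent=150)
```

The failing tests become the steering mechanism: the instruction to the agent is essentially "make these pass without changing the tests," and the red-to-green loop gives the team a concrete checkpoint before any generated code is accepted.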

 

It's really about working with the coding agents and focusing not purely on the outcome, as you would when vibe coding, but also on the how: embedding a lot of the standard practices and the good outputs and outcomes that you're looking for in terms of releasing high-quality software that can be used by your end users.

 

Ken: You used the term sensible defaults, which is a term we use in Thoughtworks. I think last time you were a guest on this podcast, about 18 months ago, we were talking about that. It's things like, we do TDD by default. For the people who didn't hear that podcast, a sensible default for us means that this is really the way you should do it unless you have a good reason not to. We don't believe in the term "best practices"; there's no single best.

 

That said, I know internally we've had a lot of conversations about the maturity of some of these practices, to the point that we're not ready to publish sensible defaults in this area yet. Is that getting closer? Even if we're not ready to codify them quite yet, what are some of the sensible defaults, if you will, that developers should be looking at in this area?

 

Brandon: Before I get into what we're seeing as emerging sensible defaults for AI-first software delivery and coding agents, I'd like to reiterate that a lot of the sensible defaults we already have, the foundational practices, are essentially amplified by leveraging these toolsets. If you're already doing frequent and continuous integration, you're in a better spot than someone who has long-lived branches and long code review and PR times, because at this point you're only going to accelerate the amount of code you're outputting; these coding agents are very good at generating larger code sets.

 

You can even see in a lot of the recent studies, I think the GitClear study as well as the latest DORA AI-assisted development report, that we're already seeing an uptick in PR sizes and code sizes as coding agents have come into play. If you're already struggling to get through the review cycle, integrate code and deal with merge conflicts, it's better to shore up that practice prior to adopting coding agents, and then start building that into your system. That's where we're seeing a lot of our sensible defaults across the board becoming more important. It's almost like validating that, okay, these practices that were good for developers without coding agents are also going to be important for coding agents as well.

 

Another aspect where we're seeing emerging sensible defaults is that a lot of the coding agents are converging on some standards and practices. If you look at AGENTS.md files, most coding agents and tools will basically say out of the box: create this, and instill the context and expertise you have around your standards and practices, so that the agent will follow those practices while doing development. You're instilling some of the software engineering craft into the agent's work and making sure that you're bringing your expertise to the agent. It's not just going back and forth between the agent and yourself without actually providing the context that can lead to a better outcome.
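As an illustration of the kind of standards file Brandon describes (this is a hypothetical example, not an excerpt from a real project), an AGENTS.md is typically just a plain markdown file that the agent reads at the start of a session:

```markdown
# AGENTS.md (hypothetical example)

## Build and test
- Run `make test` before proposing any change; all tests must pass.

## Practices
- Work test-first: write or update a failing test before the implementation.
- Keep changes small and frequently integrable; avoid sweeping refactors in one pass.

## Conventions
- Follow the existing module structure under `src/`; do not add new top-level packages.
- Never commit secrets, credentials or generated artifacts.
```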

 

Ken: It's interesting. It reminds me a little bit of the impact that continuous delivery had, gosh, what is it, 15 or 16 years ago now, where a lot of us would say, if you're creating junk, this is just going to let you create junk faster. We didn't always say junk, but that's what I'll say on the public podcast. I get what you're saying: the engineering practices are still important, maybe even more important now.

 

You also mentioned, of course, the "vibe coding" term. I'm actually publishing an article today or tomorrow about how the ability to do that style of coding has actually brought me closer to code than I've been in many years. I was a professional developer, but it's been a while; I don't prototype anymore, or that sort of thing, but I started to. It was really good for fleshing out the ideas and then sharing them with people who had the right development experience.

 

One of the things that I also experienced during that, since we talk about team practices and team structures, is that I was using a tool, and one of my colleagues came by and said, "Why are you using that tool? That's what we were using a year ago. That's not anywhere close to best of breed anymore; you should be using this other one." How do you deal with that?

 

Everything is moving really quickly. Sometimes there are real, substantial differences; in this case, there were. I switched tools. I'm not going to name names because this is going to be different next month than it is today. I switched to a different tool and I'm like, oh my gosh, this is so much better. How do you draw that line when you're managing a team? If you're always updating your tools, you're never writing your product. What does that look like?

 

Brandon: I think right now, with the breakneck pace of the industry, there is a challenge in knowing when to move on to a new tool versus keep adapting to the current tool you're using, as well as just understanding the foundational ways of leveraging these tools in the right way. Even if you look at particular tools like Claude Code, for instance, they'll have commands and sub-agents, and skills, and MCP servers. All these new ways of collaborating with agents or building agentic systems are coming out on a weekly basis, and it's hard to keep up.

 

At the end of the day, it's really figuring out how do you manage that context and how do you provide the right context to the agent without giving it too much information, where now it's got context rot and it can't really drive to the outcome that you're looking for. When is it time to clear out a session and restart? How do you start adding proper memory or proper expertise to an agent at the right point in time to give you the right outcomes that you're looking for?

 

I think there are some foundational practices emerging there, even though there are a bunch of new ways of doing it depending on the tools that you're using; those are the foundational practices that are emerging with how to use these tools. Whether you're using a new mechanism for managing context from some other tool that comes out next week, or the old mechanism that can still be valid in some cases, having that understanding of when to use which is going to be important.

 

Basically, at the end of the day, it's like how software engineering has always been, being adaptable and flexible. Not everything is a silver bullet. Trying to decide what pattern to use at what point in time for the use case is probably the most important aspect of it.

 

Ken: I know one of our colleagues often says that AI is good at making experts better and beginners worse. Silly example: the baseball playoffs in the United States were recent. I was a little chippy Sunday morning, as a Seattle native, because our Mariners got kicked out in the championship series. I asked for a correlation between payrolls and where the teams finished. It gave me that correlation, except it was wrong on 80% of where the teams finished; it had the wrong teams in the World Series, et cetera. It was just wrong. Had I not known that, I could have published it. If we think from a code perspective, is your IDE a pairing partner now, or where does it sit in the team? What's its role in the team?

 

Brandon: When we're talking about pairing, I think people like to see it as a replacement for pair programming, but I don't think it is, for a variety of reasons. One, the main reason for pair programming isn't just productivity increases or anything of that nature. It's really about the need for collaboration as a team, as well as spreading ownership of the system across the team.

 

I think even when we're using these coding agents and developing with them, and they're outputting a ton of code or generating a lot of different changes, at the end of the day the ownership is still going to reside on the team. We haven't reached a point where the agency of these coding agents is where they're going to have full ownership of the code. Who knows when that's going to happen, but it's clearly not in this day and time that that's occurring.

 

It's even more important to make sure that we're pairing and we're disseminating knowledge so that the whole team has ownership of the system, and it's not siloed with just one developer, or even just within that session of the coding agent. We definitely need to have ownership.

 

Another aspect I'm starting to see, like you were saying, Ken, is coding agents being really good for experts but difficult for juniors. I'm trying to work through ways of running pairing sessions with coding agents as a trio, where maybe you have more senior engineers working with more junior engineers to, one, cultivate the craft and expertise in those junior engineers, but also because the junior engineers tend to have more innovative ways of using these tools than the senior engineers do.

 

It's almost a good synergy: the senior engineers know where the bodies are buried in terms of gotchas, different systems and designing systems, and they can bring that expertise and instill it in the junior engineer. Then the junior engineer has innovative ways of using these tools and agentic systems, because they're growing up with it and that's just their way of developing. Marrying the two can lead to some interesting outcomes. We're starting to see that on the ground, and we're trying to instill it more into our practices on a variety of our teams today.

 

Ken: As we sit here in November of '25, in case people are listening to this later: you mentioned agents and agentic. I realize there are a million things out there, a million definitions, but if we say part of the definition of agentic is autonomous action, are we close at all? Is there an agentic development architecture? Is there an area of the code, an area of the testing, deployment, whatever, where we can trust autonomous agents? If so, how do we get there?

 

Brandon: I think right now my mental model is that there's almost an autonomy slider: how much autonomy do I want to give the agent? It's not just autonomy, either; it's also what's the scope of the problem I'm working on, what's the risk that comes into play with the problem I'm trying to solve, and what's the novelty of the problem I'm trying to solve? At the end of the day, most of the coding agents are essentially trained on what's been done on the internet in terms of code bases like this; what are the normal things that have been done before?

 

I think it's really great at accelerating boilerplate things, or things that are simple, that have been done before, that are repeatable, so you give it more autonomy in those tasks. But then you ask, is this a high-risk area? If so, you probably give it less autonomy and make sure there are more feedback loops between the developer and the agent in that area. Or maybe, if it's so novel and so risky, you dial back that autonomy and just go back to leveraging the auto-complete features of these tools. Auto-complete is where we all started.

 

It's almost like a slider knob, depending on the situation: determining how much risk is involved and how novel the problem is, how much autonomy I really feel will lead to the best outcome with this agent, and how much ownership I should have over this piece of the code base.
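One way to picture that slider, purely as an illustration of the mental model rather than a real tool, is a small function that maps the team's rough judgments of risk and novelty to a working mode:

```python
# Illustrative sketch of the "autonomy slider" heuristic described above.
# The categories and thresholds are made up; the point is that autonomy is a
# per-task decision driven by risk and novelty, not a global setting.
from enum import Enum


class Autonomy(Enum):
    AUTOCOMPLETE_ONLY = 1   # novel and high risk: human writes, agent only suggests
    PAIRED = 2              # agent drafts, human reviews every step
    SUPERVISED_TASK = 3     # agent completes a scoped task, human reviews the diff
    AUTONOMOUS = 4          # boilerplate, repeatable work following an established pattern


def choose_autonomy(risk: int, novelty: int) -> Autonomy:
    """risk and novelty are rough 1-5 judgments made by the team, not measured metrics."""
    if risk >= 4 and novelty >= 4:
        return Autonomy.AUTOCOMPLETE_ONLY
    if risk >= 4 or novelty >= 4:
        return Autonomy.PAIRED
    if risk >= 2 or novelty >= 2:
        return Autonomy.SUPERVISED_TASK
    return Autonomy.AUTONOMOUS
```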

 

Ken: Something you touched on a little bit there, at least what I took from it: if we look at especially the commercial large language models, they're trained on public code, they're trained on what they can find in the open, which is sometimes-- I come from an open-source background-- sometimes very, very good, and sometimes not. Really, I don't have any clue. Whether it's choosing your model or the various techniques you can use to limit what your AI can look at, are there any accepted practices now for saying, "Don't look at the garbage code"?

 

Brandon: There's an emerging practice of, essentially, setting a pattern in the code base. Like I said, I'll go back to the autonomy slider: maybe when I'm initially setting a pattern in my code base, that's where the autonomy slider goes down, and we as a team need to pair and set that pattern. Once the pattern is set, you can turn the knob back up if you want to repeat that pattern somewhere else in the code base.

 

That's what these agents are very good at. They're very good at taking what's in the code base, understanding it, and saying: this is the guardrail I have, this is the pattern that's been set in the system, and now I can repeat that elsewhere. It's very much like what I've seen perform well in the past, where maybe you have an architect or a lead set a pattern in the code base, and then the team goes off and runs with it. Now it's even more important to do that with coding agents: setting that foundational pattern in the system and then having it be repeated and accelerated by a coding agent.

 

Ken: Again, it's autonomy versus human oversight. We're never going to give the AI a pager. What do we do to make sure that our operations people know how this is architected and how it's being run? This was always a complaint we had with continuous delivery: oh, this just gives our development teams the ability to push junk faster. That's where the whole DevOps movement came in, trying to mix them together. Where's that slider at? How do we know that the support people know what the latest feature does and how to troubleshoot it, and how do you share that knowledge organizationally if some of the code is being created by something that doesn't carry a pager?

 

Brandon: Another aspect, and I think about it this way: the coding agents are staff-level at understanding the code base, so they're very good at saying, this is what I've output, and almost being able to create documentation on the fly. I know a lot of us have struggled to create documentation, or to capture how the system works after the fact, so that people didn't have to go into the code base and start navigating it to really understand the system.

 

Being able to document the outputs of that session, or the state of the code base that day, into something that can be used by operations teams and SRE teams, and potentially even by other agents that need that documentation for support or things of that nature, is something that can be accelerated by using coding agents.

 

It's almost like a loop: set the initial pattern, have the coding agent repeat that pattern, then also use the coding agent to document the outputs of the work that's been done, so that they can be set into the knowledge base of the organization or the developer journey. Either other agents or other folks down the line can then leverage that documentation and that shared knowledge base to help with decision-making, whether it's issues that have arisen in the system or changes to the system for a new feature set.
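As a rough sketch of that "document the session" step, the following assumes an OpenAI-compatible client, with a release tag, prompt and output path that are purely placeholders; the point is the shape of the loop, not a prescribed workflow.

```python
# Hypothetical post-session step: turn the day's changes into operator-facing
# notes for the team knowledge base. Model name, tag and paths are placeholders.
import subprocess

from openai import OpenAI  # assumes the OpenAI Python SDK and an API key are configured

client = OpenAI()

# Collect what actually changed since the last release tag (placeholder tag name).
diff = subprocess.run(
    ["git", "diff", "v1.4.0...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

prompt = (
    "Summarize these changes for the on-call and SRE teams: new behavior, new "
    "failure modes, dashboards or alerts to watch, and rollback notes.\n\n"
    + diff[:50_000]  # crude truncation to stay within the model's context window
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable model works
    messages=[{"role": "user", "content": prompt}],
)

with open("docs/runbooks/latest-changes.md", "w") as f:
    f.write(response.choices[0].message.content)
```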

 

A lot of that developer journey is definitely getting shifted at the moment. Not to say that flow is perfect today, but that's where a lot of the agentic systems come in, and it's an area ripe for experimentation and ripe for disruption to help accelerate a lot of that flow.

 

Ken: If we think about the full team being a cross-functional team, everybody from, I think it was John Willis who said, idea to cha-ching: has AI changed the way you, as a developer, get your inputs and outputs? How you get requirements, how you understand what needs to be built, how you verify that you built the right thing. Have the inputs and outputs changed at all, or is that still just people around a table?

 

Brandon: It's definitely changed significantly, because people aren't starting with that blank slate anymore. It used to be that gathering requirements meant starting from a blank slate. I think a lot of people who are more AI-forward are leveraging agents or LLMs as another source, another participant, in requirements gathering, requirements formatting or ideation.

 

A pattern I like to use is making sure that it is purely another participant: doing the same kinds of ideation or requirements gathering that you did before, and then using the LLM to validate and critique those requirements, to see if there are any edge cases or gaps that have come into play. That can help combat some of the complacency you may see if you just take the raw output from the LLM and use it as is, because, oh, it looks good, it's probably good enough.

 

That's where you probably don't want to go too far in that direction, because you still want ownership over those requirements, ownership over the gathering, ownership over the decision-making, while really leveraging the LLM as a collaborator to identify gaps, critique any vague requirements, and help make improvements overall.

 

Ken: How is the architecture itself evolving to support this? I know you've talked a little bit about durable computing and related topics. Looking forward, is the architecture evolving? What does that look like?

 

Brandon: Architecture is definitely evolving, but I would say it's almost an emperor's-new-clothes type of thing, where a lot of the architecture patterns and practices we were applying in normal systems, without AI, are just being repeated and instilled in agentic systems as well. If we're talking about doing context engineering and managing context in an agentic system, and, in a multi-agent system, having one specialized agent focused on one area and another specialized agent focused on another area, with an orchestrator agent that organizes everything together, that to me seems very familiar to how we try to design a lot of our systems.

 

Whether they're distributed or more of a monolith, we try to modularize, have that separation of concerns, and then orchestrate the system together based on those boundaries. It's a lot of those same practices just being applied with AI and LLMs nowadays.

 

If you look at a lot of these durable computing platforms that have built in a lot of the resiliency around distributed systems, they're leaning in to say: you can do all your orchestration for your agents with our durable computing platform. You're resilient to failures, and you can do retries if you can't access an agent for whatever reason or an agent fails. It's applying a lot of these older practices with this new flavor of managing context and building out different agents, trying to lead to better outcomes with agentic systems. It's a lot of "what is old is new" kind of feeling for me at the moment.
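A bare-bones sketch of that orchestration shape, with specialized agents, an orchestrator and retries on failure; the agent functions here are stand-ins rather than any particular durable-execution platform's API.

```python
# Generic sketch of the multi-agent orchestration pattern: an orchestrator fans
# work out to specialized agents and retries failed steps. Durable execution
# platforms persist this state across crashes; here it is simply in-memory.
import time
from typing import Callable


def with_retries(step: Callable[[str], str], payload: str, attempts: int = 3) -> str:
    for attempt in range(1, attempts + 1):
        try:
            return step(payload)
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff between attempts


def research_agent(task: str) -> str:
    # Stand-in for a specialized agent that gathers context about the task.
    return f"brief: {task}"


def coding_agent(brief: str) -> str:
    # Stand-in for a specialized agent that turns the brief into a change.
    return f"change for {brief}"


def review_agent(change: str) -> str:
    # Stand-in for a specialized agent that critiques the change against team standards.
    return f"reviewed {change}"


def orchestrate(task: str) -> str:
    brief = with_retries(research_agent, task)
    change = with_retries(coding_agent, brief)
    return with_retries(review_agent, change)
```

The durable platforms Brandon mentions essentially take this same loop and make each step's state survive process failures; the pattern itself is the familiar one.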

 

Ken: It's interesting. There's a theme there: the engineering practices still matter, what is old is new, et cetera. Maybe I've biased this question, but in conclusion, I guess: if people take away one thing, if you could teach the world one thing about software development with generative AI, what's the important takeaway?

 

Brandon: I guess the TL;DR of a lot of the things I've said today is: yes, we are in a very transformative time where it's good to stay abreast of all the new tools, all the new patterns and techniques, but then really understand what the foundational practices behind them are and whether you're doing those correctly. If you're doing those correctly, if you've got the expertise within your teams, that's where you can start leveraging these AIs and coding agents as a way to amplify those practices across your system.

 

If you don't have those and you're just trying to tack it onto a given workflow, that's where we're going to see all these failures. If you read around, you'll probably see, "Oh, most AI projects fail." I'm like, yes, I'm not surprised, because most software projects tend to fail as well. If you're missing some of these foundational practices and you're just trying to tack AI on top, that's not the approach you want to take. You really want to take a foundational, first-principles approach and ask: how can I use this to augment my system once I have those foundational practices in place?

 

Ken: I want to thank you very much for your time. It's always interesting to hear what's going on in the real world.

 

Brandon: Thanks for having me, Ken.
