Brief summary
In every Thoughtworks Technology Radar we feature three to five themes that represent the core issues and topics that emerged from the conversations we had while putting the publication together. This time (Fall 2025) they're all united by AI. They are: infrastructure orchestration arrives for AI, the rise of agents elevated by MCP, AI coding workflows, and emerging AI antipatterns.
On this episode of the Technology Podcast, Bryan Oliver joins Neal Ford and Ken Mugrage to discuss all four of Volume 33's themes. They dive into what they mean, how the team arrived at them, and what they tell us about the state of software engineering and AI in 2025.
Read the latest Thoughtworks Technology Radar. Volume 33 will be published November 5, 2025.
Ken Mugrage: Hello, everybody, and welcome to another edition of the Thoughtworks Technology Podcast. This is one of our biannual editions where we talk about the latest volume of the Thoughtworks Technology Radar, which is coming out next week. One of the things you'll see on the Radar is something we call themes. We like to talk about themes partially because they're the only thing we actually create; when I say "we," I mean the Doppler group. For more on that, my first guest is Neal Ford, who's actually one of our hosts as well. Neal, introduce yourself, and tell us what a theme is.
Neal Ford: Thank you, Ken. Let's talk about themes on the Radar, the Thoughtworks Technology Radar. If you work on a Thoughtworks project and you encounter a bit of technology or a technique that you think is particularly good or particularly bad, twice a year, you get to nominate it to the Doppler group. The Doppler members go around and solicit what we call "blips" from project teams, and then we meet face-to-face in some location and curate all of those blips to create what is called the Thoughtworks Technology Radar, which comes out twice a year. It's always available at thoughtworks.com/radar.
As of this recording, we're one week ahead of the release of Volume 33 of our Radar. We've done this 33 times now. We met about six weeks ago in Bucharest, Romania. The reason I go through where these blips come from is that we, as a group, only create one thing: these themes. We get about 300 blips per Radar session, and it's our job to narrow that down to the hundred-ish blips that make up the Radar and its write-ups.
As a group, we are curating all these blips. We've realized in the past that a lot of really fascinating conversations come up as we go through that process, because the way we curate them is through really broad conversations. Lots of things come up during those conversations, and we've noticed thematic unity across them. The blips are discrete little techniques or tools or something that go onto our Radar, but there's glue between them as well. That's what the themes try to capture.
These themes are the only thing that we create as a group. We always have three to five themes per Radar. What we're trying to do is capture some essence of the conversation, or the cross-blip glue, or the things that dominated what we were talking about in the hallways, or at breakfast, or on the bus rides, or all the other places we were together as a group during that week. That's what the themes are. They're always featured on the Radar publication itself in a separate section. They are the most ephemeral part of the entire Radar, because the blips last forever. You can go back in history and find the themes, but they are very much of the moment. They're a snapshot in time of the conversation that was going on while we were having that meeting. That's what themes are.
Today, what we're going to do is give you a highlight of some of the conversations that led to themes, and what the themes are, and what they really mean to us as some of the creators of those themes. Now, traditionally, we have found that writing things like themes by committee is really difficult because they come out sounding like they were written by a committee, which is not great.
We had the idea a few years ago to start them in one voice and then let owners take them in whatever direction they want, so at least they start from a common basis. I generally write the first draft of those themes, but then each theme is designated with an owner who feels particularly passionate about that topic and wants to control the final write-up: the narrative, the title, the wordsmithing, and all that sort of stuff.
That brings us to our additional guest today, who is not one of the regular hosts of the podcast. This has been one of the longest introductions we've ever done on the podcast. Bryan Oliver is a member of the Doppler group and the owner of our first theme. We'll first say hi to Bryan, and then we'll talk about what each of those themes is.
Bryan Oliver: Thanks, Neal. Yes. I'm Bryan Oliver. I'm a principal engineer at Thoughtworks, working on a couple of books. Like Neal said, I've really enjoyed getting into the Radar. This was my first in-person Thoughtworks Radar, so I had a lot of fun getting to know that process. It's very intense. I really enjoyed it. The themes we're going to talk about today are really focused on one area of the industry, but they all have unique aspects to them as well, so I'm pretty excited to talk about some of those.
Ken: Yes. It's interesting. Something you said there: most of the time, if people go back and look at the themes, you'll see that they're pretty wide-ranging topic-wise. This time there are four, and they're all about AI. It's easy to say, "Oh, you're just latching onto the latest hot thing," but you know what? That is the truth. Whether we were talking about infrastructure tools or security tools or whatever, AI's impact, for better or worse (and whether there's real impact at all), was a part of the discussion. Therefore, it's an important part of the themes.
Neal: Well, in fact, in the past, we've been pointedly proud of the fact that all four themes touched on different areas of our conversation, but we didn't have four different areas of conversation this time. [Chuckles] Everything was, like you say, dominated by AI. On the last Radar, we made a conscious effort to split it half and half, where half the themes were AI, but this time, there was no hope. It had to overlap, because AI so dominates the technology landscape now, more than anything else in the previous 32 editions of the Radar.
I think we have a basis here for making commentary like this on the ecosystem. Nothing has had such a giant breadth of impact and speed of impact as the AI stuff has over the last couple of years.
Ken: The four that we're going to go through today, the first one's going to be infrastructure orchestration arrives for AI, the second is the rise of agents elevated by MCP, the third is AI coding workflows, and the fourth is emerging AI antipatterns. I guess we'll first start with infrastructure orchestration arrives for AI. Correct me if I'm wrong, Bryan. You have quite a significant background in the Kubernetes world. Is that correct?
Bryan: Yes. I do. I've done some contributions through SIG-Network and SIG-Multicluster, have formulated quite a bit of my career around it. I have been writing books on platform engineering, which is all Kubernetes stuff, and now working on a book with O'Reilly for delivery systems, delivery algorithms, all the architectures behind a lot of the schedulers you see in Kubernetes today.
Ken: Then talk to us about that first theme. I think you were even the theme owner. Infrastructure orchestration arrives for AI: what does that mean for you?
Bryan: This has been pretty interesting. In 2024, at the Kubernetes Contributor Summit in Paris, Tim Hockin got on stage and kicked off the Contributor Summit, which is the side event at KubeCon that all the people who contribute to Kubernetes attend. He got on stage and basically said, "We have to really focus on AI, or this project is going to be replaced." It had been a focus before that, but that really kicked things into high gear. And it's very true that it needed to occur.
What we're starting to see is there's an older project called "Slurm" that's been around since 2004. Slurm actually predated GPU computing by a couple of years but is now the de facto HPC tool for scheduling and working on machine learning jobs. Now, what we're starting to see is you have lots of operations engineers who want to have a more reliable thing to manage, which is Kubernetes. You have a lot of machine learning engineers who want to write machine learning jobs and for things to just work.
We've seen this problem before with developers and operations engineers, but what's happening now is we need something to help us understand the actual physical layout of the data center. These jobs that we're starting to see being run are enormous. For example, one rack of, I think, GB200s has 14 terabytes of unified memory that acts as a single GPU, and there are places that are using more than one of those at the same time.
You need a physical understanding of where they're placed in the data center because the latency between them is important. You can run a training job, and you want that job to be aware of the physical infrastructure so that it can be run on things that are next to each other.
What we're starting to see is a lot of tools in the Kubernetes space, for training as well as inference, that solve this physical-awareness problem that Kubernetes didn't have before. It didn't have topology awareness at that level, at that granularity. We're now starting to see a lot of projects trying to address this problem.
Ken: There are things besides physical distance, obviously, that affect latency. Are these tools looking at the actual latency? Are they measuring ethernet cables? I'm being a little silly, but when you say "topology," how literally do you mean that?
Bryan: In this case, what you see is projects like Kueue, as well as Kubeflow, which uses Volcano and KAI Scheduler, and some others. What these are doing is using hardware labeling for their nodes within Kubernetes: basically giving rack labels, and even network-speed labels, to node groups so that the tool scheduling AI jobs onto those nodes can use that information to make a calculated decision about where a job should be sent.
So it's really about the labeling process, understanding which nodes are where, and providing that to these technologies. And it's not just latency; it's also a lot of other factors, like variability, heat, cooling, and performance.
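To make the labeling idea concrete, here's a minimal sketch of topology-aware placement in Python. It is not the code of Kueue, Volcano, or KAI Scheduler; the rack label key, the node inventory, and the GPU counts are illustrative assumptions, and real schedulers layer gang scheduling, quotas, and queueing on top of this basic idea.

```python
# Toy sketch: prefer placing a multi-GPU job on a single rack so that
# GPU-to-GPU traffic stays on the fastest, closest links.
# The label key and node data below are hypothetical, not a real cluster's.
from collections import defaultdict

RACK_LABEL = "example.com/rack"  # assumed custom label applied by the ops team

nodes = [
    {"name": "node-a", "labels": {RACK_LABEL: "rack-1"}, "free_gpus": 4},
    {"name": "node-b", "labels": {RACK_LABEL: "rack-1"}, "free_gpus": 4},
    {"name": "node-c", "labels": {RACK_LABEL: "rack-2"}, "free_gpus": 8},
]

def place_job(nodes, gpus_needed):
    """Return a rack (and its nodes) that can hold the whole job, if any."""
    racks = defaultdict(list)
    for node in nodes:
        racks[node["labels"][RACK_LABEL]].append(node)
    for rack, members in racks.items():
        if sum(n["free_gpus"] for n in members) >= gpus_needed:
            return rack, [n["name"] for n in members]
    return None, []  # no single rack fits; a real scheduler would queue or split

print(place_job(nodes, 8))  # ('rack-1', ['node-a', 'node-b'])
```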
Neal: Well, it's basically execution metadata about the infrastructure that you can take advantage of to say, "these should be clustered together for various reasons." This reminds me, and this is only peripherally related, of when CPUs got so physically large, comparatively, that the time it took a signal to travel across the chip was longer than one clock cycle.
That was a big crisis for chips, because you'd always assumed that a clock cycle was the smallest unit of work. It's the same sort of thing now, cast into these vast cloud infrastructures: what happens when latency starts dominating compute? That's the general shape of the problem. What was fascinating was how much of our conversation touched on this.
In the past, the two Ks, Kubernetes and Kafka, dominated a lot of our conversations at Radar meetings, but those faded away as they became commodities and matured. This time, suddenly, we're talking about all those things again, but in the context of scheduling AI workloads, and especially GPU workloads.
Ken: Who needs to pay attention to this theme? It's real easy to say, "Oh, that's huge scale. That's not me." Or is it everybody?
Bryan: Yes. No. In a way, everybody. If you are using even some amount of GPU compute, you can get performance benefits from adopting some of these practices and tools. You may not be spending $1 billion on a full GPU data center, but maybe you rented 1,000 GPUs from one of these providers for a few months, and you want to get the best performance out of them because you spent a lot of money on that rental.
You would use some of these tools to help get the best bang for your buck, make sure you're scheduling to topology-aware groups, and all that. Really, you get a ton of performance out of training new models and that kind of stuff, so it ends up saving you quite a bit of money.
Ken: Neal, you want to introduce our next theme?
Neal: Absolutely. I'm super proud in this particular case that my title mostly survived. The original title was the rise of agents accelerated by MCP, but I like "elevated" because "rise" and "elevated" fit together, so it became the rise of agents elevated by MCP. One of the things that we predicted, well, it's easy to be an oracle if you predict obvious things.
On the last Radar, we predicted that for every Radar, there's going to be one super hotspot of innovation in the AI space. The last Radar, it was RAG. There were so many things about RAG, and we talked about RAG. It dominated our conversations, and it became one of our themes. This time, it was absolutely agents, agentic computing, and MCP.
That's exactly where this theme came from, because agents and MCP have risen together at the same time, and they absolutely fuel each other. MCP, of course, is the Model Context Protocol. This is completely irrelevant, but in preparation for the new Tron movie, I was watching Tron from the 1980s. The bad guy in Tron, the computer: do you know what the acronym for the bad guy was?
Bryan: I don't remember.
Neal: It was the Master Control Program, MCP. They kept talking about MCP in Tron, and it's like, "Wow. They're talking about agentic AI back in 1980."
Bryan: Oh, my gosh.
Neal: It turns out it wasn't the same thing, but still, MCP is there. MCP, of course, is a fantastically flexible protocol that offers you the ability to query for tools or data sources or prompt libraries. It allows you to create these ecosystems where agents can autonomously build stuff. We're seeing a lot of really nice innovations. It went from simple coding assistants to agentic assistants, where you can actually give them limited autonomy, tell them to go off and do stuff for you, and have a reasonable expectation that those things will happen.
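To make "query for tools or data sources or prompt libraries" a bit more concrete, here's a hedged sketch of the JSON-RPC request shapes an MCP client sends. The method names follow the published MCP spec, but the weather_lookup tool and its arguments are made up for illustration, and real clients also perform an initialize handshake and handle responses and errors.

```python
# Sketch of the JSON-RPC messages an MCP client sends to discover and call tools.
# The "weather_lookup" tool and its arguments are hypothetical.
import json

def rpc(method, params=None, id=1):
    msg = {"jsonrpc": "2.0", "id": id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Discover what a server offers: tools, data sources (resources), prompt libraries.
print(rpc("tools/list", id=1))
print(rpc("resources/list", id=2))
print(rpc("prompts/list", id=3))

# Invoke one of the discovered tools with arguments.
print(rpc("tools/call",
          {"name": "weather_lookup", "arguments": {"city": "Bucharest"}},
          id=4))
```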
Part of the innovation in this space that we talked about a lot was this idea of context engineering: how do you tell the agents what they're supposed to do and give them roles? We talked about things like AGENTS.md, an emerging pattern in this space where you describe each agent and give it its role, its responsibilities, its perspective, its attitude on life, and other things that may inform the way it does its job, and that makes the agents work better.
Ken: Something that really jumped out at me in that part of the conversation was that when we talked specifically about AGENTS.md and a few others, there were some folks in the room who were like, "Well, that's obvious. Every time you install an IDE, that comes up. We don't need to blip that. We don't need to talk about that."
Then there were others who aren't doing AI coding full-time, like myself, frankly. I had loaded up Cursor that morning and wondered, "Okay, how do I get this context?" I had to go figure it out. To me, the conversation really highlighted a big gap in awareness, where some folks are like, "Yes, this is just what you do," and other folks have never heard of it. I'm curious if you shared that impression.
Neal: Yes. I certainly did. This is one of those spaces of moving so fast that something that was fantastically brand new six weeks ago is now so old, it's passe, and we've already moved onto the next thing. You can miss whole chunks of innovation and just leapfrog over [chuckles] it in a very short amount of time.
Bryan: Yes. We even talked about one day making an MCP for the Radar itself. It's affecting everything we do, even in a meta way, which I thought was amusing and also a bit scary at the same time. [Chuckles]
Neal: During our meeting, we found out, I think it was on the Thursday of our meeting, that the first malicious MCP server had shown up in the wild. As soon as any innovation comes out, nefarious bad actors will find a way to utilize it. As with all new technologies, and AI in particular, beware of new capabilities leading to new attack vectors.
Ken: Yes. I think that's important to mention, because I was talking over the themes with one of our other podcast hosts, who's not here today, Lilly Ryan, who also runs a large part of our InfoSec department at Thoughtworks, talking about MCP servers. She was noting that they see behaviors where people are doing authentication and authorization at the wrong level.
They have something in the middle, whether it be an MCP server or something similar, that is doing the authentication and authorization, and then, all of a sudden, people are getting access to stuff they shouldn't have access to. This is an area where it's moving fast enough to be a little dangerous, if I can say so.
Neal: Oh, absolutely. Yes. I think that's absolutely true. There are a lot of implications to the speed at which we're adding innovation that I don't think we fully realize, and I think the black hats will come up with them before the white hats will. [Laughs]
Ken: Yes. Neal, you were talking about coding agents and giving them context and so forth. As a callback: for the last volume of our Technology Radar, our CTO and a couple of Thoughtworkers joined us to talk about context engineering, so folks can go back and check that out. But that's only one part of how coding is changing. The next theme is AI coding workflows. I'll start with you, Bryan. How is AI changing the way we create code? I don't just mean it creating the code; how is the workflow different?
Bryan: Well, it's getting plugged into every aspect of development. It's no longer just "use Claude, give it some code, and copy the result." It's now plugged into your pipelines, and it's even being run inside of Kubernetes clusters, for example. You're starting to see feedback loops coming from every single layer of development.
At first, some of these techniques seem like far off in the future or scary, but you start to adopt them, and you're like, "Oh, I'm actually starting to move faster." It's like I've got this junior engineer that keeps me in certain guardrails, and I'm starting to use it that way myself in a lot of ways. I think it's really exciting, actually.
Neal: Well, one of the hallmarks of a theme for us is that it encompasses several of the categories on our Radar: techniques and tools and platforms. This is definitely one of those things that spanned a whole bunch of the surface area of our Radar. Take techniques like using AI to understand legacy codebases: we're seeing clear results there. There's so much hype and ridiculous hyperbole in this world, but that's a concrete thing that we've seen actual results with, as is GenAI for forward engineering, another technique where we've seen actual results.
We've talked about agents and interacting with agents; that's part of AI coding workflows now. But there are also tools like UX Pilot and AI Design Reviewer, which look at user experience questions, help you design user experiences, and then help you review user experience issues, while others like v0 and Bolt will generate user interfaces for you for prototyping purposes. You're using AI to help you build things like that.
Bryan: The code coming out of those isn't as bad as it used to be. You used to have these UI tools that would spit out the most horrible code you've ever seen, and now it's starting to really improve to the point where you're like, "Oh, some of this is usable and a good starting point." It's really impressive now.
Ken: One of the things that came up, and it's partially pace of change and partially guardrails, is random Enterprise A approved some tool months ago, and I won't name names, and it worked reasonably well, and it was the state of the art. Since then, it's been leapfrogged. There's examples in every category, so I'm really not picking on anybody in particular.
What happens from a coding workflow perspective is that the people on the ground are like, "This really doesn't work. It writes bad code." Bryan, you just said the code is better. Do either of you have practical advice on these workflows? How do you keep up to speed? How do you deal with, "Okay, I need to use this new model that's better at this. The latest version of Claude came out," or "The latest version of Flash came out"? Is part of your workflow updating your workflow?
Neal: It has to be, in the early days of this. It's similar to, though not quite as churny as, the JavaScript ecosystem, where everything was getting supplanted every few weeks as your core tool for a given job. Part of the problem in JavaScript was that everything was so granular: instead of having a dozen dependencies, you'd have hundreds of dependencies on teeny little things, and that drove a lot of churn. The problem with AI is not granularity; it's just the volume of change that's happening right now. I think to be effective in this world, some of your time has to go to constantly optimizing your workflow to reflect what's happening in the ecosystem.
Bryan: I would say it's similar for me. You have to adopt that fast pace. As a really concrete example, I've seen some engineers on the ground who will keep the same thread going with an LLM for weeks, and the token count just goes up. I've done that, too. But, like you said, models are coming out every day, so you have to get more comfortable with letting go of that thread and moving into another one.
Most of these tools let you take the latest model, point it at your current directory, and continue from that point. You can even start working with prompts like, "Okay, this is what I was working on in the previous thread. Here's the context of that; now let's continue forward from there." You get more and more comfortable with letting go of the thing you were working [chuckles] with yesterday, because there's a better one today. It gets easier and easier to adopt that workflow.
Neal: One of the things that was overheard at the Doppler meeting that I've been spreading around — it's one of those things that once you've heard it, you can't unhear it. Part of the limitation, of course, with GenAI is the context window. You can only put a certain amount of context before it starts losing what it's talking about. The observation at the Doppler meeting was why didn't they just call that attention span? I can't unhear that now because that's what it should have been all along. That's its attention span, and you've exceeded its attention span.
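To make the context-window ("attention span") point concrete, here's a small, model-agnostic sketch of the kind of trimming many tools do: keep the system prompt plus the most recent messages that still fit a budget, and drop or summarize the rest, which is roughly what Bryan describes when carrying context into a fresh thread. The word-count budget below is a crude stand-in for a real tokenizer, and the message contents are made up.

```python
# Crude sketch of context-window ("attention span") management.
# Word counts stand in for real tokens; an actual tool would use its model's tokenizer.

def size(messages):
    return sum(len(m["content"].split()) for m in messages)

def trim_to_budget(system_prompt, history, budget):
    """Keep the system prompt plus as many of the most recent messages as fit."""
    kept = []
    for message in reversed(history):              # walk from newest to oldest
        if size([system_prompt] + kept + [message]) <= budget:
            kept.insert(0, message)                # preserve chronological order
        else:
            break                                  # older context is dropped or summarized
    return [system_prompt] + kept

system = {"role": "system", "content": "You are a coding agent working in this repo."}
history = [
    {"role": "user", "content": "word " * 900},        # long, old exchange
    {"role": "assistant", "content": "word " * 900},
    {"role": "user", "content": "Now refactor the payment module."},
]
print(len(trim_to_budget(system, history, budget=1000)))  # 3: the oldest message is dropped
```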
Ken: I am going to use that. There's a sentence in the actual theme about how we mean the whole team, and I'll give an example of where we really do mean that. If you go and download, hopefully, or you have or will, depending on when you're listening to this podcast, the latest version of the Radar, you may notice that the look and feel is a lot more shiny and translucent, those sorts of things.
The designer who's done the Radar for the last several volumes is, like most designers, probably (I'm guessing; I don't want to put words in her mouth) a little threatened by some of the thoughts of what other people think AI can do. What she did do is say, "This is the feeling I want to get across. This is what I want to do."
Then she used the AI tools to add the translucency, the three-dimensional feel, and that sort of thing. In our review meeting, where we decide which design we're going to use, she mentioned that it allowed her to do things she just never would have had time to do in the past. So we really do mean the whole thing. It's not just code. It's everything from requirements analysis, to design, to MVPs, to the whole deal.
Neal: I've been trotting this analogy out for a bit, though it turns out this one is a contextually tough analogy to trot out, because it's fairly common nomenclature that started out in hacker culture in the US. This process is referred to as "yak shaving." Are you both familiar with yak shaving?
It turns out most people in Europe have never heard of yak shaving, which surprises me, because I thought it was pretty universal, so I will describe it for those of you listening to this podcast who have no idea what I'm talking about. Yes, we are talking about the yak, the big furry animal that comes from Asia, and shaving it. Like so many things in computer science terminology, this actually comes from a cartoon, "Ren and Stimpy," which aired on MTV decades ago. There was apparently a sign about a yak shaving contest.
To shave a yak in computer science terms means that you start to solve a particular problem, but then that problem requires you to solve another subproblem, and you start working on that subproblem, which requires you to solve another subproblem. You get about six levels deep and you realize, "Wait a minute. Solving this sixth problem is nowhere near worth the effort of solving the original one, but I'm caught in this loop of solving problems." That's yak shaving. It happens to all technologists.
What I've realized is that for experienced developers, designers, or team members, GenAI is fantastic for automating yak shaving. I know what outcome I want, and GenAI handles a bunch of those details for me. Maybe not all of them, but it handles a lot of the mundane ones, exactly like you said about our designer: she just never had time before to apply the shininess. Shininess is a yak shave, but now GenAI can automate it for you. For already experienced people, it's great for automating mundane tasks like that, where you know what outcome you want and you just need to get there.
Ken: One of the things that I hope we're known for at Thoughtworks is being pragmatic and saying that, "Hey, while we're excited about these things, there are real issues that you need to take into account." Our fourth theme is emerging AI antipatterns. I guess, Neal, I'll start with you because I know that you've written several books on things like architecture. Why don't you introduce this theme for us?
Neal: Sure. Something unique about this Radar: all four themes reference each other. I put that in the first draft, and I was happy to see it survived into the final draft. Like I said, in the past the themes were mostly independent, but because these were all AI, they all point to each other, and several of them point to emerging AI antipatterns. That is inevitable in a space that's moving as fast as AI.
First, we need to make sure that people understand the context. An antipattern is not just a bad thing that I did; an antipattern is something that initially looks like a good idea but turns out to have so many negative consequences or side effects that you probably shouldn't do it. It turns into something bad, and that's exactly what an antipattern is. We're going to see so much of this in the AI world because it's moving so fast, and so many promising things come along that then turn out not to be great ideas.
A perfect example, one of the ones we called out as an emerging antipattern, is text-to-SQL solutions: they initially seemed like a great idea, but now that we've had experience with them on the ground, it turns out they're more trouble than they're worth. This is a great example of a broader pattern: if you look over the history of our Radar, the Assess ring, which holds things we're trying out that seem promising, is much, much more populated than the Trial ring, which holds things that have proven themselves and that we've used in production. A lot of these things would have been in Assess, but then we tried them in the real world and they couldn't make it past that barrier into Trial.
Another antipattern that we're seeing here is complacency with AI-generated code, and it's related to something that we blipped on our last Radar, AI Accelerated Shadow IT, and to something Ken was talking about earlier. You have more and more people generating these solutions using AI, and it's like, "Oh, this solves a real problem for me. Let's put it outside the firewall and put it on our corporate network."
Suddenly, your security people are freaking out because it's like, "What is this alien artifact that has shown up in my ecosystem that has not gone through the vetting process that needs to happen?" That's AI-accelerated shadow IT. Then there's complacency with AI-generated code. As the coding assistants get better, it creates a trap, because they get just good enough that it's like, "Well, I don't really need to look at that."
The literal workflow we talked about at this meeting: in the past, we thought, "Well, we really need to look at that to make sure," but now it's going to be, "Well, it generated code. I'm sure the code review will catch anything bad, because it seems to work." Then code review is like, "Well, all this stuff has been pretty high quality so far, so we're not going to bother." More and more, you're getting code that hasn't actually been curated, which obviously creates a problem because of all the factors we've been talking about: attack vectors and hallucinations and other features of AI-generated code.
Bryan: One of the things called out in this theme that was pretty interesting to me was spec-driven development, which sounds like a really good idea. You hear about test-driven development and you're like, "Oh, that's a good thing, so maybe we should do this, too." But there were two areas of concern with spec-driven development. One of them is that it drives a need to overspecify your application: you're predefining everything about your application upfront so that you get a better AI-generated result.
What ends up really happening is you end up with way too many files that are completely impossible to maintain, from what we've seen of the tooling out there so far. It actually becomes, like Neal was saying, shaving the yak: you're getting down to layers that are more complex [chuckles] than what you were trying to solve in the beginning.
The other problem we were seeing is that it pushed teams towards an almost waterfall-like approach, where they're trying to define everything upfront. Think about test-driven development: you write a small test first, then the code to make it pass. In this context, that's not really how the technique is used. It's more like define everything up front, and then everything will be generated perfectly. That's not the reality we've been seeing with some of the things we've been testing; it's similarly early days in that space, in that technique. It was an interesting one to talk about.
Ken: Neal, I'm going to put you on the spot a little bit, just because of all your work in architecture. Although they didn't make the Radar and they're not part of the blips, in your world of doing workshops and all of those things, what are some of the antipatterns you see coming out from an architecture perspective that people should be aware of, or at least watch out for?
Neal: Trying to use AI for architecture solutions is a terrible idea, because the essence of architecture is trade-off analysis, and AI is terrible at doing trade-off analysis because there's no real logic; it's all pattern matching. The way it would work great is if a thousand other people had had exactly your architecture problem, solved it, and posted it as a corpus online that you could give as context, and then you could ask the AI, "Hey, do you have a good solution for this?" The pattern matching part of it would probably find you a good solution to that architecture problem, but that doesn't exist; that corpus isn't out there.
One of the things that I like to say is that from an architect's standpoint, GenAI is terrible at answers but great at questions. I want to give it my existing design and ask it, "What holes exist in this design? What am I missing here?" Out of 10 things, seven will be irrelevant or ridiculous, two of them will be things you thought of, and one of them will be like, "Oh, yes. I didn't think about that." That's a really good use in terms of architectural decision-making.
I think there's a real problem here with complacency with AI-generated code. One of the things I'm responsible for as a software architect is internal code quality because I don't want to build a project that we have to throw away in six months because it's trash inside. I need to build something we can build on for five years, eight years, 10 years. I need to pay attention to the internal structure and AI, by default, uses brute force to solve problems.
If you ask AI, "I need something for all 50 states," it'll give you a 50-state switch statement. It's not great at abstractions. One of the things I think architects should look at is the complexity of the code that's being generated. Are there good abstractions in the generated code? Refactor toward design patterns and improve the quality of the code that gets generated, rather than just taking it as it stands.
One of the pieces of advice I keep giving architects is "get ready for a tidal wave of functioning but terrible code." Part of the job of developers and architects is going to be reworking that working code into well-structured code so it's a good foundation for the future.
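As a toy illustration of the 50-state point, here's a hedged sketch of the kind of refactoring being described: replacing a generated branch-per-case chain with a single data-driven abstraction. The tax rates are illustrative placeholders, not real figures, and the function names are made up.

```python
# The shape AI often generates: a brute-force branch per state (abbreviated here).
def sales_tax_generated(state, amount):
    if state == "CA":
        return amount * 0.0725
    elif state == "GA":
        return amount * 0.04
    elif state == "WA":
        return amount * 0.065
    # ...47 more branches...
    raise ValueError(f"unknown state: {state}")

# The refactoring an architect would push toward: one abstraction, driven by data
# that can be maintained and reviewed independently of the code.
TAX_RATES = {"CA": 0.0725, "GA": 0.04, "WA": 0.065}  # ...plus the other 47 (placeholders)

def sales_tax(state, amount, rates=TAX_RATES):
    try:
        return amount * rates[state]
    except KeyError:
        raise ValueError(f"unknown state: {state}") from None

print(sales_tax("GA", 100.0))  # 4.0
```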
Ken: Great. With that, I will say thank you to our guests. I hope the listeners enjoy the Thoughtworks Technology Radar, Volume 33, which, I think, comes out the first week of November. We're recording this at the end of October. Neal, thank you very much for your time.
Neal: Always a pleasure, Ken.
Ken: Bryan, thank you very much for joining us today.
Bryan: Thanks for having me.