Brief summary
Managing technological change in an organization — particularly a large and complex one — has always been challenging. With the rapid adoption of AI in all kinds of spheres, from knowledge management to software development to content creation, it's becoming more difficult than ever. How do you strike a balance between governance and safety on one side and autonomy and empowerment on the other? How should teams be structured, and how should they work together?
In this episode of the Technology Podcast, Matthew Skelton and Manuel Pais — authors of the influential Team Topologies book — join hosts Birgitta Böckeler and Ken Mugrage to discuss what AI means for organizational design. They discuss how AI is changing team capabilities, what it means for cognitive load and knowledge sharing, and how to ensure there's structure and control without constraining experimentation and creativity.
With the second edition of Team Topologies set to be published in September 2025, Matthew and Manuel used the conversation to explore the evolution of their ideas and what they've learned from working with and listening to the stories of many different organizations around the world.
Learn more about Team Topologies.
Episode transcript
Birgitta Böckeler: Welcome to a new episode of the Thoughtworks Technology Podcast. This time, we'll be talking about AI once more, and more specifically about using AI in engineering organizations, or more broadly, even for knowledge work. My name is Birgitta Böckeler. I'm a distinguished engineer at Thoughtworks, and I will be one of your two hosts today, together with my colleague, Ken Mugrage.
Ken Mugrage: Hi, everybody. I'm Ken, one of your regular hosts. Good to speak to you again.
Birgitta: We have two special guests today of Team Topologies fame, Matthew Skelton and Manuel Pais. Matthew, do you want to go first and quickly introduce yourself?
Matthew Skelton: Sure. Hi. It's great to be here. Thank you so much for inviting us. We're super excited because we've got the second edition of the Team Topologies book launching in September 2025. It's great to be talking about it at this stage, particularly, the AI angle. It's a great time to be sharing some insights around this.
Manuel Pais: Yes, the same. Thanks for the invitation. It's really interesting to talk about AI in the context of organizational design and how things might change in the future. Yes, I really appreciate the invitation.
Birgitta: Welcome. I think we should set the scene first, maybe what type of AI usage we're talking about, and for what types of use cases. I was already talking about, we want to talk about knowledge work, how to use AI for that, and probably more specifically in the context of engineering organizations. There's such a broad range of situations in which we can use AI. There's a lot of talk about throwaway work for AI all the way to more long-lived, sustainable work. What type of range are you all seeing out there, and what types of use cases are you most interested in?
Matthew: That's a good question. There's different flavors of AI to start with. There's the generative stuff, which is what most people think about, and there's the traditional AI and machine learning stuff. The traditional stuff clearly is massively valuable in terms of protein folding and large dataset analysis, and a whole lot of amazing stuff that you can do there. The finding a needle in a haystack data search problem.
That's obviously all still relevant, but the spotlight is purely on the generative AI stuff at the moment. That's actually slightly changing. I saw there was a post, I think yesterday, from some of the main model providers who are now starting to look at not just generative AI, but also approaches that actually have a world model rather than just next-word prediction. Finally, they're realizing that there's more to life than just generative AI, but hey, GenAI--
Birgitta: Just language models even. Even within generative AI, we're very focused on language models, right?
Matthew: Yes, exactly. It's to be expected, and it's all fine. Your question was, which things are we seeing, and which things are you interested in? I'll answer that slightly obliquely: I'm interested in the applications of technology in general which empower people to succeed. I'm not that interested in technology for its own sake. I'm very happy to get involved in technology when it's quite clearly used to empower people, to achieve a goal, or to succeed better, particularly in the context of a team.
There's loads of people who are very much technology-first, and that's totally fine. That's their interest. For me, it's very much connected to, is this helping a group of humans to thrive or not? My interest is largely determined by the way in which we think about using technology, not the technology itself.
Manuel: In my case, what I'm seeing, especially with larger companies, is that in the beginning it's really mostly, okay, let's use Copilot or let's use Cursor or any of these tools that can help us generate code faster. With large companies, that's the step they're willing to take at the moment, not much more, which is fine. That's not a problem. What worries me, or raises the alarm for me, is always: does the organization understand how much more this is still going to evolve, and are you prepared to learn along the way? It's two things.
I'm mentioning this so that people don't feel like, "Oh, we're behind. Everyone else is doing LLM models and is doing all this stuff." Actually, most big companies are taking these small steps, which is fine. They need to be wary of risk and ethics and all this stuff. What I'm more worried about is we take some stuff in, we use some tools, and we think that's it when a lot of it's still going to change.
We've had some chats about this, how it's not just code. Obviously, in the lifecycle, especially of software delivery, there are so many other phases where AI might change how we do things, from requirements to migrations and stuff like that. Organizations, in my opinion, just need to be aware that if you don't have this capacity yet, you need to start building the mechanisms to learn about this stuff and to evolve it over time, because we're nowhere near done with what this is going to bring.
Birgitta: I always say it's such a weird-- When you think about the three horizon model, with Horizon 1, 2, and 3, it feels like a lot of this technology is definitely Horizon 2, if not some of it is Horizon 3, and there's so much evolving still. Maybe because of the hype, or because the technology is so accessible, and you can do something with it very quickly, maybe because of that, a lot of organizations seem to start already approaching it as if it was H1, Horizon 1.
Like, "Let's do a rollout, let's scale, let's measure." I understand that, and I think measuring makes sense, but keeping in mind that because it's changing all the time still, your measurements might already be obsolete again in two months. I think like the way that organizations are approaching it more as this like, "Oh, yes, it's here, let's scale it," is sometimes very premature still. The hype is giving us this distorted version of the maturity of the technology.
Matthew: Particularly when the measurements are around the adoption of the tool and how much of your day is spent using a ChatGPT prompt or whatever. Well, so what? Who cares? That's just not a useful business outcome. Measure shorter time to value, measure reduced cost to serve, measure something that actually is meaningful, and then the people in the C-suite, the COO in particular, are actually going to care about it. If you're not measuring anything that they really care about, they're looking at what's going on in technology and going, "Ugh, another technology noodle, people are noodling on this thing. Where's the value?"
It's a bit awkward in this particular case because the CEOs of the organization are often under pressure from shareholders and other people to demonstrate increased AI usage, blah, blah, blah. It's a bit skewed. Two examples I've seen recently, which are actually similar, are examples where I think AI, particularly generative AI, is really shining, and it's because it's empowering someone to do something that they couldn't do before. In the case of one, it's a tool that helps with incident diagnosis, so rapid incident diagnosis for digital systems.
Instead of having to work out how to query information from 17 different tools, you just use a chat interface or something equivalent, and it will go off and pull information for you. If that means you get to the heart of the incident in seconds rather than minutes or hours, then that's actually quite empowering, and you get to the bottom of it. That was quite a nice example. There's another example: I had a demo yesterday from a vendor who works in the compliance space, and they've got a tool which is all about capturing audit and compliance information.
Then they've got this really solid foundation underneath, which is based on cryptographic hashes and stuff like this, all absolutely sound, standard software engineering techniques. Then they've layered a chat interface on top, which allows the auditor persona, the auditor, to ask a question in human terms, in natural language, English, like what software was deployed into the production environment on the 15th of March. That means they don't need to learn a query language and a bunch of other things, and it's exactly the same question they would ask of a human being, but they can ask it of the tool.
Again, that kind of empowering-people lens is always the way to think about technology. Those were two really good examples I've seen recently where the way in which they'd used generative AI was driven by the need to make someone's life easier, and it had empowered them to do something which they couldn't really do before. It was really great to see those emerging as good examples.
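(For readers who want to picture the kind of foundation Matthew describes, here is a minimal, purely illustrative Python sketch: an append-only, hash-chained audit record with a structured lookup that a chat layer could translate a natural-language question into. The names and structure are invented for illustration and do not reflect the vendor's actual product.)

import hashlib
import json
from dataclasses import dataclass, field


@dataclass
class AuditLog:
    """Append-only log where each entry is chained to the previous entry's hash,
    so tampering with any historical record becomes detectable."""
    entries: list = field(default_factory=list)

    def append(self, event: dict) -> None:
        previous_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((previous_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": previous_hash, "hash": entry_hash})

    def deployments_on(self, day: str) -> list:
        """The kind of structured lookup a chat layer could translate
        'what was deployed on the 15th of March?' into."""
        return [e["event"] for e in self.entries
                if e["event"].get("type") == "deployment" and e["event"].get("date") == day]


log = AuditLog()
log.append({"type": "deployment", "service": "payments", "version": "2.4.1", "date": "2025-03-15"})
log.append({"type": "config-change", "service": "payments", "date": "2025-03-16"})
print(log.deployments_on("2025-03-15"))

The chat layer only translates the question into that structured lookup; the trustworthiness comes from the hash-chained foundation underneath, which is the point Matthew makes about sound engineering sitting below the AI interface.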
Manuel: Just quickly, the opposite of doing that, which we're starting to see in teams, and people are calling it out, is that if you don't look at your organization and the team of teams as a system where things are related to each other, then you might optimize locally again. You're saying, "We're using these tools to generate code faster." Okay, great. You're optimizing one part of the lifecycle, and then that actually makes the system slower because now the team is like, "Okay, what's next? What's next?" You don't have capacity on the product side.
If you have that split, you don't have capacity on the product side to make those decisions faster or in line with the acceleration of the code generation. Then you have all the other questions about whether that code is maintainable, what happens in six months when you need to change it and no one knows exactly what the code does, just that it works. That's the way in which we might think we're empowering teams and accelerating, but we're actually not looking at the organization as a system that has all these dependencies, and you're optimizing for something that is...
Birgitta: Also downstream, right? You were talking about what's next, what's next, but also downstream, you have to test faster. Do you need more power now in your delivery pipelines, are they actually parallelized enough so you can push enough through, can you even deal with more feature releases to your users, all of those types of things, right?
Ken: Yes, because I thought it was interesting, Matthew, that you were talking about the CEO and COO and what they care about, and that you seemed to be talking about the more useful cases being about understanding what has been done, past tense. Not really risk analysis upfront, but you talked about troubleshooting and finding a root cause.
Root cause is a horrible term, but those types of things. Then Manuel was talking about being able to do code upfront, but how that doesn't actually make it faster. One of the things that I wonder about is: where's the risk basis? Who gets up at 2:00 AM and troubleshoots that code that was done by an AI that nobody understands? It sounds like you're saying that this is more useful on the back end than the front end. Does that make sense? Am I just rambling too much?
Matthew: No. Sorry, I didn't mean to overemphasize the after-the-fact use of AI, particularly because there are definitely use cases for generative AI in knowledge work, certainly in digital product development. One of the clear things is that it's generative. One of the things it can do is generate options. It can generate lots of different versions of text. It can generate lots of images. It can generate lots of different product ideas. It can generate lots of different versions of a service. We can run those against synthetic users and see which version gets the best traction.
Then we might pick, say, two or three different-- We might end up basically getting to an A/B test really, really quickly, but what we've actually done is an A, B, C, D, all the way to Z test, because we've been able to test 26 different combinations very, very rapidly. It just wouldn't be feasible before with a purely human way of doing it. It's giving us more options. Generating those options assumes that you know what value looks like. You've still got to understand what value looks like for the value consumer. If you're just randomly throwing darts at the wall blindfolded and seeing what sticks, then it's not really helping.
We need to know where value is. We've got the dartboard visible, and we've got our eyes open. This analogy is going to fall over in a minute. We're sending multiple darts towards the dartboard, and we're seeing which ones actually land in the right place. Being smart about what GenAI in particular can do really helps us guide what we're doing. That's actually empowering. By generating 26 different options, it's empowering the product people and product teams to go, "Okay, well, we can pick the best one. Now, we've increased our confidence to be able to work on a thing which actually is going to move the needle for the value consumer."
Again, that use of it is empowering them to be more successful. That's brilliant. Using that lens seems to be hugely important. Things like legacy technology migration, I think it was you, Birgitta, who actually emphasized that recently in one of your articles. You've got millions of lines of code in some ancient language, whether it's COBOL or some ancient C or an old Java thing or whatever, it doesn't really matter. Actually, there aren't enough people to work on that technology anymore. Can you do some sort of migration into a newer language which either has people or, to be honest, these days, a better set of source code from which we can train LLMs?
On the basis of that, if you can create a working system which is then more maintainable, we're in a good place. Again, we're using the generation aspect in a way which makes sense, because we need to generate massive amounts of code in order to migrate away from an old system which is basically legacy, which we can't evolve. There are use cases which are quite clearly really super for this technology, but let's be realistic about where these capabilities are, basically.
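(A minimal sketch of the generate-then-evaluate loop Matthew describes earlier, the "A to Z test," with hypothetical placeholder functions standing in for a generative model and for synthetic-user testing; in a real setup both would be external services, and the names here are invented for illustration.)

import random

# Hypothetical placeholders: in practice, variants would come from a generative
# model and scores from synthetic-user trials or another evaluation harness.
def generate_variants(brief, n=26):
    return [brief + " (variant " + chr(ord("A") + i) + ")" for i in range(n)]

def synthetic_user_score(variant):
    random.seed(variant)  # deterministic stand-in for a real evaluation
    return random.random()

def shortlist(brief, keep=3):
    scored = [(v, synthetic_user_score(v)) for v in generate_variants(brief)]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:keep]  # the few options worth putting in front of real users

for variant, score in shortlist("Checkout flow redesign"):
    print(round(score, 2), variant)

The key assumption, as Matthew notes, is that the scoring function actually reflects value for the value consumer; generating 26 options against a meaningless score is just throwing darts blindfolded.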
Birgitta: Let's also talk about the foundation that you need. Manuel was already starting to talk about how things can break down if you apply it just in one place. I've also recently thought a lot about there are clearly signs that it can really, really amplify a good developer in a good way. I have 20, 25 years of coding experience, and it's really great for me to use this and amplify myself. I've been thinking about when you extrapolate that to an engineering organization, it almost seems like it's similar.
When you have a really highly tuned engineering organization that's really good at delivering tens or hundreds of times a day, and has really mature processes and really high-quality delivery processes and tooling, AI can amplify that really well. If you don't have that foundation, you might also just put bandaids on things or amplify bad things. Of course, Team Topologies is full of things that create a foundation for something like that. Can we talk a bit about the things that you have in Team Topologies and how they can create a foundation, almost a prerequisite? Is it a prerequisite for generative AI to do some of those things, question mark?
Matthew: There's a phrase that came out. I'll give it to you in a second, Manuel, but there's a phrase that came out from a discussion on LinkedIn recently. I was in conversation with an AI product strategy and product expert; he had a lightbulb moment, and he ended up saying he realized that Team Topologies is like the infrastructure for agency: agency in human teams and agency with AI agents. Because the language, and the principles, and the patterns in Team Topologies have an assumption that what we're trying to do is provide agency or empowerment to--
When we wrote it in 2018, it was published in 2019, we were talking about teams of humans because this is pre-generative AI. Our assumption, I think it's fair to say, Manuel, is that we expected organizations to want to empower teams, want to give teams agency to achieve things. It turns out the principles behind that seem to be really useful for considering how groups of AI agents would work, too.
There are lots of fundamentals underneath that, because it relates to knowledge work, it relates to concepts, it relates to domain fidelity, it relates to a lot of these things. I think there are some underlying truths sitting underneath the Team Topologies principles which appear to set things up really well for an AI-based way of doing knowledge work.
Birgitta: Can you give an example?
Matthew: You've got the sense of ongoing stewardship or ownership. We called it ownership in the book, and now, I talk about stewardship, ongoing looking after a thing and evolving it. AI will definitely provide the opportunity for some software to be throwaway, so we don't need to care about evolving it and stewarding it. There'll definitely be classes of software-based services which you want to continue to evolve, like your pension pot, for example. If you've got a pension, you want that to be evolved for the next 60 years.
"Thank you very much. I don't want a new version every week that then breaks every single time. That's just madness." A whole load of software that we rely on in our lives, transportation, and banking, and shopping, and a load of other things, communications, that you'd want to be evolved very safely and gradually, still rapidly but in a gradual way rather than being thrown away every time. The sense of stewardship of a thing, you've got historical context, you've got some understanding of why something is being evolved in a particular way.
There will be some future AI stuff that comes out that goes beyond generative AI, that has a bit more of a model of the thing it's working on. Once you start talking about models, that connects back to the concepts around domain-driven design, which has been with us since 2003, 2004, from Eric Evans and all that community, where we're thinking about ubiquitous language, we're thinking about the meaning of words, and we're making sure that we're not accidentally coupling things together that shouldn't be together.
If you've got multiple systems with the concept of an order, like I'm placing an order, but you've got three different systems, does the word order in those three different systems mean the same thing? It probably doesn't, or it might do. Who knows? We've got to understand it. The danger, of course, is if you just give access to an LLM, and it sees the word order, it doesn't understand anything. It's just predictive.
Does it then start to couple your architecture in ways which are actually detrimental, because then you've got a coupling in the wrong place, and you can't go quickly, and so on and so on? It might even merge data which shouldn't be merged. Because Team Topologies talks about stewardship around sensible boundaries that work for flow, we've set something up there which is really important: boundaries around things that work from a conceptual perspective. We are trying to maintain and enhance domain fidelity, the meaning of things, the meaning of ideas, the meaning of concepts, as we're working on these digital systems. That's a key part of it.
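(To make the "order" example concrete, here is an illustrative Python sketch of two bounded contexts keeping deliberately separate models of an order, with an explicit translation at the boundary. The class and field names are invented for illustration, not taken from the episode.)

from dataclasses import dataclass

# Two bounded contexts, two deliberately separate models of "order".
# Keeping them apart, and translating explicitly at the boundary, is what
# stops a tool, human or AI, from quietly coupling them just because the
# word is the same.

@dataclass(frozen=True)
class SalesOrder:          # "order" in the sales context: what the customer bought
    order_id: str
    customer_id: str
    total_cents: int

@dataclass(frozen=True)
class FulfilmentOrder:     # "order" in the warehouse context: what needs shipping
    order_id: str
    warehouse: str
    items: tuple

def to_fulfilment(sales: SalesOrder, warehouse: str, items: tuple) -> FulfilmentOrder:
    """An explicit translation at the context boundary, rather than one shared
    'Order' class that both contexts reach into."""
    return FulfilmentOrder(order_id=sales.order_id, warehouse=warehouse, items=items)

sales = SalesOrder(order_id="o-123", customer_id="c-9", total_cents=4500)
print(to_fulfilment(sales, warehouse="berlin-1", items=("sku-1", "sku-2")))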
Manuel: Just connecting back to your question before, yes, I think if you have both the technical foundations, like you're saying, we have a good system for delivery, and we have things automated and so on, that's going to help us use things like GenAI and plug in the things in the right place so that it actually makes the whole thing go faster.
Then, from an organizational perspective, I think the same, and in line with what Matthew was saying. If you've already been thinking about the organization in a more systemic way and looking at dependencies and where is flow blocked and what can we improve, you're going to be able to make, I think, better decisions in terms of AI and agents and all these things.
For example, think of an organization that, like Matthew said, has already done the work of identifying what are actually our business domains: what are the core domains, what is supporting and generic, using the DDD terminology. If you have a good grasp of what exactly we deliver of value to our customers and what the business domains are, and we've been able to decouple the things that should be decoupled so that we can have things going in parallel faster, you will be in a much better position to take advantage of GenAI and all the tooling that might come next, and you might make different decisions.
For example, you might have, let's say, two or three teams working in parallel on adjacent domains, and you might say, "Actually, now we can have one team working on these three domains at the same time because we're able to go faster on code generation, maintenance, and maybe a bunch of other stuff that we can accelerate," or you might say, "Actually, we keep these boundaries the same, but we can do it with smaller teams because now we're able to partially reduce the cognitive load by using these tools and new ways of delivering the services and so on."
We basically have more options from an organizational perspective as well to leverage this new technology because you've done the work before of understanding what are really our business domains, our value streams, and so on.
Matthew: Absolutely. The thing that does not change with generative AI, the thing that absolutely does not change, is when we talk about knowledge work in particular, and certainly when we're talking about building digital systems, the thing that does not change is that we are representing intent as code. That's it. That's what software is. We've got a business intent. We're representing that somehow in code. Over time, we represent it--
Birgitta: We want to know if it's actually coming to life, to verify it, right?
Matthew: Yes, and to verify it. Over time, we get to a point where everyone's happy that the intent is accurately represented. Then the intent changes because the business context changes, so we have to change that thing. That's the cycle that we all know. That doesn't change, certainly, with generative AI. The name of the game is still representing that intent in an executable form in a computer. Nothing has fundamentally changed.
All the rules that we know about, that have been developed and proven around loose coupling, good boundaries for flow and resilience, and all these things like security boundaries, still apply. You wouldn't dream of giving-- There was a chat on LinkedIn today, someone was talking about the CI/CD tool Jenkins as an example. You've got the idea of a Jenkins agent. I can't remember her name. She's involved in the DX stuff.
Manuel: Laura Tacho.
Matthew: Exactly. It's Laura.
Manuel: I think it was Laura. Yes.
Matthew: Yes, it was her. Thank you. She used the word agent because it has the same terminology there. It connects to generative--
Birgitta: I just realized that the other day as well. Yes. CI/CD pipelines, we also call them agents. Yes. [Laughs]
Matthew: Manuel and I cut our teeth, really, with a lot of experience in CI/CD, and that's where we met, Ken, isn't it? We met at a conference that was focused on continuous delivery. We both saw a lot of these challenges right in the middle of software delivery, right in the middle of the organization, all the tensions between different teams and handoffs, and all this stuff. That's where a lot of the thinking and so on for Team Topologies came from.
If you've got a build agent or Jenkins agent, it's there doing stuff, automating a bunch of things as part of a deployment pipeline. It would be bad practice to give a Jenkins agent access to all of your production data stores, all of your production environment, and all of your development environment, and the rest of the organization. That would be really bad practice, and no one should be doing that stuff. You need good security boundaries, because otherwise the thing's going to go rogue and so on. We know about this stuff from a software delivery perspective. We've known about this stuff for 15 or more years.
Why would we not apply the same principles to generative AI agents? We should probably be doing that. We should probably be thinking about, "Okay, let's put some sensible security boundaries in place. Let's think about the responsibility of this agent. What makes sense for this agent or this group of agents to be responsible for? Let's limit it to that. Let's give them a clear scope and focus and be able to reason about the risks of those agents operating."
It's just the same as Jenkins agents, but it's also just the same as a team of humans operating. This is why we put security boundaries around what humans can access. There's different ways to do those boundaries, but if we do it in terms of a fast flow of value, then the boundaries look very similar to what we talk about in Team Topologies, because it's coming from a fundamental set of principles.
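(A small, hypothetical sketch of the kind of explicit scoping Matthew is arguing for, applied equally to a build agent or an AI agent. The names and structure below are illustrative only, not any real agent framework's API.)

from dataclasses import dataclass, field

@dataclass
class AgentScope:
    """A narrow, explicit boundary for what one agent (build agent or AI agent)
    is allowed to touch, so its blast radius can be reasoned about."""
    name: str
    allowed_tools: set = field(default_factory=set)
    allowed_paths: set = field(default_factory=set)   # e.g. repos or environments

    def authorise(self, tool: str, path: str) -> bool:
        return tool in self.allowed_tools and any(
            path.startswith(prefix) for prefix in self.allowed_paths
        )

# One agent scoped to the payments team's test environment only.
payments_test_agent = AgentScope(
    name="payments-test-agent",
    allowed_tools={"run_tests", "read_logs"},
    allowed_paths={"env/test/payments/"},
)

print(payments_test_agent.authorise("run_tests", "env/test/payments/checkout"))   # True
print(payments_test_agent.authorise("deploy", "env/prod/payments/checkout"))      # False

The point is the same least-privilege idea as for any pipeline agent: the scope is declared up front, so the risk of the agent operating, or misbehaving, can be reasoned about.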
Ken: Do you have strategies or advice for organizations that are trying to implement just that? We've already talked about the fact that this is very fast-moving and so forth. You have organizations, they want to educate people on concepts like, I'll use the blanket term guardrails for some of the stuff you just described, et cetera. How does an organization that wants to give autonomy to teams communicate and educate on these topics?
Matthew: That's a great question. [Laughs]
Manuel: In my opinion-- Eduardo Silva, who is also quite active in the domain-driven design space, and I have even created a course in our academy. Essentially, one of the patterns we would expect to use is enabling teams, or the enabling pattern, because effectively that's what we're going to need to some extent. Yes, we're going to need, at some point or on an ongoing basis, some kind of platform, some kind of tooling that helps teams adopt some of this stuff in an easy way that's integrated into their workflows and so on. It's a fundamental aspect of enabling.
There's a really great case study from the company where Eduardo worked called bol.com. They used this approach for data science 10 years ago, where they needed to scale the approaches and the knowledge around data science to all their product teams. They did this with an enabling department where they were not there to do the work of the teams. They were there to help them understand their gaps and understand, okay, in the case we're talking about now, how do you understand the security risks around using certain AI tools and so on, how do you understand what's even available out there, and what's a better option for your workflows.
All of that work is hard to put on the product teams, or stream-aligned teams, as we would call them. You need, I think, some kind of mechanism in the organization, some actual dedicated people who are looking at what's happening outside the organization in AI and bringing that in, almost in a curation approach: bring it into the organization, figure out what's more helpful to do now and what the things are we need to be careful about, and then do a lot of education as well, helping the teams understand what to use and when, and what the risks are. That kind of approach is going to need to be long-lived, because this technology is still evolving a lot.
Ken: Yes. I'm glad you used the term long-lived there because the other half of that is something I know that you all are passionate about, it's cognitive load. If we look at the tool spaces that were around six months ago, our recommendation today would be different than it was even then. Not just necessarily versions, but completely different vendors or what have you, whether it be an IDE or a language model or whatever it is. Again, how does an organization manage that?
You have this enabling organization, you're out there teaching these things, but then it changes. As Matthew said, the context changes or what have you. Is there any kind of practical limit to how often you should rotate with the changes? Do you recommend a time period? Do you recommend a level of risk, a level of benefit? What's the decision factor of, "Okay, I spent all this time saying use tool A, and I'm going to enable you, but now tool B is a lot better"?
Manuel: I think it just needs to be a continuous evolution. Those types of teams, or, in the example I gave of bol.com, an actual department, an enabling area, need to be using their time to learn from the outside as much as they're teaching inside. Obviously, like any good software design or integration approach, you need to not go all in on one tool and couple everything you do to the way that this one tool works.
You need to be thinking, "What does this tool provide me? What are the things that we should use or not use so that we're not too coupled?" In general, you're going to need this enabling approach. In that example from bol.com, what they did was have a team that was looking more at almost sensing the organization: what are the gaps, what are the needs, what are the things that are really going to make a difference for us as a business as well.
Then they had the deployed enabling teams that would be on the ground, if you like, together with the product teams: "Okay, this is what you need to know now. This is what you should be using. Let's work through an example. Let's get you started on some data pipeline or what have you." That combination worked very well: a sensing approach, looking at what's happening internally and what's happening outside in the industry so that we bring in the right things at the right time with the right risk control, together with the more on-the-ground, "Okay, this is what this team needs to know. They should get started doing this and that."
Birgitta: I like that idea of the enabling team as a sensing team as well. One thing is for sure: the speed of this technology definitely shows us once more that control, telling everybody what to do, is just not scalable. When you introduce technology at this speed, you have to enable the teams. You have to empower them to be autonomous to a certain extent. You have to create a certain level of awareness. One thing that I certainly see is that a big part of the enablement is to help everybody become better risk assessors. I mentioned the word risk a few times already, and I find myself doing that on a micro, medium, and macro level.
I do risk assessment all the time when I use generative AI because of its non-deterministic nature. I always think about: what's the impact if this goes wrong? Will I be called at 2:00 in the morning, or is it just a prototype, and nothing will happen? Platform or enabling teams can also give people the tools to get better at determining the impact, determining the probability of the AI tooling they have going wrong, and helping them detect faster when something does go wrong. If you have all of those things in place already and have people who are good at that, I think you're going to have an easier time adopting generative AI.
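(One hypothetical way an enabling team could turn that impact, probability, and detection framing into a shared, explicit checklist; the categories and weights below are purely illustrative.)

# A deliberately simple way of making the "will I get called at 2 AM?" question
# explicit: score impact and likelihood, and treat slow detection as a multiplier.
IMPACT = {"throwaway prototype": 1, "internal tool": 2, "customer-facing": 4}
LIKELIHOOD = {"reviewed by a human": 1, "spot-checked": 2, "fully automated": 3}
DETECTION = {"caught in CI": 1, "caught in monitoring": 2, "caught by users": 4}

def risk_score(impact: str, likelihood: str, detection: str) -> int:
    return IMPACT[impact] * LIKELIHOOD[likelihood] * DETECTION[detection]

print(risk_score("throwaway prototype", "fully automated", "caught by users"))  # 12
print(risk_score("customer-facing", "spot-checked", "caught by users"))         # 32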
Matthew: Definitely agree. I think generative AI, as predicted by people like Simon Wardley, when you're thinking about things like strategic technology change and adoption and things like this, the speed of change in AI is significantly faster than the speed of change in the previous wave, which was Cloud. That was significantly faster than the previous wave, which was, I don't know, desktop PCs or something like that. This is to be expected because each wave is building on the previous stuff, and so we can go faster and faster and faster.
The organizations that are switched on about this, like bol.com and other places, realize that there's the need for this continuous awareness and upskilling, but it's an active thing. It's not just allowing people to do what they want. It's curated. It's saying we need to somehow investigate this technology, but we can't just allow 70 out of 100 teams to investigate it independently without sharing anything because that's incredibly cost-inefficient. Maybe it's only four or five or seven teams that are going to explore this now, but then what they're going to do is do some focused exploration and report back and share the results so that people can learn from it more quickly.
Sometimes I'll use a tech radar, like the Thoughtworks-style thing, or similar approaches where we're being more deliberate about how we're exploring and adopting technologies like AI. Even just within the AI space, you'll have to be investigating multiple different types of technology for the foreseeable future, but you can't just allow a free-for-all. It's extremely financially inefficient to allow any team to do anything they like in that space without sharing it with everyone else. That's just not a story you can go to the CFO with. She'll laugh in your face because that's just massively wasteful.
At the same time, you can't slow it down to where only one team is allowed to do this thing, because then you're not getting enough parallel discovery of different options and things like this. It needs this kind of active diffusion of knowledge across the organization. There's a dynamic needed, a learning dynamic, an innovation dynamic, particularly for AI, but in general too, now that multiple different technologies are evolving more and more rapidly. That becomes a really important thing from a COO perspective or an HR skills perspective. We can't have people learning stuff and keeping it in their silo. It needs to be actively spread across the organization, irrespective of what the organization is doing.
Birgitta: That's even more true with generative AI, I found, because it's just not a technology where you have a handbook. Everybody's discovering new ways to use it all the time. You have to develop this intuition of when to reach for it. For that, you need all of that social sharing, the social learning, even small stories, people sharing with each other. I found that a lot of people struggle with that because it's a different type of sharing than we used to do previously.
To wrap up our conversation, last question to each of you. A question I get asked a lot is, "Oh, how will software delivery fundamentally change? Is there something that we'll just not do anymore, or that will totally turn it on its head?" Is there anything in Team Topologies where you're starting to get a fundamentally different perspective, where you say, "Oh, this is so much more important than before, or so much less important than before"? Anything that you think will be fundamentally different with generative AI in the knowledge work mix?
Manuel: I would say yes and no. [laughs] I would say maybe when we are looking at if we have large systems, and that's still one big pain point today, and I still talk about that frequently, it's like, "Okay, how do you break this large system so that you actually have more independent services or parts of the system that can evolve more independently from the rest?" There's still a lot of organizations that don't have that awareness or have not invested in really understanding what are our core domains, supporting, et cetera, like we were talking before.
Maybe that's something that becomes less important or less of a problem, in the sense that if we get to a point where it's easy enough to, like we're saying, migrate this legacy system and do it in a new way that is more modular or more service-oriented, whatever it might be, that maybe helps solve the problem to some extent. That's maybe something we'll see less focus on, because maybe the tooling gets to a point where we can actually do these things and it helps us essentially evolve our architecture to be more flow-oriented, if you like.
I'm not 100% sure, but I'm just saying that might be the case. On the rest, I don't think it fundamentally changes the need for thinking about trust levels and what the good boundaries between teams are. It might change, like I said before, in that the boundaries of what one team can take on, considering their cognitive load, might be bigger, because we're able to reduce a certain load of doing things thanks to better tooling.
We can, and it would even be a good thing, take on more of the domain, take on a bigger domain or expand our boundaries as a team. Or it might be that we can have smaller teams that are much more compact, and they work on a smaller boundary of responsibility, but they do it much faster, and they're able to actually understand the business a lot better and decide what the options are that we should work on for our end customers, and how we help them understand better what's going to be helpful for them. In a way, my answer is it's going to generate more options, but most of the principles in Team Topologies remain necessary.
Matthew: I think you've said it all, really. Because in Team Topologies, we were focused on fundamentally flow of value and cognitive load on the humans that are working in this space. Those are the two main principles. I suppose, couple it with long-lived value flows because we need to evolve and look after them, steward them, I suppose. Then, in that context, I think the fundamentals remain because it's still knowledge work. Humans might not be typing out Python or Java code or whatever it is, TypeScript, in the same way as they did in the past, but we might have some helper tooling to do a lot more of that.
Like Manuel said, a team might be able to take on way more. That's fine. If a team takes on way more stuff, then there'll be an opportunity for the whole organization to do more stuff in general. Then we're back to the point where we've got the right number of teams that the organization can sustain, and they're of the right size, which probably doesn't get beyond eight people in the team. It might go down slightly smaller. Also, you might need more skills, different kinds of skills, in the team.
You might need some skills around managing a swarm of AI agents, or you might need skills around diagnosing what happens when a big set of swarm-generated services suddenly goes down. The nature of the skills that are going to be needed might be more operational, maybe. Because we've based the Team Topologies principles on some fundamental aspects of how humans work and what knowledge work is, security boundary, domain boundary, stuff like this, then the low-level detail of the work is almost certainly going to change. People are not going to be doing exactly the same stuff as before.
I think the principles are going to be pretty sound, and we'll see mixed groups of humans plus AI agents. Who knows what that looks like? Who knows what effects that will have? There's still value flow, and there's still cognitive load. Those two things will still be at play. Even for generative AI systems, there's still a sense in which there's a limited amount of context that they should have to be able to work on a system effectively. Is that cognitive load? It's something similar. It's something equivalent, maybe. I think those things are still intentional, still at play. We'll have to see how it plays out. Maybe we're totally wrong. We'll see in the next three years, right?
Birgitta: Ultimately, it's always still about the value and the outcomes and not about the code and the tools we have in our toolbox. The tools change, the skills change, but ultimately, we have to figure out what we're getting out of it. That's always been one of the challenges we've had. Yes. Nice. Thank you so much, Manuel and Matthew-
Matthew: Thank you.
Birgitta: -for being here. You mentioned the second edition of the book is coming up. Do you have anything else to plug before we finish?
Matthew: Manuel and I are doing a marathon day on the 25th of September 2025, where we've got four sessions: APAC, Asia Pacific; EMEA, so that's Europe, the Middle East, and Africa; and Americas; and a special session, Manuel, with you, for Brazil in the Portuguese language.
Manuel: In Portuguese.
Matthew: Yes, exactly. I can't speak Portuguese at all. The first three sessions are in English — anyone in the world can join. The tickets are pay as you feel. Manuel and I will be going through the case studies that are new in the second edition book, talking through what the organizations have done, adding our reflections to it, highlighting things that are really valuable and important that can be used as a template or an example for other organizations around the world.
Manuel: I suspect we'll be having some questions about AI as well.
Matthew: Yes.
[Laughter]
Birgitta: Probably, yes.
Manuel: It's in everyone's mind. If you're interested in the launch, if you go to teamtopologies.com/launch, you'll find all the information.
Birgitta: Great. Thanks again. Then that's a wrap for another episode of the Thoughtworks Technology Podcast.