
Rethinking software governance: Reflecting on the second edition of Building Evolutionary Architectures

Podcast hosts Birgitta Böckeler and Scott Shaw | Podcast guests Rebecca Parsons and Neal Ford
November 17, 2022 | 38 min 43 sec


Brief summary

Building Evolutionary Architectures was published in 2017. In it, Thoughtworks CTO Rebecca Parsons, Neal Ford and Pat Kua defined and developed the concept of “evolutionary architecture” and demonstrated how it can help organizations manage change effectively in an ever-shifting technology landscape and fast-moving business contexts.


The book has now been updated, with its second edition due to be published in December 2022. In this episode of the Technology Podcast, Rebecca and Neal talk to Birgitta Böckeler and Scott Shaw about the new edition and discuss how seeing various applications of evolutionary architecture over the last five years has led them to identify new issues and challenges. In particular, they talk about how the new edition takes up the question of automating architectural governance using fitness functions and what this means for the way we build and maintain complex software systems.

Episode transcript




Birgitta Boeckeler: Welcome to the Thoughtworks Technology Podcast. My name is Birgitta Boeckeler, and I'm one of your regular podcast co-hosts. I'm hosting this episode today with my colleague, Scott Shaw.


Scott Shaw: Hi. I'm Scott Shaw. I am from Melbourne, Australia.


Birgitta: Today, our two guests are actually also two of the regular hosts of this podcast, Neal Ford and Rebecca Parsons.


Rebecca Parsons: Hello, everyone. This is Rebecca Parsons.


Neal Ford: This is Neal Ford. You're going to hear a lot of familiar voices today if you're used to listening to our podcast.


Birgitta: Yes. We invited Neal and Rebecca today because of one of the books that they've written in the past together with Pat Kua, Building Evolutionary Architectures, and they're actually currently working on a second edition of the book, and that's what we want to talk about today. Maybe we'll just start. For those of our listeners not familiar with the book, can you maybe summarize, what is it about?


Rebecca: Well, when you write a book that is characterizing a new term, first, you have to decide what the term is, and then you have to define it. The term "evolutionary architecture" is trying to capture just what we're doing about responding to changes in the technology landscape. Neal and I have been talking about this for a long time, and the first time I heard Neal discussing this, he called it emergent architecture.


Neal and I had a very robust discussion about why that was a very bad name. The name "evolutionary architecture" captures this notion that there is no such thing as the best architecture across all systems. We are using concepts from evolutionary computation to specify that these are the architectural characteristics that are critical to the success of our system. We are going to ensure that as our systems evolve, the architecture continues to reflect those architectural characteristics that were our objective.


Neal: Yes. The first edition of the book, which came out in 2017, was very focused on that: defining the idea of an evolutionary architecture and defining this concept of fitness functions. We also addressed the question of architectural structure a little bit, but it was more about comparing different architectural styles and how evolvable they were, based on a kind of scorecard we created for evolvability.


Birgitta: I think we'll get back to fitness functions later, for those people who are not familiar with the term yet. Maybe wait a minute or two, then we'll definitely get back to it a lot, right?


Rebecca: Yes. I guess there are three important aspects that we believe are critical, and so we included them in the definition of an evolutionary architecture. It's guided: that's this notion of fitness functions that we'll talk about. But it also supports incremental change. Obviously, we draw happy inspiration from the agile principles and agile software delivery, although I believe this is really more than just an agile approach to architecture, hence the name evolutionary.


Of course, it's across multiple dimensions. The favorite word of an architect or at least one they tend to use the most even if they don't like it is "trade-off." We have multiple dimensions, multiple architectural characteristics that might be important to us. Some of those reinforce each other; some are, in fact, in conflict. We want these fitness functions to address the range of architectural characteristics that might be important for different systems.


Neal: The multiple dimensions also encompass not just software architecture. One of the things we struggled with is that if you talk about evolving a software system, it certainly involves the code and the architecture, but it involves a lot of very important dependencies too, like relational databases, because schema changes are really part of the logic of the system.


You have to think about how to evolve that as well. That was our motivation. The multiple dimensions aspect of this was our motivation in the first book to have a chapter on evolutionary database design to include those considerations along with architecture because we were trying to be very pragmatic in this book and talk about not-- Even though the first edition ended up being fairly abstract, we were trying to ground it in real software systems, not just purely in the code parts of software architecture in lines and boxes.


Scott: One of the things I've found a little confusing is whether architecture is being used as a noun or a verb in the title. I think it's a little bit of both. It's a way of doing things, right? It's an approach to architecture, but some architectures are more evolutionary than others, I believe.


Neal: Yes. That's really reflected in what we eventually realized and teased apart in the second edition of the book, which is the reason we're here. The publisher reached out to us and asked us to do a second edition because there was a fair amount of interest in it. We took this opportunity to lean more heavily into the two aspects Scott is talking about here. In the first book, we were very focused on this seed idea of Rebecca's about applying evolutionary computing fitness functions to evolving architectural characteristics.


As we started talking about that subject a lot, we realized that the things that we're protecting from an evolutionary standpoint heavily overlap with the things that we try to govern all the time as software architects, things like security and metrics and code quality and how things are coupled together in a good way, not a bad way. That's when we realized that this is really two different aspects.


It's really about fitness functions both for evolution and automating architectural governance, which we'll talk a little bit more about in just a second. The other aspect is about how this impacts the structure of architecture. It's the activities of architecture but also how you approach the design part of architecture, and we were much more explicit in separating those two things in the second edition.


Scott: You're coming right out and using the G word there.


Rebecca: Yes, we are. Yes, we are.


Scott: Maybe, is this a traditional governance approach, or is this a different way of thinking about it?


Rebecca: It's definitely not the traditional approach. One of the things that I've been asked, at least when I've been talking about this, is, "Well, you're giving even more power to the evil enterprise architect to strangle me." Unfortunately, no concept, no tool, no nothing can solve for bad behavior; that just isn't going to happen. What we are saying is: what if governance can be automated? First and foremost, this gets back to the definition of a fitness function, and this might be the time to talk about that.


A fitness function is objective, and its most important characteristic is that we will never disagree on its result. "Be maintainable" cannot be a fitness function. "Cyclomatic complexity of less than five" can be a fitness function. We have to get specific. Once you get specific and, in particular, can automate these things, then the governance is automatic.
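The cyclomatic-complexity example can be made concrete. The sketch below is illustrative only (real builds would use an off-the-shelf metrics tool); it approximates complexity by counting branch points with Python's stdlib `ast` module:

```python
# Illustrative fitness function: fail if any function's approximate
# cyclomatic complexity reaches the threshold. Counts branch points
# with the stdlib ast module; real builds would use a metrics tool.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(func):
    """1 + the number of decision points inside the function."""
    return 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(func))

def complexity_fitness(source, threshold=5):
    """Names of functions whose complexity is not 'less than threshold'."""
    tree = ast.parse(source)
    return [n.name for n in ast.walk(tree)
            if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
            and cyclomatic_complexity(n) >= threshold]

code = """
def simple(x):
    return x + 1

def tangled(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                while i > 0:
                    i -= 1
    return x
"""
print(complexity_fitness(code))  # ['tangled']
```

Wired into the build as a failing test, a check like this is exactly the kind of objective result no two people can disagree about.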


You don't have to do code reviews for cyclic dependencies because you know that they can't get through the build. Then what you can focus the actual governance activity on are those edge cases where perhaps you've got two of your fitness functions that in a particular situation are so much in conflict that you can't get both of them to pass. Now you want to have a conversation, a governance conversation, about, "How do we address this? Is there some idea we the development team haven't thought about and maybe the architect can help?"


Maybe we talk about, what are the trade-offs of softening one or the other of those fitness functions to allow them both to pass. Governance turns from a box-ticking activity to one that actually gets into the substance of the conflicts and the challenges that often arise when you are trying to construct a system or work with, in particular, a brownfield system to satisfy some of these architectural characteristics. Your governance conversations are completely different than they were in the past.


Birgitta: Yes. I like this idea. I think we often try to stay under the illusion that all of the characteristics we want to achieve are simultaneously achievable. Then when you have fitness functions that objectively measure two of them and they contradict each other, when they can never be green at the same time, we have to stop with that illusion, right? We actually have to face the trade-off. Yes, I like that.


Neal: In fact, one of the big dysfunctions that we see is that the further up toward enterprise architecture you get, the more architects live in a very strategic world, thinking about long-term technology capabilities, while architects who are shipping code live in a very tactical world, because they have to get code to work in the real, messy world. A lot of these governance frameworks are exactly that: big, giant frameworks with checkpoints for communication between all these layers, but inevitably there's going to be conflict because the strategy and the tactics don't meet up.


There's no real way to reconcile that except frustration and more meetings, and of course everybody loves those kinds of meetings. One of the great lessons we've learned about software is that modern software consists of hundreds of thousands or millions of little moving parts, any of which can change at any time almost freely. We need ways to automate the question: if something changes, has it broken something else? That's one of the great lessons of continuous integration in-


Birgitta: Regression.


Neal: -exactly, the engineering practices that we've had for a long time. This doesn't cover every single governance activity; a lot of them are still human-based. But for the things whose governance you can automate, it frees up that entire cognitive space of not having to do code reviews or checks or review boards, because you've got an objective definition: you know that thing has not started misbehaving in a way that is undesirable to you.


Just like in automating things in the DevOps revolution, automating the simple things allows people to concentrate on more complex things and give more mental space to those things, which is, obviously, better not to do more busy work, which a lot of governance ends up being busy work of, like Rebecca said, checking boxes and chasing down these bureaucratic checks and balances.


Scott: I think it takes the person-to-person conflict out of governance, too, right? You can step back and talk about the metric rather than talking about opinions, which so often lead to resentment and conflict in organizations, I think.


Neal: Well, I often say that enterprise architects and domain architects should be equally unhappy with each other because that implies that neither of them is getting the full thing that they want because the goals are very often in conflict, and they need to be reconciled.


Rebecca: Well, I think another thing that we try to stress with these fitness functions is "Have them based on outcomes." This is the behavior we are trying to achieve, or this is the characteristic that we are looking for, not "Use RabbitMQ in this way to achieve this particular objective," because when you specify an outcome, the person writing that fitness function is communicating to the development team, "This is what I care about."


They probably don't care specifically about which of those functions you're using in RabbitMQ, or they at least shouldn't. This is the characteristic in the communications that we are trying to achieve. That way, the development team can look at that and say, "Okay, this is what I have to do to achieve that behavior," as opposed to saying, "Why in the world is this stupid architect telling me to do this when it makes no sense in my context?"


It again takes it away from this, "Oh, the architects are just being arbitrary," or "The delivery teams are just being renegade," and it grounds it in "Here's an architectural characteristic, behavior, or outcome that the enterprise architect cares about for some strategic reason." It's a way for those architects to communicate, "These are the things I'm worried about; these are the things that are keeping me up at night and that we would like to make sure are handled in all of these different systems."


Where enterprise architecture does tend to go wrong is architects sitting in their little room picking implementations without necessarily understanding the context in which those things are going to be used. Across all of the teams I've gone into, the single greatest source of angst from the delivery teams is "They don't understand my context, and therefore they are making my job unnecessarily difficult." If you specify an outcome, then the team who understands their context can say, "Okay, this is what we have to do to achieve that outcome."


Birgitta: Also, you don't want brittle fitness functions, right? You don't want them so specific that every time you do evolve your architecture, they break, right, and not for the right reason, but for the reason that they were just too specific to the implementation, right?


Rebecca: Absolutely.


Neal: Well, one of the things that-- and this alludes to something Rebecca said earlier-- is that we realize we're giving architects a sharp stick that they can poke developers with. We're encouraging them not to do that. This is not some way to annoy developers. A great metaphor that I found for this, which made its way into the second edition, is a checklist that architects write for developers to make sure important things don't fall through the cracks.


There was a great book that came out a few years ago, The Checklist Manifesto, about surgeons and airline pilots who use checklists, not because they're forgetful, but because when you do really detailed things over and over, things can fall through the cracks. That's what checklists are for, and that's what our view of fitness functions is: a checklist by the architect that developers check off as they go, to make sure they haven't accidentally left a debug port turned on in a container as they deploy it, or accidentally created a coupling point out of expediency, or used some convenient tool that's going to cause later damage in the architecture. It's just checking those things in an automated way.
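A checklist item like the debug-port example can be automated. The sketch below is a hypothetical illustration (the port list and the Dockerfile are invented for the example); it scans a Dockerfile for EXPOSE lines that open well-known debug ports:

```python
# Hypothetical checklist fitness function: flag Dockerfiles that EXPOSE
# well-known debug ports. Port list and Dockerfile are invented examples.
import re

DEBUG_PORTS = {5005, 9229, 5678}  # JVM remote debug, Node inspector, debugpy

def exposed_debug_ports(dockerfile_text):
    ports = set()
    for line in dockerfile_text.splitlines():
        match = re.match(r"\s*EXPOSE\s+(.+)", line, re.IGNORECASE)
        if not match:
            continue
        for token in match.group(1).split():
            number = token.split("/")[0]      # handle "8080/tcp"
            if number.isdigit() and int(number) in DEBUG_PORTS:
                ports.add(int(number))
    return ports

dockerfile = """\
FROM eclipse-temurin:17
EXPOSE 8080
EXPOSE 5005/tcp
CMD ["java", "-jar", "app.jar"]
"""
print(exposed_debug_ports(dockerfile))  # {5005}
```

Run in the deployment pipeline, a check like this fails the build before the image with the open debug port ever ships.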


Scott: How important is it, if you're using this approach, to cover the entire span of quality? Is it possible for things to go in the wrong direction if there's some dimension that you don't have a fitness function for? Is it okay to have just a few?


Rebecca: It's important to identify, "What are the characteristics of the architecture that matter most?" and then you don't really have to worry about the others. I worked on a trading system once, and of course, you hear "trading system," you think high throughput, low latency. They didn't care about that because their transaction load was approximately a hundred a day in their wildest dreams.


What they really cared about was never losing a message. We didn't do those throughput tests and all of that. We did all kinds of testing and fitness functions around the communication system. What do we have in there to ensure that even if we lose the communication channel, the message doesn't get stuck? All of those different things were important. We didn't have any fitness functions around performance simply because it wasn't important to us. It wasn't driving any of our architectural decisions.


Neal: Well, but this also helps get you out of the vague "What would you like in your architecture?" to which the business says, "We want all the things." Okay, we need to narrow it down a little more than all the things; that's the common dysfunction here. Creating fitness functions takes effort, so ask: does this effort yield value? At some point you reach a point where it's like, "Well, I could build a fitness function for this, but the time it's going to take me to create and maintain it is not worth the value I'm going to get out of it."


It really does narrow you to the things that are core, that you really want to govern: security, structure. Some of this depends on the longevity of the piece of software. Do you want this piece of software to last 2 years or 10 years? If it's two years, I'm not going to care that much about internal integrity and those kinds of things. If I really want to build on this as a foundation, a platform for building a bigger and bigger system, I should care a lot more about them. It helps you prioritize what's really important.


Birgitta: To recap, fitness functions are basically this idea that you decide which architectural characteristics are the most important ones and define how you want to measure them. Then you try to find ways to automate that as a test, or maybe sometimes you have a ritual where you check it every two months or something like that, right?


Neal: Well, there's a really nice metaphor that I've been using for this for a long time, and you used the word earlier about regression. If you think about architecture, you can think about the domain that we're writing software about, the motivation for writing a piece of software, and then all those architectural things like performance and scale that are necessary. How do we manage the evolution of the domain to make sure it doesn't regress?


Well, we have unit tests and functional tests and user acceptance testing, and if you're a big enough organization, you have an entire department called QA that's just focused on regressions in your domain. What we need is a similar mechanism for the architectural characteristics, which is often lacking or only there in an ad hoc way. That's what fitness functions really are.


It's unit tests for architectural characteristics, but it's not as simple as unit tests because we're monitoring things or we're looking at communication or performance or throughput or more complicated things, but that's really the metaphor.


Birgitta: That's a challenge that I've seen people have with this term, "fitness function": because it's so broad, it can be many different things, and it's not always something that is automated, right? I think the automated kind is the one most people intuitively understand: "Oh, it's like an automated performance test or something like that," right? Then, why would I care that there's now this word that puts all of those different things in the same box? Why do I need this word for all of those different things?


Rebecca: Well, one of the things that we found is it allows you to start talking about different kinds of architectural characteristics on the same level. Security often comes in with its "thou shalt" book with 87 pages, and then the operations teams come in with their thou-shalts, and "These are the run books I need," and all of that kind of stuff, and of course, both of them must have everything, and they're both of top priority, which of course, is not possible.


When we started talking about fitness functions and putting all of those different operational security, performance, resiliency, all of those things under an umbrella, then you can start to talk about "This is the cost of this fitness function. This is the effort that it will take to achieve and maintain this fitness function. Now, help me understand how valuable this particular characteristic is."


Just like with, again, the domains, you have the customer service people, and you have the product people, and you have all of the different domain requirements that all want to be top-priority in the story meeting, and they have a discussion, and they trade off on the basis of business value, and we do the same things with these architectural characteristics where, okay, security says, "This is the risk that we're running, and this is the exposure that we have if we don't do something about this characteristic."


The operations team might come in and say, "And this is the risk that we run if this kind of failure occurs and we haven't put safeguards in place," and then they can talk about "Okay, well, what's the relative business value?" and decide which one's the higher priority. The language is unified even though some of these are just like automated unit tests and some are manual tests where you might be pulling the plug on your database server to test your failover.


You certainly don't want to do that. You don't want to trigger that in a build. You want to know when that's happening. It's not necessarily that they have to be automated. The more you can automate, the better because if it's automated, you don't have to think about it. The only thing that really matters is it's so precisely defined that if I say, "Yes, it passes," you'll say "Yes, it passes," too. That's the only thing that really matters.


Birgitta: It's a common conceptual approach to structural thinking.


Scott: It's very similar to the concept of service level objections, I think, where you're trying to give the trade-off and put it in the hands of the business. Here are the things you need to think about, but you need to decide which ones are more important.


Birgitta: You just had a Freudian slip. You said service level objections.


Scott: I did!? Well. [crosstalk] Objectives, please. Objectives.


Birgitta: Lots of people object to service levels! Then, what else is different about the second edition? I think one of the things that happened since the first edition is that you collected a lot more concrete examples, right?


Neal: Yes. Rebecca sent out a solicitation to a bunch of our coworkers, and we've been gathering examples along the way. We also changed the structure and were much more explicit about the two facets we were talking about. The first part now focuses on fitness functions, with a whole bunch of examples. In the previous edition, the examples were presented just as they came up, but we have a lot more now.


We have an entire chapter on automating architectural governance, and it starts from the lowest code level and then escalates up through integration architecture, up to enterprise architecture. It starts at the atomic and goes to the macro in terms of examples for things. The second part is really about structure, and that's the evolution of architectural structure.


That's the part that has changed the most, because last time we did a comparison of these different architectural styles and how evolvable they were with a scorecard. After doing all that, we realized that the thing that really mattered on the scorecard, more than anything else, was the coupling aspects of the architecture. We focused a lot more on that, analyzing how things are coupled or wired together.


In doing research for the previous edition and this one, we found a book that came out in 1993 called What Every Programmer Should Know About Object-Oriented Design. This is the book that created the concept of connascence. If you've never heard of connascence, it's a way of describing coupling; there's a website now called connascence.io, and it's really a language for describing how things are coupled together.


One of the observations that that author made was that in distributed architectures, the more you let implementation coupling spread, the worse it is for your architecture. He wrote that, and nobody got it, and then a decade later, Eric Evans came along and wrote Domain-Driven Design, and he talked about bounded context. Basically, what he was talking about is allowing implementation details to spread is damaging to your architecture.


That's exactly what we're saying again in our book. A lot of the coupling analysis that we do in that section is about "How do you prevent implementation details from leaking?" The more they leak, the more brittleness they create, and the harder it is to evolve the pieces, because they're welded together by too much coupling. That's really what the coupling part, the architectural structure part, focuses on.


Then the third part talks about how those things interact with each other. How do you use fitness functions to check the structure and the coupling of your architecture, and what's the synergy between those two ideas?


Birgitta: How to find the leaks.


Neal: Exactly [crosstalk]


Scott: How do you measure coupling? Is that something you can do concretely?


Neal: Well, it depends, of course, like everything in architecture. It's easy to measure coupling in a compiled code base because there are lots of tools for measuring things like efferent coupling; we've known about those for a long time. This is one of the things that I thought would happen more after the first edition but hasn't, and we're making a strong call to action about it in the second edition, because we keep having people look at things like a cyclomatic complexity check, which a lot of metrics tools provide.


They go, "Oh, I want that for my microservices." It's like, "Oh, that'd be awesome, but here's the problem: what are your microservices written in? Is it the same tech stack or different tech stacks? What kind of database are you using? What communication protocols are you using?" There are a million details that go into your microservices architecture. There is no simple metrics tool that you can download, turn a key, and just run on your architecture.


The thing that has puzzled me is that it seems like architects think, "If there's not a turnkey tool that I can download and set four configuration parameters for to get it to work, then I'm not interested." All the information you need is there within your architecture. If you enforce observability on all the services in your microservices architecture, you can, with 10 or 15 lines of Ruby or Python, write some code that looks at all of the logs and tells you exactly how the services are communicating with each other and whether they're cheating on their communication.


In fact, you can do that reactively, "Let's check the log messages from the last 24 hours to see if somebody's cheating when they shouldn't," or proactively with monitors: "Thou shalt not call this service because of security concerns; I'll block that as it tries to happen." The proactive version adds a little more overhead to the architecture but is certain to catch it.
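As a rough illustration of the reactive version, assuming a simplified structured log format and an invented allow-list, a few lines of Python can rebuild the call graph from logs and flag "cheating" calls:

```python
# Sketch: rebuild the service call graph from (simplified) access logs
# and flag calls that are not on a declared allow-list. The log format
# "<caller> -> <callee> <status>" and the allow-list are assumptions.
from collections import defaultdict

ALLOWED = {
    "orders":   {"payments", "inventory"},
    "payments": {"ledger"},
}

def call_graph(log_lines):
    graph = defaultdict(set)
    for line in log_lines:
        caller, _, callee, _ = line.split()
        graph[caller].add(callee)
    return graph

def communication_violations(log_lines):
    return [(caller, callee)
            for caller, callees in call_graph(log_lines).items()
            for callee in callees
            if callee not in ALLOWED.get(caller, set())]

logs = [
    "orders -> payments 200",
    "orders -> inventory 200",
    "payments -> ledger 200",
    "inventory -> ledger 500",  # not on the allow-list: "cheating"
]
print(communication_violations(logs))  # [('inventory', 'ledger')]
```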


It takes just a few lines of code. We've really struggled to get architects to realize that, and it's a thing we're saying in the second edition: look first to see, "Is the information I need there somewhere?" If it is, you can write a little bit of code to aggregate that information and get something really useful out of it. We show pseudo-code for exactly this kind of cycle check for microservices in our book; you just have to fill in the details for your tech stack and your services.
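The cycle check itself can be sketched as a depth-first search over a service dependency map; here the map is hard-coded, standing in for whatever you harvest from your logs or deployment manifests:

```python
# Depth-first search cycle check over a service dependency map. The map
# here is hard-coded; in practice it would be harvested from logs or
# deployment manifests (the tech-stack-specific part).
def find_cycle(deps):
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        for nxt in deps.get(node, ()):
            if nxt in visiting:          # back-edge closes a cycle
                return path + [nxt]
            if nxt not in visited:
                cycle = dfs(nxt, path + [nxt])
                if cycle:
                    return cycle
        visiting.discard(node)
        visited.add(node)
        return None

    for service in deps:
        if service not in visited:
            cycle = dfs(service, [service])
            if cycle:
                return cycle
    return None

services = {
    "orders":   ["payments"],
    "payments": ["ledger"],
    "ledger":   ["orders"],   # closes the cycle
    "catalog":  [],
}
print(find_cycle(services))  # ['orders', 'payments', 'ledger', 'orders']
```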


Scott: This is enabled a lot more by the modern observability concept, isn't it, where you take the entire span of metrics from your architecture?


Neal: Well, in fact, if you use fitness functions to guarantee that everything monitors or logs correctly (you can build a fitness function that says, "Make sure that all these things produce logs in a consistent way"), you have an enormous, rich, queryable set of information. You can learn all kinds of things about the communication in your architecture, your dependencies, and places that are not as resilient as you thought because they're down more often. If the information is there, it is harvestable using some of the tools that are around, and with a little bit of effort, you can create some real value.


Birgitta: Also, the code and the run-time are the ultimate truth, right, not the documentation.


Rebecca: Exactly.


Scott: I wonder, what have you seen since the first edition came out? Have you seen some interesting implementations or any surprises in the way people have applied this concept?


Neal: The thing that has surprised me the most is the lack of innovation I've seen in people really harvesting and taking advantage of the information they have available. When I first joined Thoughtworks, it seemed like every time somebody bumped into a problem, they instantly went out and created an open-source project and solved that problem.


That was the instinctual reaction for every new problem that you encountered. It seems like now, and I don't know why, but everybody looks for "What can I download to solve this problem?" The first instinct is, "Oh, I need to download this and configure it," versus "Oh, I need to build a new one." Of course, in the past, you had to encourage people, "Now go look and see if somebody else has already solved it before you build it."


Now it's like, "Oh, if somebody hasn't built it for me, I'm just done." That's been the thing that's most surprising. I'd actually expected people to build a lot more sophisticated fitness functions after the first edition came out. We have started seeing that: some of the examples we got from some of our colleagues were very clever ways to do things, including some really clever uses of hypothesis-driven development, or hypothesis-driven architecture.


We think something's happening in this architecture, but we're not 100% sure. "Let's set up a fitness function so we can run a proper experiment, get objective measurements for these things, and find out." Teams found out some revelatory things: we set this scaling threshold to this value and keep having resiliency problems; we actually measured it, and, "Oh, it's four times what we estimated it was going to be." Then root cause analysis explained why.


You make a lot of assumptions in architecture that are hard to validate. Using fitness functions as an experimental medium was surprising but really effective. Of course, you leave those fitness functions in place after the experiment's done so you don't have to repeat the experiment in the future.


Rebecca: Another thing I was struck by in the examples was just the breadth of characteristics that people were thinking about: "I would like to have a fitness function around this," some having to do with the API layers, some at the code level, some cross-system fitness functions. It was nice to see people with that expansive view of "What kinds of characteristics can you measure?"


It really demonstrated some thought about the question "What are the things that are going to matter to the success or failure of my system?" because that's really what we want to get to. You prioritize the fitness functions that are going to have the greatest impact on whether that system is going to be a success over whatever period of time you're looking for it to function.


I was very pleased to see that. But I still say one of the cleverest ones we've come up with (although, as I was telling Scott earlier, some other techniques have maybe overtaken this one) was at a client who was very worried about open-source licenses. The lawyer approved all the licenses and then asked the dreaded question, "Well, and all of these open-source projects are going to tell us when they change their license, right?"


We just giggled. What they did, instead of trying something complex like setting up a natural language processor to analyze the license text, was simply to hash all of the open-source licenses. Every time they built, they checked that the hashes hadn't changed. If one had changed, they emailed the lawyer; the lawyer could decide whether or not he still liked the license, and then they would redo the hash.


Incredibly simple. That actually has a lot of broad applicability. If there are configurations or perhaps reference data that you really need to know when it changes, you don't have to do something really complex. You can do something quite simple and just fire off an email because it's not so much that you need to know how to fix the problem. You need the flag that says, "Excuse me, will someone come pay attention to me?"
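That hash-and-compare fitness function is simple enough to sketch in a few lines of Python. The file layout, function names and JSON snapshot format below are illustrative assumptions, not the client's actual implementation:

```python
# Hedged sketch of the license-hashing fitness function: snapshot the
# hashes of approved license files, then flag any file whose text changes.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def record_approved_hashes(license_files: list[Path], store: Path) -> None:
    """Snapshot the current hashes after the lawyer signs off."""
    store.write_text(json.dumps({str(p): sha256_of(p) for p in license_files}))


def changed_licenses(license_files: list[Path], store: Path) -> list[Path]:
    """Return the license files whose text differs from the approved
    snapshot; a non-empty result is the flag to go email the lawyer."""
    approved = json.loads(store.read_text()) if store.exists() else {}
    return [p for p in license_files if approved.get(str(p)) != sha256_of(p)]
```

In a CI build, a non-empty result from `changed_licenses` fails the build or fires off the email; after the lawyer re-approves, `record_approved_hashes` is run again to redo the snapshot.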


So much of what we do in agile software development is about raising that flag when you need somebody to pay attention. I think that one is so clever, so simple, and really has broad applicability.


Birgitta: You don't need the cognitive load of constantly checking the ticker that shows the stock prices; it actually wakes you up when there's a problem?


Rebecca: Exactly.


Neal: So many of these fitness functions really are geared toward "Something unexpected has happened. You should go check." It's not this deep analysis to figure out instantly what happened. It's like, "Oh, something changed that I didn't expect to change," and that's critical because, as I said before, if you have a million moving parts and one of them changes unexpectedly, you want to know as fast as possible because there's no telling what downstream effects that change may have.


A lot of these really are just a "Hey, something surprising happened." There are lots of surprising things that can happen on projects, configurations that change when they shouldn't, things like that, and having little triggers around to watch for them helps. They don't take long to run, either. It's a really fast check, but it gives you confidence, because you know that thing cannot have changed.


Now when you go looking for root cause analysis, you have a lot fewer places to check because a lot of those things are governed, and you know that they've been checked all the time.


Birgitta: Okay. Shall we wrap it up or did you have anything else that you would like people to know about the second edition?


Neal: I think the only thing we haven't touched on that we probably should is the additional author we added to this edition formally. Informally, he was an author before: Pramod Sadalage, our colleague, who is probably very familiar to listeners of our podcast. He's been on several times, and many of his books have been featured here. He's the author of Refactoring Databases, the book he's best known for.


He co-authored with us the chapter on evolutionary database design in the first edition, but we added him as a formal co-author for this edition because we've added more data material, and because data is just more pervasive in software architecture now. We've realized over the last few years, with microservices, that pulling data into the bounded context makes things like transactionality an architectural concern now, not just a data concern.


It complicates things a lot. His contribution is a lot more holistic in this edition, and I think the book is the better for it. It's good to have him as a formal author. That's probably the most important thing we haven't talked about so far.


Scott: Do you have any advice for people that are just getting started with this?


Neal: Absolutely. You don't have to eat the entire elephant to be successful with this. If you find something in your ecosystem that really wants governance and is lacking it, and you figure out a way to automate the governance around it, or not even automate it, just objectively measure it, and you get value from that, you can stop at that point and say, "I'm doing evolutionary architecture."


You don't have to have your entire system protected by a web of interlocking fitness functions. The thing we really try to focus people on is that this is not an ivory-tower exercise. You've got to keep coming back to "Is having this in place adding value to my project and my ecosystem?" because this does add overhead. Architects have to define these things and the objective measures for them.


They have to implement them along with developers. Developers have to suffer what happens when a fitness function breaks because they've done something that was fast but inconvenient long term, and now they have to go back and do it correctly, which is frustrating when there's schedule pressure. There's got to be collaboration between architects and developers so they understand the value of these things: it's not just me poking you with a sharp stick; we're poking everybody with the same sharp stick to make all of us better long term.


Scott: What a concept. Collaboration between architects and developers.


Rebecca: The one thing I would add to that: the whole point of our approach is that you cannot predict where the change is going to come from. Don't try to start with "Well, I know this thing is going to change," because you're probably going to be wrong, and you're probably going to do a lot of work that may not be very helpful. Instead, focus on the places where you're having pain.


Focus on the things that are keeping you up at night. Maybe your company has just announced that its business strategy for the next year is mergers and acquisitions; then I'd take a look at your integration architecture and focus your attention on "What do we need to do to address some of the debt we have there?" Start where you have pain, not where you think change is going to come from, because, unfortunately, we can't predict that anymore, even if we could 20 years ago.


Birgitta: Hashtag premature optimization.


Rebecca: Correct.


Birgitta: When can I buy this new and improved second edition, then?


Neal: We just wrapped up the technical reviews. Everything's on schedule. The goal is to have it out before the end of the year, so early December, which is about the latest you can get a book out in a year. This-


Birgitta: Christmas present.


Neal: -exactly, is a great Christmas present. Nothing says loving your spouse like an O'Reilly book about evolutionary architectures.




Birgitta: Cool. Well, thanks, Neal and Rebecca, for the updates on evolutionary architectures.


Neal: Thanks for having us on the podcast. The visitors' seats are a lot more comfortable here than the hosts' seats... thanks.


Rebecca: Oh, yes. Yes, this is even more fun.




