Episode transcript
Ken Mugrage: Hello, everybody. Welcome to another episode of the Thoughtworks Technology Podcast. We have a really cool episode today talking about a recent partnership between Mechanical Orchard and Thoughtworks. With me hosting today is Alexey. Alexey, you want to do a quick intro?
Alexey Boas: Hello, everyone. Hi, Ken. I'm Alexey, speaking to you all from São Paulo in Brazil. Great pleasure to be here.
Ken: Thank you. Our special guest is Rob Mee, the CEO of Mechanical Orchard. Rob, do you want to do a quick intro, or maybe I just did?
Rob Mee: Hi. I'm Rob. Good to be here. Thanks for having me.
Ken: Then Rachel Laycock, the global CTO for Thoughtworks.
Rachel Laycock: Hi, everyone.
Ken: What we're going to be talking about today is a partnership that involves some software, a platform, I should say, created by Mechanical Orchard, some consulting from Thoughtworks, and legacy modernization. For frequent listeners, you may remember that back in November, we talked to a couple of people about using generative AI for legacy modernization.
If you haven't heard it, that was the November 28th episode, if you want to look it up. We talked about an internal tool at Thoughtworks called Code Concise that helped us with modernizing different code bases. We've taken that a step further and created this partnership with Mechanical Orchard to incorporate Code Concise into their platform, wrapped up with our consulting services, and so forth.
I guess first off, to Rob: the partnership's origins. What made this attractive? Why did you want to do something like this for your platform?
Rob: We're a two and a half year old startup of about 100 people. We're relatively small. We're tackling really difficult problems here. We're talking about modernizing the core systems of very large organizations. They often run into the tens of millions of lines of code, and they are critical systems.
Even when you use a powerful generative AI platform that assists in modernizing these, there's a lot of curation to be done by humans, and there's a lot of work to really coordinate the interaction with a customer or a client, and to do all the things around the platform that need to be done in order to make one of these migrations possible. We simply don't have the scale.
Furthermore, we've known Thoughtworks for decades and have worked with Thoughtworks many times before, and there's a lot of shared DNA in terms of agile software development and very disciplined engineering practices. If there's any medium to large-sized consulting company that could understand us and work well with us, it's Thoughtworks.
Ken: Rachel, I guess the same question. What made you interested in this partnership?
Rachel: It's a great question. As Rob said, we come from very similar backgrounds with very similar thinking about how software development should be done well. We don't have a simplistic view of it, that we're just generating code, just shoving things into production. We're building complex systems that continuously evolve.
Prior to this role, I actually ran our service line on enterprise modernization, platforms, and cloud. The platform problem is complex: what's the new platform that you want to build? What should it look like? What are the right services and APIs? That's difficult, but mostly solved. The problem of cloud and cloud migration for most software, outside of what we're talking about today, which is the really legacy stuff, is a solved problem.
When we started to get into the big legacy modernizations, mainframes, things like that, we came unstuck. It was a really, really difficult problem to solve, not because it's impossible to solve, but because of the scope and scale, the risk, and the number of failures that have already happened within organizations. I did a bunch of research and found out that 70% of these types of modernizations fail, and people often take a big bang approach.
We were already looking at, how do you take an incremental approach? Once we put that in front of clients, just the cost of the change and the scale involved, most of them got sticker shock, and they wanted some silver bullet, something that made it easier and simpler.
When we started talking to Mechanical Orchard about their Imogen platform, for me personally, it was like, "Ah, finally something that can help along the way, not a silver bullet, but it can actually start solving the problem and speeding up that incremental migration of the jobs that you might have in a mainframe, which is the approach that we would take anyway, but we would be doing it manually without generative AI."
Ken: You mentioned silver bullets. I've seen YouTube. I thought you just pointed generative AI at a system and it was all magic. I guess, to either one of you, what is really involved here? Why does it take a village, if you will?
Rob: If you think back to Fred Brooks and some of the things that he's written in the past, he characterized problems as having essential complexity and accidental or incidental complexity. That's never more true, I think, than in a mainframe migration, by my definition of essential and incidental complexity here.
The essential complexity is taking systems written in an older language, reading and writing to older data sources, databases, running on older machines and older operating systems, and moving those to a more modern language, running on a more modern platform, modern data storage, and so on. That's the essential complexity.
Now, the incidental complexity is a whole other set of concerns. It starts with: how do you get access to the mainframe?
Mainframes are a little bit like a fortress. It's a somewhat closed ecosystem. It's not like open source or the cloud, where everything is available and everyone can do everything. It's not like that at all. It's a closed ecosystem. In fact, around that fortress, most organizations have built a human moat. There are people in networking, compliance, and security, there are DBAs, all of whom, when you say, "Can we get to this data?" have the first reaction of saying, "No."
Then if you say, "How do I do that?" They'll say, "We have an SLA on that request of eight weeks." There's all of this incidental complexity about gaining access to the fortress, being able to infiltrate the fortress, and exfiltrate data and code. There's all of that incidental complexity to add to the quite substantial essential complexity, which is not at all trivial. That's part of the answer.
Rachel: I would add on to that essential complexity. The challenge in the past, and I think everything Rob said is accurate, is the SMEs required to even understand these code bases so that you can start to decide what you migrate, what you modernize. Because of that lack of knowledge, or of access to the system, people have gone with this: just port it to a different language, just take all the existing code and transform it into a new language.
What we ended up with is what's affectionately called JOBOL, where you move something from COBOL to Java and it still looks like COBOL, but it's also not performant. Then you do all the refactoring and everything on the other side. It seems like 80% of the work is done, we've transformed the whole code base, but that last 20%, that last mile, ends up being 80% of the work, and so people are really not getting the results that they want.
The approach we've always taken is, you know what? You probably aren't using 80% of that code base or it's not that valuable and it's not that important. Finding out and identifying what is valuable and what's useful is actually a big part of the problem so that you can then say, "This is the stuff that we need to modernize. This is the stuff that we need to port." You are actually reducing the amount of work that you have to do. It's a very different approach. It's more incremental. It takes a lot of the risk out in terms of getting to value and getting ported over the things that really matter to you.
You mentioned the Code Concise stuff that we started probably 6 to 12 months ago. What we were trying to solve for there is some of that complexity Rob just talked about: getting access to an SME, somebody who even understood what the code base was doing. We realized that generative AI could be really useful for that, being able to get that context using ASTs and knowledge graphs to understand and interrogate the code base.
That's just one part of it. That doesn't tell you how the code base is being used or which parts of the code base are the most used, which I think is what's really interesting about Imogen. I'll let Rob talk about that in more detail, where they're taking the approach of understanding the data flows, which really leans into what's the most important part of the system, and which parts are actually just a waste of time and money to port in the first place.
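To make the knowledge-graph idea concrete, here is a minimal sketch of what "interrogating a code base" can look like, with hypothetical names and data throughout; a real tool like Code Concise would use a proper COBOL parser and an LLM over a much richer graph, not a regex.

```python
# Illustrative sketch only, not Code Concise's actual implementation:
# build a program-level call graph from COBOL text, then query it.
import re
import networkx as nx

def build_call_graph(sources):
    """Extract static CALL 'TARGET' edges from each program's source."""
    graph = nx.DiGraph()
    for program, text in sources.items():
        graph.add_node(program)
        for callee in re.findall(r"CALL\s+'([A-Z0-9-]+)'", text):
            graph.add_edge(program, callee)
    return graph

sources = {
    "PAYROLL": "PROCEDURE DIVISION. CALL 'TAXCALC'. CALL 'PRINTCHK'.",
    "TAXCALC": "PROCEDURE DIVISION. CALL 'RATETBL'.",
}
graph = build_call_graph(sources)
# One question an SME would otherwise answer: what does PAYROLL
# depend on, transitively?
print(nx.descendants(graph, "PAYROLL"))  # {'TAXCALC', 'PRINTCHK', 'RATETBL'}
```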
Alexey: I really like the take on the conversation about incidental complexity and risk reduction, because this goes beyond a lot of the conversations around AI tooling, which talk a lot about productivity and doing things faster. This is actually using those things to reduce risk and possibly, in this case almost certainly, turning something that wasn't possible to do at all, because it carried such high risk, into something that's manageable and that you can actually do.
Rob, maybe for the benefit of our listeners, can you talk a little bit more about the approach behind Imogen? How does it work, how does it help reduce risk, and how do you make sure that you're getting the same behavior that you have on the mainframe when you're doing the migration? If you can tell us more about that, that would be great.
Rob: At a conceptual level, the approach we're taking is essentially treating the running legacy system as its own specification, which sounds a little bit mysterious when you say that. Of course, you have the code, which is a specification for the system in a way. You compile that code, and it runs. What is happening when the system is running, what data is ingested, and what data is output is really a description of the behavior of the system.
If you can capture the inputs and the outputs on a sequence of runs of, for example, a batch job over the course of a month, you really have a detailed characterization of what that system is doing. Furthermore, you have a test suite ready to go. If you look at the case studies of a lot of these migrations, what you'll find in the write-ups is, gosh, it took 70% of our time to do the testing and verification of this system, or the migration went way over time because we had to spend so much time figuring out if the new system did what the old system does, because we had to match that behavior.
We've turned that around by treating the system as its own specification, by capturing the data flows and doing that first. How that plays into Imogen, the platform, is that we automate the capture of those data flows. Once we have that, we can use generative AI along with some interesting parsing techniques to decompose these programs and generate individual chunks of the program. I'll get back to why that's important in a moment. We reconstitute the program and then run it against a series of what we call back tests, which are represented by the captured packets of input and output, to verify that the generated code is correct.
If it fails against those, we can feed the error back into the AI system and have it generate again until it converges on a correct solution. Once that's done, essentially we can say, "It did exactly what the legacy system did over a month of data runs for this particular component. This is ready for production." I think that's the uniqueness.
This goes back to your earlier question, Ken, about, gosh, we'll just point AI at it and it'll all be fine. What the AI can't do yet is handle large programs when converting them. If you give it too much code, it'll say, "Oh, sure, here's your output," and in the middle, it'll say something like "implementation goes here." There's a lot of preparation and massaging of context and templates, and decomposing the program into discrete self-contained units that you can feed to the LLM so that it can produce discrete self-contained units of code that are correct.
Then the problem is stitching it together and running it, but that we can do. If you're able to do that, and you're able to have the feedback from these data flows that you've captured, now you've got a closed system that can loop until it gets it right. You can have relatively idiomatic, but correct, code at the end. You're harnessing what is a very powerful non-deterministic capability in generative AI.
It's very, very powerful, but because it's non-deterministic, you might not get the right answer the first time, and you might not get the same answer every time. That's fine if you can test it against an exhaustive set of deterministic data flows and say you got it right.
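A minimal sketch of the generate-and-verify loop Rob describes, under the assumption that `translate` stands in for a non-deterministic LLM call and `run` executes a candidate translation; the names and structure are illustrative, not Imogen's actual pipeline. Each back test is a captured (input, expected output) packet from a real run of the legacy system.

```python
# Illustrative sketch: loop until the generated chunk reproduces every
# captured data flow, feeding failures back into the model as context.
def converge(chunk_source, back_tests, translate, run, max_attempts=5):
    feedback = ""
    for _ in range(max_attempts):
        candidate = translate(chunk_source, feedback)  # non-deterministic
        failures = []
        for inp, expected in back_tests:
            actual = run(candidate, inp)
            if actual != expected:
                failures.append((inp, expected, actual))
        if not failures:
            # Matches every captured run: behavior replicated.
            return candidate
        inp, expected, actual = failures[0]
        feedback = (f"On input {inp!r} the generated code produced {actual!r}, "
                    f"but the legacy system produced {expected!r}.")
    raise RuntimeError("Did not converge; escalate to a human engineer")
```

The deterministic back tests are what make the model's non-determinism safe to harness: any candidate that passes all of them is, by construction, behaviorally equivalent over the captured runs.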
Rachel: I think this is a really, really important point that Rob is making as well. I know that the whole industry is a little bit obsessed right now with how much more productive we can be when we generate code. I think the tide will change: how we build and modernize software is going to shift toward something that's very focused on really good specification and really good verification. What Mechanical Orchard is doing with Imogen is leaning into that, so that when you do things incrementally, you can get to value incrementally, but you can also trust the output, not just that it does the things that the old system does, but that it does them in a performant way.
The practices we've talked about and espoused for years at Thoughtworks, which we call the sensible defaults: continuous integration, continuous delivery, test-driven development. Those practices are all in play here, and they remove the risk as well as giving you the outcome that you want. I just think that's so exciting.
Then, to Rob's point, because you're following the data flows, you're porting the code that matters. You're not porting all of it. You're porting the stuff that matters, and you're putting time and energy into making that work. Not only are you doing something that you can trust at the end, but by porting the stuff that matters, you make the migration less work because you're not doing everything.
I think that makes a big difference. Then once you've got it into a modern idiomatic code base that's well tested, that you're pushing through a deployment pipeline, then you're in the space of having a code base that you can continuously improve, which again is one of the other core sensible default practices.
What I think we should all be talking about more in the industry is not how much faster can we go. It's like, "Yes, sure. Those things are important, but how do we ensure that we're producing high-quality code that we can trust?" We're building systems that we can trust, and we're building them in a way that we can continuously evolve them because that is the nature of what we're doing with software.
Ken: We have lots of different applications that require modernization, from mainframe banking to the cat meme site written in PHP. What's the profile here? Who is this helping the most? Is there an application profile? Is there an organization profile? Who is this solution targeted at?
Rob: I think what we're really targeting is the really big, really hard applications that are on mainframes. Obviously, Thoughtworks is capable of doing modernization on pretty much anything. Imogen right now is focused on mainframe workloads. There's nothing that is tied specifically to any particular language, or system, or even data source.
The system is agnostic to that, but it has been built around mainframes for now. I think that's really the big target. Large mainframe systems have been resistant to modernization. They're the most mission-critical systems for the largest organizations. We're swinging big, going after the hardest systems. That's what we want to do.
Rachel: I see that as just an opportunity, because if you solve the hardest problem, the other problems should be easier. If we start porting big Java systems or big Python systems, or we start looking at things like SAP or whatever else is out there, it opens up all kinds of possibilities, because the mainframe has been the hardest one for so long, for all the reasons we talked about earlier. Do we understand it? Can we get access to it?
Even when we do, it's just such a different paradigm from the way code is written today. Even if you can get people to understand it, how do you put it into a new paradigm, and how do you make sure that it performs in that new paradigm? Someone who has the skills to understand the old system and then write something in a new way, that person is a unicorn. They probably don't exist. Having something that helps you move things along the way, I think, is a big game-changer for me.
Ken: I hear a lot about incremental and component, and so forth. It reminds me a lot of the discussions about decomposing monoliths, where we were taking a piece out and creating a microservice and replacing it. The monolith and the new services are running side by side during that process. Is that the same case here? Are we taking functionality out of the mainframe, putting it into a different platform, and both are running while we do this? How does the business continue to operate while all this work is going on?
Rob: Yes, it is. We are taking components incrementally, batch job by batch job, transaction by transaction, or screen by screen. We're replicating them and verifying that they're correct. Then we're putting them into production and decommissioning the old pieces. The systems have to be orchestrated so that both are running. There is some connective tissue that you have to build between the two systems, which honestly adds to the complexity of doing a migration like this, but I think it is preferable to doing it in a big bang and trying to cut over at the end.
It's much preferable to actually have some short-term complexity while you are incrementally replacing and keeping both systems running, and occasionally having the capability to fail over to the old system if something goes wrong in the new one. All of that is important to do.
Rachel: Yes. That's a pattern we've been using for probably 10-plus years at this point, that whole strangler fig pattern, having the ability to redirect from one system to another. These are known techniques. That part I'm not even concerned about, because most organizations have done some form of modernization.
It's the mainframe ones that they struggle with the most. Most organizations are familiar with leveraging this type of pattern, especially since the advent of DevOps and the ability to deploy to different environments, turn feature toggles on and off, and things like that. These are all known techniques that you can leverage to do this.
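As a small illustration of the routing idea both speakers describe, here is a sketch, with purely illustrative names, of a toggle-driven dispatch layer that sends migrated jobs to the new system and falls back to the mainframe if the new path fails:

```python
# Illustrative strangler-style routing sketch, not a real implementation.
MIGRATED = {"TAXCALC"}  # feature toggles: jobs already cut over

def run_job(job, payload, new_system, mainframe):
    """Route a job to the new system if migrated, else to the mainframe."""
    if job in MIGRATED:
        try:
            return new_system(job, payload)
        except Exception:
            # Fail over to the legacy path if something goes wrong
            # in the new one.
            return mainframe(job, payload)
    return mainframe(job, payload)
```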
Alexey: I find it very interesting that the approach itself lends itself to working incrementally and separating concerns. I like what you said, Rob, about the system being the specification, because I have seen modernization initiatives in which people were trying to go back to the requirements before the migration, but they didn't have the full picture, and the system was actually the authoritative version of what should be done. After all, it's been doing the right thing for 30 years on a mainframe.
People were going back and trying to understand, but then trying to do two things at the same time. I like the approach of first replicating behavior during the migration. You first make it easy to evolve, and then you start evolving in an easier context. By doing that, you also reduce risk. You separate concerns, and you lend yourself to working more incrementally. I think that's quite an interesting approach.
Rob: You're right. These systems have been doing the right thing for 30 or 40 years. You can trust them, but what you often find is that the humans who operate the systems have lost the context of how the systems do what they do.
Part of what's really enjoyable about doing these modernizations and using the platform is illuminating what was previously obscured: capturing the data flows, seeing what happens, and making corners of the system understandable where no one remembers how they work. We can illuminate it and say, "We know how it does it now," recreate it, and now we have a system that is well tested, maintainable, and understood. That's pretty cool.
Ken: What does that process look like? We've talked a lot about how we can gain the understanding and so forth. I wish it was only the dark corners that were not well understood; I think the core is not well understood in many cases as well. What does this look like? Do you come on first and do some discovery, and then run a Code Concise thing? What does the process look like here?
Rachel: As we said earlier, we had already started exploring with Code Concise because we were working with clients on mainframe systems that they wanted to modernize. The blocker, of course, was that we had to get access to an SME so that we could ask questions about the system: the main users of the system, what it's doing, how it's doing it, all these different things that you want to understand about a code base when you come into it.
The team was like, "I wonder if we could leverage Generative AI to help us understand the system." This is before we were even aware of what Mechanical Orchard is doing or talking to them. Even having conversations with clients of like, "Hey, we've done this with one client where we now can interrogate a mainframe code base, and we need a lot less SME time because we want you to leverage the SMEs to validate it, but we can interrogate the code base."
It's like a light bulb went off in people's heads about the ways they could use that, and the way you could point it at any code base. That's just the understanding-the-code part; that's not even understanding what the main data flows are, what the important parts of the code are, and having that captured as well. Obviously, the first part of it is the discovery and understanding, which normally we've done manually. We would have spoken to SMEs and built out, essentially, a documented set of understanding and requirements of the system so that we could then say, "These are the pieces that we want to modernize."
Then we take the typical Thoughtworks approach of understanding what are the domains, what are the different bounded contexts, what are the different services that we want to build. What are the different modular components? Where do we begin? What are the platforms that they want to build off the back of that? All the kinds of things that are more like the forward engineering of what's the future going to look like?
Now that we introduce Imogen into the picture, first of all, we're still going to leverage Code Concise and understand the code. At the same time, Imogen is going to be understanding the data flows and running against the data flows. Then we'll have a map of both sides of that picture, the code and the actual data flows. Then it's the same as we would always do.
We're into the forward engineering part. Thankfully, with Imogen, we've now actually forward engineered a bunch of it automatically, and we've got to a place where we've got the verification in place and new code in place. Then for many organizations, it wasn't just about porting the code; it was, "We want to be able to do X, Y, Z now that we've got the code in a modern code base." We've got more engineering that we can do off the back of that.
As Rob pointed out, as we've talked about, we do that incrementally. It's not that we're doing the whole thing at once. It's job by job. That's really the process. We want to get as soon as possible to a point where we can say, "This new job is in production, and we've turned the old one off," and then we keep going. These things can run in parallel. It's not like we can only work on one job at a time. That's, in my mental model, how I've seen these things working.
Rob: To dive a little bit more into what it looks like in an engagement, how does it unfold? Even that we try to do incrementally, by starting out with what we call a minimum viable modernization: let's take a piece of this, a cluster of jobs, say, with their data sources, and let's run the entire platform in a test or staging environment and do a modernization without yet putting it into production.
That way you knock out a lot of the risk in the essential complexity, deferring a bit of the incidental complexity to the next phase. We'll do a short phase of 8 to 12 weeks, something like that. We've done all kinds of planning and debating with Thoughtworks about how this unfolds in a typical way.
There are a lot of different ways to do it, but we're always trying to isolate complexity, solve it, and make progress so that we can eliminate risk. It's really helpful to understand where those lines are, and what is essential and what is incidental, so you can focus on the most important stuff.
Rachel: I think the critical thing is breaking it down. As Rob said, we've gone back and forth. We've spent six weeks on this piece, eight weeks on this piece. That's the kind of timelines we think about. We're not going to spend a year in analysis. It's not that. We're talking one to two months understanding the system and breaking it down and even plotting out a roadmap of which systems we're going to port and what our rough estimate is of what that will take, but you'll get incremental value along the way.
Obviously, we'll go into testing environments before we go into production, but we'll get to a place where there's incremental value and we're able to switch things off along the way. In the first year, you could expect a bunch of stuff to have already been modernized or migrated. Then how many streams we work through is really however many teams you can get on the ground and however many parts of the system you can modernize in parallel.
Alexey: Rachel, you mentioned a number of engineering practices that stay. They're still relevant; we still need to do them. The incremental approach is also something we've been doing in the software industry for a while, and it's still quite relevant. At the same time, many things change in the way we can approach these modernizations and application development.
I don't want to ask you the crystal ball question, but how do you see the trends in the future? How can this modernization evolve as the tools evolve as well? What are some of the things you feel we could possibly expect to stay or to change? That would be interesting to hear.
Rachel: I would say the predictions of the death of the developer, of no longer needing a developer, are greatly overestimated at the moment. I hope in listening to this, people recognize this isn't just point and click. It's not that we come in, point Imogen at the system, press the buttons, and out pops the modern system. No. That's not the case.
You still need software developers who really understand the system, and all the business folks around that, whether they're analysts or product owners or whatever, who really understand the system, to make sure that we're doing the right thing. I think the change in how we build software, how we modernize it, and how we run it is that we've moved up a layer of abstraction.
That's what these tools are enabling us to do. They're creating, sure, some efficiencies in what we do, but the fundamentals of what we're doing with software, which is building complex systems that need to continuously evolve and need to be out in production and dealing with whatever happens in the wild, that hasn't changed. The focus on how can we do things faster is not necessarily the right focus.
I understand why people are concerned with that, because time is money ultimately, but it's missing the point of what's involved in building complex software. There are still going to be a lot of humans with deep expertise and knowledge required to make sure that we've specified this in the right way, we've verified it in the right way, and the architecture and everything else meets the requirements of what we're trying to do once it goes into production.
I haven't seen any tools that can magically do any of that yet, which is why I say I think the death of the developer, in terms of their role, is a little overestimated at the moment. I always go back to that Bill Gates quote of like, "We overestimate what we can do in 2 years and underestimate what we can do in 10." I don't know where we'll be in 10 years, but I think in 2 years, a lot of this stuff will become the way that we build software, in the same way that agile eventually became the way that we build software.
We will stop talking about it the way we're talking about it now, but the roles, what we all do day to day when we build software, are going to change and adjust because we have different tools and technologies at our fingertips.
Alexey: It's interesting. One thing to keep in mind is that, when we look back at what happened in the technology space, we started developing more complex systems when we had new layers of abstraction and new capabilities. Right now, people are thinking about applying efficiency to the current state of development, but the bar is going to go up. We're going to need to develop more complex, more sophisticated things, faster. That will also require different capabilities. That's interesting.
Rachel: We were just talking about this today on another call, because this efficiency question keeps coming up over and over. It's like, "How will we measure it?" Will we take the story points or the function points that we use and say, "Now we expect it to take this much less time" to meet that three-point story?
The problem is that as the tools become adopted and that becomes the default way of working, it gets incorporated into the estimation. That metric, comparing the old way of not leveraging these tools with the new way, just becomes meaningless. I think that's where we'll get to. For me, the hope is that people stop trying to do big bang modernizations, because they're just so fraught, they're so risky, and they fail so often.
I know the deterrent for the incremental approach, which we've been pushing and driving for a long time, is that people are like, "That's going to take years and years and years." If we have tools that make that timeline shorter, so that people start taking this more incremental approach, that also changes how we think about software, how we build it, how we modernize it, and how we run it.
Rob: With these modernizations, the way they've often been done in the past, with a big bang, or with an emulation approach like COBOL in the cloud, or a transpiler giving you so-called JOBOL, the outcomes have not necessarily been ideal. You may end up with a very large code base that isn't well understood, isn't well tested, isn't easily maintained, and perhaps doesn't even run very well in the cloud because it was architected for something entirely different.
First and foremost, we want to get a better outcome. The next thing to tackle is, as you say, Rachel, getting faster. That's where we are now: working on making faster progress.
Rachel: We've been working on the better outcome, but it takes too long. There are only so many bodies you can throw at it; you've got to go back to The Mythical Man-Month. There's only so much you can parallelize. I think the really important point is that we're trying to get to the outcome. It's not about writing code faster. It's about getting to the outcome, but doing it efficiently actually matters because it costs money.
Rob: Back to the crystal ball question: AI is not a panacea for every software development problem, and I think the imminent death of the developer is overhyped. Going back to what I was saying earlier about having to decompose large programs for language models to understand them and generate a replacement, we're not talking about huge programs here.
You get up to several thousand lines of code, and the models start having problems translating them from one language to another in a consistent way. At Mechanical Orchard, we're definitely betting on generative AI and on the models continuing to improve. They're incredibly powerful, but they're not magical yet.
My advice to developers is to embrace this capability and use it. Don't resist it or reject it because it doesn't do everything right. It will help you. I don't think there's really much argument now that the latest models can help you be a better developer. Don't be like the assembly language programmers in the '60s who refused to move to COBOL or Fortran or higher-level languages because they couldn't see what value was in the register, or they were worried that the performance would be subpar.
The higher-level abstraction of those languages really helped people's productivity. The folks who wouldn't move probably found themselves retired prematurely. It's worth embracing it, but also understanding its limitations. Look at it as another tool, even though this is a bit of a paradigm shift and it's not just another coding tool. This is pretty significant stuff. You should embrace it.
Ken: I want to thank you both for your time, and Alexey as well, of course. It's really exciting, because what I see out of this, when I hear about the incremental approach, and Rachel touched on the sensible defaults and things like that, is that we're able to apply the practices we've been applying elsewhere for many years to a mainframe, so that we get the incremental approach and the incremental value and so forth.
Faster isn't just writing more code in less time, it's getting the right things built. I'm very excited about that. Again, thank you very much. To our listeners, we'll add links to more information in the transcript. Thanks a lot.
Rachel: Thank you for having us.
Rob: Thank you.