Brief summary
AI promises to reinvigorate the way we work, and many organizations are already seeing its potential come to life. In this podcast, we explore how we at Thoughtworks are transforming our own operations with AI. This episode is packed with insights for leaders who are embarking on a similar AI journey within their own organization.
Episode highlights
- Jessie shares how Thoughtworks became an early adopter of Google Gemini in 2023, leveraging pilot access to explore generative AI features in Google Workspace, providing feedback, and gaining valuable insights.
- Thoughtworks launched initiatives like the "AI for Software Festival" and generative AI hackathons, empowering teams with training, workshops, and hands-on experimentation to boost AI skills and capacity across IT services.
- Early 2024 experiments centered on software maintenance and support, where generative AI was applied to ticket handling, categorization, and response automation, leading to faster problem-solving.
- Sara shares how successful AI scaling requires close collaboration between business teams, IT, and partners. Weekly check-ins with extended teams help to ensure alignment and compliance while streamlining internal processes.
- Evaluating AI tools involves assessing data security risks in close collaboration with InfoSec and other relevant teams. Tools accessing sensitive data require stricter oversight, while flexible tools can allow for greater experimentation. Begin with controlled scenarios, such as limited data dumps, rather than real-world data. Starting small helps accelerate progress while minimizing risks.
- Effective experimentation requires clear hypotheses and measurable outcomes. Track lead and lag indicators like time savings or productivity gains to determine where to invest or pivot.
- Align AI initiatives with global business priorities and mature technologies. Identify and focus on areas with high potential ROI, such as go-to-market strategies, account planning, or software development and service management.
- Evaluating vendors requires careful consideration, and it's important to review tools at a feature level, not just at a product level. AI capabilities evolve so fast that it is sometimes necessary to hold back a new feature due to data protection concerns or a lack of maturity.
- When scaling AI, one team inspires the next. Amplifying internal success stories inspires other business functions to experiment with their own use cases.
- People at Thoughtworks are naturally curious and take pride in their work, so they want to improve and learn new capabilities. Instead of fear, we're seeing excitement about new AI tools, and we are setting up training programs and campaigns to drive impactful change.
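The ticket-handling experiments and lead/lag indicator tracking mentioned in the highlights above can be sketched in a few lines. This is a hypothetical illustration, not Thoughtworks' actual system: a simple keyword lookup stands in for the LLM classifier so the example stays self-contained, and the category names and metric names are assumptions made for the sketch.

```python
# Illustrative sketch only: routing support tickets to categories and
# tracking simple lead/lag indicators (automation rate, cycle time).
# A real deployment would call an LLM for classification; a keyword
# lookup stands in here so the example is self-contained.
from dataclasses import dataclass
from statistics import mean

# Hypothetical categories and keywords, chosen for illustration.
CATEGORY_KEYWORDS = {
    "access": ["password", "login", "permission"],
    "hardware": ["laptop", "monitor", "keyboard"],
    "software": ["install", "license", "crash"],
}

def categorize(ticket_text: str) -> str:
    """Stand-in for an LLM classifier: match on keywords, else escalate."""
    text = ticket_text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return category
    return "needs_human_review"

@dataclass
class Ticket:
    text: str
    resolution_hours: float  # cycle time per ticket (a lag indicator)

def report(tickets: list[Ticket]) -> dict:
    """Compute the indicators: share of auto-routed tickets, avg cycle time."""
    categories = [categorize(t.text) for t in tickets]
    automated = sum(1 for c in categories if c != "needs_human_review")
    return {
        "automation_rate": automated / len(tickets),
        "avg_cycle_hours": mean(t.resolution_hours for t in tickets),
    }

tickets = [
    Ticket("Cannot login after password reset", 1.5),
    Ticket("Laptop screen flickering", 4.0),
    Ticket("Strange billing discrepancy", 8.0),
]
print(report(tickets))  # 2 of 3 tickets auto-routed; one escalated
```

Tracking these numbers over successive experiment cycles is what lets a team decide, as the episode describes, whether to keep investing or to pivot.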
Transcript
Kimberly Boyd: Welcome to Pragmatism in Practice, a podcast from Thoughtworks where we share stories of practical approaches to becoming a modern digital business. I'm Kimberly Boyd, and I'm joined by Jessie Xia, Global Chief Information Officer at Thoughtworks, and Sara Michelazzo, Head of Innovation and Strategy at Thoughtworks Global IT Services.
We spend a lot of time talking about how we've helped our clients transform their IT operations with AI, but today, we're going to discuss how we've done that in our own organization. Jessie, Sara, welcome to Pragmatism in Practice. Excited to chat about this with you today. Perhaps before we get started, you could both introduce yourselves to the listeners and tell us a little bit about each of your roles at Thoughtworks.
Jessie Xia: Hi, everyone. I'm Jessie Xia, the global CIO at Thoughtworks. My role focuses on enabling our business through technology. Everything from the core systems, infrastructure, to driving innovation with emerging tech like AI. I work closely with our teams globally to ensure our IT strategy not only supports but also accelerates our business goals.
Sara Michelazzo: Hi. I'm Sara. I lead innovation in global IT services here at Thoughtworks. Jess and I work closely on strategic initiatives. Over the last year, I've been leading our internal AI transformation. That's meant rolling out tools like Gemini, supporting experimentation across teams, and setting up pilots like Agentspace to help scale AI adoption across the business.
Kimberly: All right. Lots going on in both your roles, so let's dive right into it. Perhaps we can kick off, and you can tell us what Thoughtworks is doing to transform IT operations with AI. How did you get started?
Jessie: I can talk about that. Around 2023, generative AI became a very popular topic in the market, and we were definitely paying attention. We thought, first, that we needed to build capability, and we started doing a lot more research. The good thing is that our partner, Google, invited us as pilot users for an early version of Google Gemini in Workspace Enterprise, around early 2023.
We were able to experience a lot of generative AI features in Google Workspace and provide feedback to them. That was a really good experience for us and a real privilege to be part of the experiment. Then we started thinking about how we could help people build capability and generate interest across the organization, working with our global technology organization.
We started something called the AI for Software Festival in the second half of that year. Then, within our global IT services function, we thought about running another hackathon. By the way, we actually run hackathons every once in a while, about two or three times a year. So we thought, "Maybe the next one will focus on generative AI."
We held our gen AI hackathon in Q3 2023. Along with a lot of training materials and workshops, it helped boost gen AI capabilities across global IT services. Building capability was one part of the story; after that, we started doing a lot more experiments.
In early 2024, we felt there was one area where we could start experimenting and see results quickly: software maintenance and support. You can really leverage a large language model in software support, for things like how you handle tickets, how you categorize them, how you analyze them, and even how you respond to some tickets using our historical knowledge. It can also help humans solve problems faster.
I won't tell the whole story today, because we have done a lot of experiments, but since early this year, we have been in the stage of scaling AI across the organization.
Kimberly: I know my teams and I are getting the benefit of some of those tools and have really enjoyed digging in and seeing the power of everything they can do. Very excited to have it starting to scale across the organization more broadly. When you kicked all this off a couple of years back, how did you decide who initially needed to be involved to get this AI for IT off the ground?
Sara: It definitely takes a lot of collaboration between the business and IT. The business teams bring the use cases and prioritize them based on their priorities. As IT, we advise and explore the how, running spikes and experiments. As part of the cross-collaborative team, we also bring the vendors and partners in as part of the extended team. For example, they are invited to our weekly check-ins to stay aligned.
One of our key learnings was that the high volume of requests and tools was really stressing our internal processes. Closer collaboration with some of the ops teams, like procurement, data protection, and InfoSec, was also critical. This forced us to improve our internal processes so we can move fast without compromising compliance and security.
Kimberly: A question on the tools, just because I know it's such a fast-evolving space. There's new tools coming out literally every day, and I'm sure you're getting requests from every pocket of the organization asking to trial these. I guess reflections or thoughts on how other organizations, as well as ours, can stay ahead of and stay on top of that because I don't anticipate that the proliferation of tools in this space is going away anytime soon.
Sara: Being curious, being hands-on, and keeping an eye on what existing partners are offering in the market is very critical, and so is having people who are willing to get their hands dirty and try new things. Also, from an IT perspective, we probably have to accept that retooling and consolidation will come later. There is a time to explore, and we don't want to slow down the exploration, but at the same time, we need to be financially responsible.
Jessie: Maybe I can add something here. Earlier, I talked about capability building. That really gave us the capability to evaluate and then decide what is best. Besides that, I just realized I didn't mention one very important thing: we are also pushing ourselves to be a truly AI-first software delivery team, because, as global IT services, we use commercial off-the-shelf products, do some integrations, and then some level of customization.
We also have homegrown tools, and a lot of developers, QAs, and other software delivery roles. The scaling phase is about leveraging what we learned from the pilots to make sure those people really apply the capabilities every day, gain a lot of efficiency, and become more effective and smarter with AI systems.
I think this will really help us make better decisions when we use other tools and also when we build things: what is the right way to build, how much should we build, and how much should we consolidate? Our own capability building gives us really good insights there as well.
Kimberly: Jessie, can you talk a little bit about two dimensions of tool evaluation? What criteria are you taking into consideration when evaluating tools, and what criteria for what you just mentioned, deciding what to consolidate and what to build? What frameworks are you using to make those decisions?
Jessie: One thing I have to mention: as a lot of people know, when you use generative AI tools, you have to think about data protection risks. Definitely, when we choose products, we have to look at the risk. We work very closely with procurement, InfoSec, and data protection to identify potential risks and mitigate them. It also depends on the tool: if a tool requires access to our own data or client data, we'll be very cautious about what kind of risk we can take.
If a tool doesn't need to access a lot of our data or use our data for training, we can be more flexible. On the other side, there's no easy answer about how much to consolidate. Do you use more tools, or tools from one partner? We really have to keep an eye on which business cases we're trying to resolve and keep experimenting, but we have to make conscious decisions. There are many factors to consider, and we work very closely with the business functions to make joint decisions together.
Kimberly: You talked a bit about risk, but I guess, on the flip side of that, too, is experimentation. Sara, you talked about the importance of remaining curious. Have you found, I guess, the right balance between allowing for enough experimentation in the organization while also accounting for risk? I guess any learnings from the past year or two on this journey in this space?
Jessie: Yes, sure. I can talk a little, and I think Sara can add her view. We talk about risk, but the most important thing is what results we are getting [chuckles]. Initially, we focused more on capability building, making sure we are up to date and that we stay advanced in the market as leaders, but ultimately, you have to create results by using those technologies.
Whenever we experiment with anything, we generally identify a hypothesis and some lead and lag indicators for how we'll measure results and success, and then we keep collecting data to see where we are. For example, I mentioned earlier that when we were experimenting with generative AI capabilities to automate and prioritize ticket handling for maintenance and support, we looked at the cycle time for solving tickets, how many tickets could be automated, how many hours we could save, and the trends as well.
While we see good trends, we keep investing and keep going, but if something isn't happening as we expected, we reflect on what we should do differently. That's the results angle. I'll let Sara talk about anything else we should look at when we do the experimentation.
Sara: For the experimentation, the key thing we're doing is starting small. Maybe start with a data dump rather than giving access to real data. That's something else that helps us move faster. Those are probably the top two.
Kimberly: I know you mentioned that we're at the point in time now where we're doing more scaling of AI capability into the organization, but before that, there was probably a lot of experimentation and selection of use cases. I know those can quickly start to form a lengthy list. In marketing, I feel like, every day, I am coming up with new potential use cases for how we can apply AI. I'm curious how you went about identifying and prioritizing the best use cases for AI acceleration for the business.
Sara: There's probably a sweet spot between the global priorities and where AI is mature enough to drive real impact. On the business side in particular, we focused on AI for go-to-market. It started with research with the users to identify where their problems are, and we decided to really improve some of our top workflows, like account planning, meeting preparation, and client proposals. Internally, in global IT services, we doubled down on AI for software development and service management, like support and tickets, because we started small but very quickly saw strong potential for productivity gains and ROI.
Kimberly: Can you share a little bit? I know it's a constantly moving space, but what kind of results have you seen in those spaces so far?
Sara: From a software delivery perspective, we've had some very big improvements in how accurate our estimates are. We're hearing that some work that previously took weeks is now down to days. From a service management perspective, we've reached an impressive level of automation for L1, around 80%. We weren't expecting that immediately, but we've also seen an improvement in customer satisfaction, because automation has improved our response and resolution times by about 20%.
On the sales side, we're still working on it, but we're quite on track with our goal of creating high-quality client proposals in half the time, accelerating the initial phase, which is the most time-consuming: identifying the market intelligence and expert insights to come up with a very compelling proposal tailored to the client.
Kimberly: From weeks to days, and L1 at 80%, that's great, and it sounds like there's a lot more opportunity to do even more. We talked about the crazy amount of tools out there. It's also very important to have the vendors closely involved as you're on this journey to become more AI-led in IT operations and in the business overall. Could you talk a little bit about how you approached vendor selection, especially with newer, perhaps more experimental AI startups in this space?
Jessie: Yes, I can talk about vendor selection. We are open to exploring new vendors, including smaller startups, especially when they bring more innovative capabilities. Some startups move really, really fast. However, we are very intentional about how we evaluate them. If a tool doesn't require us to share internal or client data, we are more flexible, but if data is involved, we expect enterprise-level security and compliance from the start. We care about data protection, and we are even more serious about it when the data is client-related.
With AI, we actually review at a feature level, not just a product level, because sometimes vendors just roll out a new feature, and sometimes that new feature can concern us. Whenever new features roll out, we review them again, because AI capabilities evolve so quickly. There have been cases where we've chosen not to roll out certain features due to concerns around data protection or lack of maturity. Our goal is to enable experimentation while staying responsible and secure.
Kimberly: Have you found a lot of vendors in this space have a pretty thorough approach to data privacy and protection, or is that still a place where they need to grow and mature?
Jessie: The big enterprise solutions, because they have been dealing with large clients for a long time, probably have more mature processes to handle that. They will think about it, but there's always a dilemma between moving fast and taking risks versus being conservative and moving slowly, so it's always a lot of discussion. Smaller startups tend to move very fast, but sometimes we have to really work with them to review deeply whether there are any concerns about data protection or security.
It really depends on the solutions and the companies. It's hard to say which company is doing better and which is not, but we'd suggest to the audience: pay attention at the feature level, not only at the product level.
Kimberly: Yes, I think that's a good callout. When there's so much happening, it's important to look at each feature because it can have significant implications. We talked a bit about how we're on G Suite as an organization and how you had the opportunity to get in on the Gemini platform early on. They've continued to expand what AI tools are available, particularly NotebookLM and Deep Research.
I know we're starting to use and experiment with those more as a business, and I think they're really interesting additions to our toolkit. How do you foresee these changing the way that we work at Thoughtworks?
Sara: NotebookLM and Deep Research are already changing the way we work, at least for the pioneers. I can share a couple of examples of demos I've seen in the last couple of weeks that were mind-blowing. The first one was from the go-to-market team, where they combined Deep Research, Gemini, and NotebookLM to create super high-quality client content in, basically, a click, using some advanced prompting.
That's an incredibly powerful way to use the best of our knowledge. The second case is actually from your team, the research and intelligence team. They used NotebookLM to build an AI-powered research assistant tailored to specific industries. It's so brilliant. It allows every consultant to identify market intel in seconds and better tailor their conversations with clients, backed up by real data from the experts.
Kimberly: I love both those examples, but yes, the second one is near and dear to my heart with our research team. It's been amazing to see really how quickly the organization has jumped on and is actively using these Notebooks. It's a great way to, I guess, disseminate and upskill our talent in specific areas. I'll tell you, this is hot off the presses. I just saw a brand new one yesterday, too, that has been created for our customer feedback, our voice of the customer program. It's a great synthesis and central source to understand what our customers think about us on a number of dimensions. I can't tout the NotebookLMs enough. I've seen great things from them, even early on.
Jessie: It's great to hear.
Sara: What is really cool is that one team inspires the next. Being able to amplify some of those stories really helps other business functions take on some of the good work and try it with their own content or their own use cases. That definitely amplifies the power of those tools, because there are almost endless possibilities when you're just starting.
Kimberly: Absolutely. That's what I love so much about this time right now: someone shares something, and you go, "Oh, that's a really cool idea," and it inspires you to apply similar functionality to a problem in your own space. It's really a compounding effect across the business as a whole. We also talked a bit about the role of applying AI and being AI-first in our software development, and obviously, software development is really core to who we are at Thoughtworks.
I know there's much talk, and there will be continued talk about how AI is seen as a game-changer for development. Jessie, can you talk a little bit about how Thoughtworks is transforming its engineering practices with AI?
Jessie: Yes, in AI-first software development, we're exploring how to deeply integrate AI into the software development lifecycle. We found it's not just a tool assisting you with software development; it's a true collaborator, because generative AI and large language models have been developing so fast. We found both humans and AI become smarter when they collaborate.
We understand there are concerns in the industry about low adoption rates for AI-generated code, as well as concerns about quality and the capability shift for humans. To address them, we ran a pilot for a couple of months, learned a lot, and built our own methodology framework focused on three main things.
One is faster problem-solving. The second is engineering excellence, and the third is AI-augmented workflows. It's not just a tool; it's not just using generative AI as a software delivery tool. You have to do a lot of other things to make sure you really get the benefits of combining AI with your software delivery methodology.
In the recent pilot, we actually saw a three-times productivity boost. Our own experiment showed about 95% code adoption, whereas the average adoption rate in the industry is generally low, so we're happy we got this very high rate. Unit test coverage also jumped from 65% to 96%. In practical terms, what used to take days to estimate or deliver can now be done in just a few hours.
It's a major shift in how we approach software delivery at scale.
Kimberly: Do you have a view of what's next on that journey? I know you're in the scaling phase, so is that-- I guess I'd love to understand, what does that look like for the IT organization and its impact within Thoughtworks as a whole, maybe over the next six months? I know it's such a changing space, so it's hard to look too far ahead.
Jessie: Yes, it will take time to make the change because, like I mentioned earlier, it involves a lot of capability shifting. It's not only using a tool; you have to change the way of working and the mindset, and learn new capabilities to be able to get the benefit from the tool and the methodology. I've also often heard the question in the industry: will humans feel happy about that?
I think the good thing is that, at Thoughtworks, people are naturally curious and take pride in their craft, and people want to improve and learn new capabilities. We haven't seen much fear around AI replacing us. Instead, we see a lot more excitement about learning new capabilities, and we have set up many trainings and programs, campaigns, and role models to help drive the change. Like you mentioned, over the next six months, we hope that by the end of the year we can reach 100% adoption, with our global IT services team fully transformed to AI-first software delivery.
Kimberly: I think you hit on a really important point because we automatically think, "Okay, it's technology," but it's also very much like a people change and a mindset change, like you talked about. That's really the most difficult one and the one that takes the most time. I think that's always an important part of the equation, not to forget.
Jessie: Sure. Recently, we've definitely observed that people who have already adopted the new methodology and are using the tools are very happy about their productivity improvements, and they're happy to share. The team has actually created a regular newsletter to share their wins, which has also helped generate a lot more interest in our team.
Kimberly: We know this isn't just a one-time project. AI is here to stay. It's here to be embedded in our world from here on out. What else is Thoughtworks doing to continue to build momentum and continuously improve our AI practices?
Sara: Like you said earlier, we're very serious about capability building and adoption. We have people driving awareness, running demos, sharing newsletters, and amplifying success stories. This is very critical, especially at a very early stage. We have a hackathon coming up; that's a great example of bringing AI to people who were, let's say, less than early adopters, and building a structure for them to explore in a safe way.
At the same time, we keep the bar very high for our hackathon because our jury is made up of C-level leaders, and that's also a great opportunity to expose new capabilities to our leadership. Since the space is moving so fast, we keep an eye on the market, build very strong relationships with partners, stay updated on the tools, and definitely try out new things safely and responsibly from a security and compliance perspective.
Kimberly: I should have asked this earlier, but is there someone on the team whose primary responsibility is to stay on top of the space in tools, or is that something that is really distributed across the team, and everyone is encouraged and has some responsibility to be understanding tools and experimenting?
Jessie: Maybe I can take this question. You have Sara here, and she's the leader driving that. Definitely, a lot of other people are driving different initiatives, but Sara is the key leader for the AI transformation of the global IT services team, and she also supports the whole Thoughtworks organization in its AI transformation.
Inside our team, we have people in different areas. Like I mentioned earlier, we have people leading the support and maintenance business who use a lot of AI capabilities to automate repetitive tasks and make things easier for humans. Then we can reallocate people to more interesting work; people are very unhappy doing repetitive work anyway, so this really makes their lives easier. Also, in the delivery hub, we have leaders driving the change for AI software delivery and at the enterprise strategy level.
Our head of enterprise architecture also looks at tools together with Sara, but Sara is really at the front end, doing the market research, bringing in a lot of intelligence, and working with different leaders to make sure we make the right decisions.
Kimberly: Reflecting on the journey from 2023, when you started, to where we're at today, what advice would you give to organizations undertaking similar AI acceleration projects?
Jessie: I'm going to repeat what Sara [laughs] talked about earlier. What the public [crosstalk]
Kimberly: That's all right. It's important to drive home core messages. Absolutely.
Jessie: The advice to the audience will be, start small, learn fast, and scale what works. Make sure you involve the right stakeholders early so you can move quickly and responsibly.
Kimberly: Start small, learn, get the right people together at the table. Sounds like a good mix [laughs]. This is such a fast-moving space with so much going on; I try to stay on top of it, but there's just so much to consume. What are you most excited about for the future of AI at Thoughtworks?
Sara: This is a great phase in history, I think, where we are. Actually, a lot of companies are starting to move from the experimentation phase to the real impact. This is happening on two levels. One is the individual level and the other one is the enterprise productivity level. On the individual level, things are moving incredibly fast.
We've seen, for example, Gemini reaching about 50% adoption in just a couple of weeks, and people are genuinely excited about the change and about the time they get back to focus on more strategic tasks or capability building. At the enterprise productivity level, it's really fascinating to see some early ROI on how we can scale the impact for functions or business units. I think there's some real transformation coming, and I'm very excited for the next chapter.
Kimberly: Me too. I'm a believer. I've seen it. I use it personally. I've seen the power of our teams experimenting with it and what really amazing capability they've been able to spin up in a matter of days. I think things we talked about several years ago, "Wouldn't it be nice to do," are quickly becoming a reality. It's really such an exciting time to be a part of this. Really appreciate you both joining me today to talk a little bit more about how we're drinking our own champagne when it comes to AI and adopting and scaling it here at Thoughtworks. Thank you, Jessie, and thank you, Sara.
Jessie: Thank you, Kimberly. It's a pleasure.
Kimberly: Thanks so much for joining us for this episode of Pragmatism in Practice. If you'd like to listen to similar podcasts, please visit us at thoughtworks.com/podcast, or if you enjoyed the show, help spread the word by rating us on your preferred podcast platform.
[music]
[00:34:31] [END OF AUDIO]