Brief summary
Thoughtworks Technology Radar Vol.32 was published at the start of April 2025. Featuring 105 blips, it offered a timely snapshot of what's interesting and important in the industry. Through the process of putting it together, we also identified a collection of key themes that speak to the things that shaped our conversations.
This time, there were four: supervised agents in coding assistants, evolving observability, the R in RAG and taming the data frontier. We think they point to some of the key challenges and issues that the industry as a whole is currently grappling with.
To dig deeper and explore what they tell us about software in 2025, regular host Neal Ford takes the guest seat alongside Birgitta Böckeler to talk to Lilly Ryan and Prem Chandrasekaran. They explain how the themes are identified and discuss their wider implications.
Episode transcript
Lilly Ryan: Hello, and welcome to the ThoughtWorks Technology Podcast. I'm Lilly Ryan. I'm one of your regular hosts. Hosting with me today is Prem, another one of our regular hosts. Hello.
Prem Chandrasekaran: Hello, folks. I'm glad to be here. Another really refreshing episode in the wings.
Lilly: Joining us today as guests are Neal Ford and Birgitta Böckeler, who are members of Doppler, the group that puts together the Tech Radar at Thoughtworks twice a year. They're going to discuss with us today some of the themes that came out of the latest Radar. Welcome, Neal. Welcome, Birgitta.
Neal Ford: Thank you. It's great to be here as a guest.
Birgitta Böckeler: Hi, Lilly.
Prem: All right. The Tech Radar is just out, the latest edition, volume 32. Looks like we've got four themes on the Radar. Can you introduce us to what these themes are, Neal?
Neal: Sure. Let me explain first what the themes are on our Radar, because there are two things that show up on the Thoughtworks Technology Radar, which is always available at thoughtworks.com/radar if you haven't seen it before: there are blips, and there are themes. The way the Radar comes about is that it's purely curated from actual project work. Teams at Thoughtworks either get really excited about a technology, or they really dislike a technology, and they nominate it for our Radar.
Twice a year, a group of people called Doppler (Birgitta and I are both part of this group) gathers face to face. We gathered in Bangkok this time to curate all those blips and have a discussion about them: how general they are, how applicable they are to the software development ecosystem generally. We select about 100 of the best ones out of that group and publish that as a Radar twice a year. It is purely curated from actual project work and experience on the ground, which makes this a unique publication in that it's globally curated from real projects.
During the course of that meeting, which is a week-long meeting, we discovered years ago that a variety of topics come up that aren't reflected purely in the blips. The blips are extremely discrete. Because they started out as a written publication, there are only a few sentences for each blip; even though it's moved online, it's still just a paragraph about that particular technology or technique or platform.
We realized that there were a lot of conversations going on that stitched a lot of those blips together in a way that was not expressed in the Radar. We, at that point, started creating themes. For each Radar, we have three to five themes, and the themes are meant to represent the glue within the conversations during the meeting that can't be reflected in the purely discrete blips. Things that came up a lot, including some that ended up not making it onto the Radar for various reasons but that we talked about a lot, become themes.
These are not always things that are immediately reflected on the Radar, but do reflect the overall conversations that we had. In fact, the themes are the only things that this group actually creates organically. Everything else is curated from the ground up. For each one of the Radars, we have three to five themes, and one person takes ownership for each of the themes and shepherds the writing of those things, and they get published alongside each Radar. It is the most ephemeral part of the Radar because it's very much a snapshot of our conversation in time.
Prem: Yes. Thanks for that. That's pretty illuminating in terms of how these themes get formed. Given that there are only three to five themes across, let's say, close to 100 blips on the Radar, how do you even decide? Wouldn't you have a lot of heated debate when you discuss what becomes a theme versus what doesn't?
Neal: Nothing involved in the Radar doesn't involve heated debate. The Radar is made up of heated debate. That is the essence of the Radar. Just like you would in any project activity, we have sticky notes. As the meeting goes on and we notice things coming up in conversation, we write sticky notes and put them on the wall. On Friday morning, always one of the last exercises of the Radar meeting, we look at all the sticky notes and we dot vote them. Each person gets three to five dots, and you go over there and put them on the ones that you think are most prevalent, the ones that are most important to talk about. It's a very democratic affair. Then we curate those.
Sometimes we cheat and merge them together, because we get clusterings of things that are closely related to each other. Then we have a two-hour discussion just coming up with what we agree the themes should be.
Birgitta: We've actually had a new challenge in the past few editions, and in this one as well, where we always try to debate a little bit on the meta level, and that came up in the themes too. We voted on the themes, and it turned out, I think, Neal, that three of the four that we came up with, almost all of them, were AI-related. Then we have this whole meta discussion, because this time we also have the highest proportion of GenAI-related blips on the Radar that we've ever had. Last time it was 30-something percent, and this time it's about 50%, I think.
We were discussing: is this actually a representation of what's happening right now, is it just reflecting what's going on, or is this too much? Are people tired of the hype anyway? Is this just a bubble that a lot of us are in at the moment? I do think that it is reflecting the area in technology where things are changing and moving the most at the moment. That's usually what the Radar is about, because every time we take a fresh snapshot of what we're seeing, and there just doesn't seem to be as much movement in JavaScript frameworks at the moment, there doesn't seem to be as much movement anymore in Kubernetes or in all of these areas.
It does reflect that in a way, but we did have a longer heated discussion about how representative this is. We actually ended up with four themes. Two of those are AI related and we'll talk about those now in this episode.
Neal: We made a conscious choice that only two of the four would be AI-related, that we would only do half of them, which also means, from another meta standpoint, that the two that weren't AI-related had to be really strong to claw their way to the top of the pile of things that are not AI-related.
Lilly: We have four themes. Let's take a look at all of them and talk our way through what they are and what you can look forward to with what's in the Radar as well. We've got supervised agents in coding assistants, evolving observability, R in RAG, and taming the data frontier. I wanted to start off with the supervised agents in coding assistants and ask you, Birgitta, to talk through what this theme is about, because this is something that I know you've spent a lot of time working with.
Birgitta: Yes. I thought for a while about the title as well. I was the person from the group who wrote the final version of this theme's description, and supervised agents in coding assistants is a little clunky. For me, this clunkiness represents this space a lot, where we're still trying to look for the right words. What we're trying to describe with that is that over the past few months, there have been these new features in coding assistants that are often called an agentic mode. It's not an agent that fully autonomously builds a feature or an application for you, but a supervised mode.
You have a chat that you drive as a developer and you prompt and you tell the tool what you want to do, but you still supervise what's going on. You can intervene and say, "Oh no, no, that's the wrong direction. Let's stop and do something else," or, "Do not execute this rm command, please." You can intervene. Whereas tools that are fully autonomous, like Devin, for example, just go off and try to do the whole thing for you without you actually looking at what they do and intervening.
This supervised mode is actually a lot more realistic, at least at the moment; who knows what happens in the next five years. It's a lot more realistic than sending off an AI to do it fully autonomously. This has been a big step change, actually. Let me list a few blips that we have on the Radar of tools that have these features. Cursor has this; there's an open-source tool called Cline that has this, which we're using at some clients; and Windsurf, a relatively new editor, maybe half a year old, by Codeium, has this as well. Those are all IDE-based, and there are a few terminal-based tools too, like Aider, Goose, or, just recently, Claude Code, which was released by Anthropic. That is a terminal-based agentic mode.
Actually, today I wrote a LinkedIn post and I realized, oh, it's like agent-assisted coding. Maybe that rolls off the tongue more easily. I'm constantly evolving the way that I talk about this. It's all moving so fast. GitHub Copilot is working on a feature like that as well, but it's still in preview, still being baked in the oven, let's say. Long story short, what these modes actually do when you drive them from the chat is that they don't just generate code for you, but they can also take actions, which is the loose definition of an agent.
First of all, they can change multiple files for you, whereas previous versions of coding assistants would only change one file in one place. They change multiple files, and they can execute terminal commands. As a developer, you can decide if you want to have control over that and approve each command, or if you want YOLO mode, as Cursor calls it, where you just let the agent execute everything.
This terminal command execution means they can run a test for you, for example, and then automatically see, "Oh, there's an error in that test. Let me look at the stack trace," and immediately go and try to fix it, without me having to copy and paste the error over or tell it to do that. They're also integrated with the IDE in other ways; for example, they can pick up on linting and compile errors that the IDE is surfacing. It's basically a marriage between the large language model, which thinks in tokens, and the IDE, which, to a certain extent, understands the structure of the code.
It can be quite powerful, but it also increases the blast radius, because bigger problems are being solved, so there's a much higher probability for me as a developer to become complacent. There's so much going on that it's very tempting to just commit and not look at the details. Which is not a good idea, because every single time I use these and prepare a commit afterwards, I find something that I want to change to make the code more maintainable, or to fix an issue where the AI didn't understand my requirement, or something like that.
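To make the shape of this supervised loop concrete, here is a minimal sketch in Python. Every name in it (run_llm, the action format, the approval prompt) is hypothetical; real tools such as Cursor, Cline or Aider implement this far more elaborately, but the core idea is the same: the model proposes edits and commands, the developer approves or rejects each step, and command output such as a failing test's stack trace is fed back into the next prompt.

```python
# Minimal sketch of a supervised "agentic" coding loop. All names here
# (run_llm, the action dictionary format) are hypothetical placeholders.
import subprocess

def run_llm(context: str) -> dict:
    """Placeholder for a call to a large language model that returns a
    proposed action, e.g. {'type': 'edit', 'file': ..., 'content': ...},
    {'type': 'shell', 'command': ...} or {'type': 'done'}."""
    raise NotImplementedError

def supervised_agent_loop(task: str, auto_approve: bool = False) -> None:
    context = f"Task: {task}"
    while True:
        action = run_llm(context)
        if action["type"] == "done":
            break
        # Key difference from a fully autonomous agent: the developer can
        # inspect and veto each step (unless running in "YOLO mode").
        if not auto_approve:
            target = action.get("command", action.get("file"))
            if input(f"Approve {action['type']}: {target}? [y/N] ").lower() != "y":
                context += "\nDeveloper rejected the last action."
                continue
        if action["type"] == "shell":
            # Run the command and feed stdout/stderr (e.g. a failing test's
            # stack trace) back into the next prompt.
            result = subprocess.run(action["command"], shell=True,
                                    capture_output=True, text=True)
            context += f"\nCommand output:\n{result.stdout}{result.stderr}"
        elif action["type"] == "edit":
            with open(action["file"], "w") as f:
                f.write(action["content"])
            context += f"\nEdited {action['file']}."
```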
Prem: For those who are interested in this and related topics, we did talk about a variation of this, colloquially known as vibe coding, in the previous episode. If you're interested, you should catch up on that one.
Birgitta: A lot of people now use vibe coding synonymously with this supervised, agentic, agent-assisted coding, or whatever we're going to land on as an industry at some point. But vibe coding is one particular mode in which you can use these tools: a mode where you don't even look at what code was generated, but just accept everything until it works.
Lilly: There are a lot of interesting considerations that come up from all of these tools being used. One of them is the old issue of alert fatigue, or maybe approval fatigue is what we'll call it in this case, where you, as the developer, are continually being asked, "Do you want to approve this?" "Can I run this?" "Can I execute this?" Which is seen as being responsible, having a human in the loop, so to speak. The question of how far you get and how annoyed you get by continually having to say yes or no might be something that we run into here. And if we're working with these agents in these ways, we're not always going to see exactly what is happening all of the time.
That's where I wanted to understand a bit more and maybe segue into the next theme, about observability in LLMs, which is the next theme on our list. While we've got a lot of elements in the stack, this is just one part of the stack where LLMs are being used, at this command-line level to write code. We're also orchestrating others as well. Neal, would you mind speaking a little about the observability elements that we have in our themes here for the Radar?
Prem: I do have a related question. Look, observability is a fairly well-established thing nowadays given the fact that cloud is so ubiquitous, but it keeps evolving. What's driving the new wave of innovation in this space?
Neal: It's a great question. It came up a lot during our conversations for the Radar this time, which is why it ended up as one of our themes. There were two things that really drove a lot of our conversation. One is related to Lilly's question about LLMs and observability: what are these things doing, and how can we get better handles on the black box that is the LLM? Because at the end of the day, as fancy as the LLM's magic tricks are, for many of us, our job is to put that thing into production. Putting it into production means observing things about both its macro behavior, through guardrails and evals, and its capabilities behavior: its scale, its performance, its responsiveness, and those sorts of things.
That's where observability comes in, and how we're evolving our thinking about it with regard to GenAI, because it's a brand-new thing that we need to observe. The other part of this, to Prem's point, is that I think it has finally sunk in that observability is not optional in distributed architectures, because it is just too hard to figure out how it broke and why. It's really the chaos engineering mantra: it's not a question of if it's going to break, it's when it's going to break. Then what are you going to do after that?
One of the things we highlighted as part of our theme is that there are standards now showing up, like OpenTelemetry, that make it easier to standardize across tools and across platforms and get consistent observability across your ecosystem, which is one of those critical building blocks for being successful with a lot of distributed architectures like microservices. That's ubiquitous observability, and the easier that becomes, and the more the industry supports it, the better off we are. Those are the two streams that came out of observability this time around.
Prem: Is there anything unique about observing AI systems compared to traditional applications?
Neal: The non-determinism is interesting from a behavioral standpoint. We're used to things being deterministic, and non-determinism is always an interesting challenge. I think the difference here is trying to get a handle on why it's doing what it's doing from a behavioral standpoint, but then also observing it from a capability standpoint. Anytime I put my architect's hat on, I always try to split things into behavior, which is the domain, what it's doing on behalf of my project, and capabilities, which is what it needs to support in terms of my project.
Observability is very much one of those capabilities things, but we separate that from trying to understand the internals of the thing you're using, of the LLM. We're getting better and better hooks into that, and explainability is one of those things that is gradually arriving, but it's nowhere near as mature as the capabilities observability.
Lilly: I've also seen cases with LLMs that doesn't tend to happen with other applications where depending on what you're actually observing and listening to, if you're actually listening to the output of the LLM itself rather than the system that is running it, you run the chance of having hallucinated logs and hallucinated metrics. That itself is something that we haven't necessarily had to worry about in the past. I think that's another interesting feature of the landscape that we're in right now.
Birgitta: Yes. To mention a few blips in the area of observability for LLMs, or, technically, what we would have to call observability for LLM-backed applications. One of the key components that's different from other applications is evals, or testing. That covers both the model that you're using, if you maybe want to switch between different models, and the prompts that you're using in your application. When I change my prompt, does that actually improve the user experience, or whatever I'm trying to do, or does it not?
That's a very tricky topic in a non-deterministic world. Another slightly more specific area is cost. Observability is about logs, metrics, and traces, but cost management is often related to it, or you could even say part of it. With large language models, that's a huge thing, of course. How many tokens are we using here? When I put a prompt cache in here, will that reduce my token usage and therefore my costs, and so on and so on?
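As an illustration of the kind of evals and cost tracking described here, the following is a small, hedged sketch. The call_model function, the eval case, and the per-token price are all assumptions made for the example, not any specific vendor's API or rates.

```python
# Illustrative only: a tiny eval plus token and cost accounting for an
# LLM-backed application. call_model, the model, and the price are assumed.
from dataclasses import dataclass

@dataclass
class LLMResponse:
    text: str
    prompt_tokens: int
    completion_tokens: int

def call_model(prompt: str) -> LLMResponse:
    """Placeholder for the application's actual LLM call."""
    raise NotImplementedError

# A minimal "eval": fixed inputs with assertions about the answer.
EVAL_CASES = [
    {"prompt": "Which ring is OpenTelemetry in on the Radar?", "must_contain": "Adopt"},
]

ASSUMED_PRICE_PER_1K_TOKENS = 0.002  # illustrative figure, not a real rate

def run_evals() -> None:
    total_tokens = 0
    passed = 0
    for case in EVAL_CASES:
        response = call_model(case["prompt"])
        total_tokens += response.prompt_tokens + response.completion_tokens
        if case["must_contain"].lower() in response.text.lower():
            passed += 1
    cost = total_tokens / 1000 * ASSUMED_PRICE_PER_1K_TOKENS
    print(f"{passed}/{len(EVAL_CASES)} evals passed, "
          f"{total_tokens} tokens used (~${cost:.4f})")
```

Running such a suite before and after a prompt change gives a rough, repeatable signal in an otherwise non-deterministic setting, and the token totals feed directly into cost monitoring.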
A few blips that we had on the Radar of tools in this area are Helicone, Arize Phoenix, Humanloop, and also a particular part of Weights & Biases, which is an established product we've featured before, but they have now started splitting it up into a lot of different areas, which gets us into trouble with our previous blips. We now have Weights & Biases Weave, which is also an observability tool for LLM-backed applications.
That's the LLM world. Then on the other side, where Neal was talking more generally about observability for distributed architectures and so on, we actually put OpenTelemetry into Adopt this time. Adopt is our innermost ring, where we put things where we say, "If you have this problem, you can't go wrong using this, because we've seen it succeed so many times that we think it's a no-brainer." There might be other alternatives, but it's a sensible default. We put OpenTelemetry in Adopt this time because we just see so many of our teams using it and happy with it.
Then, slightly related to that, we talked about a lot of tools that also support OpenTelemetry in that space. Interestingly, we put three tools from Grafana into the Trial ring this time: Alloy, Tempo, and Loki. Things go into the Trial ring when we've seen them used successfully in production at least once. Tempo covers traces, Loki covers logs, and Alloy is an advanced data collector, an OpenTelemetry collector implementation, and all of these have been used by our teams with great satisfaction, let's say. There's still good movement in that space. It seems like OpenTelemetry is really settling in as the standard there, which also helps people switch between different tools.
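For the OpenTelemetry side, a minimal tracing setup with the OpenTelemetry Python SDK looks roughly like the sketch below; the service name and span are illustrative. The point of the standard is that the code creating spans stays the same whether they end up in the console, in Grafana Tempo via Alloy, or in another backend; only the exporter configuration changes.

```python
# Minimal OpenTelemetry tracing setup in Python (assumes the
# opentelemetry-sdk package is installed). Service and span names are
# illustrative examples.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Configure a tracer provider that exports spans to the console; swapping
# the exporter (e.g. for an OTLP exporter) changes the backend, not the
# instrumentation code below.
provider = TracerProvider(resource=Resource.create({"service.name": "demo-service"}))
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("demo")

with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("customer.tier", "trial")
    # ... application work happens here ...
```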
Neal: We're not above a little bit of editorializing in our themes, because we think observability is a really good idea in distributed architectures, so the more we can encourage people, the better. To Prem's earlier point, it has become pretty commonplace, but we still have a lot of clients that aren't doing it to the degree we think they should. Putting it there helps reinforce our view that the ecosystem is building good support for this. Get on board if you haven't yet. It's a good idea.
Prem: Moving on to the next theme, then: the R in RAG, or retrieval-augmented generation, is often overshadowed by the glamor of LLMs. Why is retrieval having a moment right now?
Neal: What's interesting to me about this is that there was so much conversation this time about the R in RAG, but not the A or the G. I think it's going to be interesting. In fact, our first theme, the one Birgitta talked about, agentic coding assistants, also reflected this. That's where the rubber meets the road. You expect a burst of innovation in that space, but there are unexpected bursts of innovation, too. Why the R in RAG and not the G? I think we're going to see in the AI ecosystem these little hot pockets of innovation, where suddenly something comes up and there's a huge amount of innovation in just that part of the ecosystem.
The R in RAG came about this time. The G is just the LLM, of course, so there's not much to add about the other two letters in RAG. But the hotspots, I think, are interesting, and I think we're going to see more and more of them over the next few editions of the Radar. Next Radar it won't be RAG; it'll be some other part of the ecosystem, guardrails or evals or something else, where there's been this burst of innovation.
Birgitta: Yes. The retrieval part is basically the search part. It's a search problem. We're retrieving information to augment a prompt that gets sent to an LLM, which then generates a response. Retrieval, or RAG, is often mentioned in the context of people trying to ground the model, and sometimes it's even claimed that with some form of retrieval-augmented generation you can totally remove hallucinations, which I think is not the case. You can never totally remove them, but you can certainly reduce them, at least for the facts in your particular space.
We had a bunch of proposals about different techniques for how to do this retrieval. There's the typical vector database and similarity search, then more classical search techniques like re-ranking. There was a lot of different stuff in the area of using knowledge graphs and graph databases to build up a knowledge base that can be used to retrieve the right information to add to the prompt, sometimes combined with vector databases as well. There's a lot of this: "Okay, can we introduce additional steps in the retrieval that make it even more likely that it's the right information we're adding to send to the model?"
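A stripped-down sketch of that retrieval step might look like the following. The embed function is a stand-in for any embedding model, and the corpus is an in-memory list rather than a real vector database or knowledge graph; the comment marks where a re-ranking or graph-based step could slot in.

```python
# Sketch of the "R" in RAG under simplifying assumptions: embed() is a
# placeholder for an embedding model, and the corpus is an in-memory list.
import math

def embed(text: str) -> list[float]:
    """Placeholder for an embedding model call."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    query_vec = embed(query)
    scored = sorted(corpus, key=lambda doc: cosine(embed(doc), query_vec), reverse=True)
    # A re-ranking step (classical search, a cross-encoder, a knowledge
    # graph lookup, ...) could be inserted here to refine the top results.
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    context = "\n\n".join(retrieve(query, corpus))
    # The retrieved snippets ground the model; the generation step (the G)
    # then runs over this augmented prompt.
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")
```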
Prem: Like you said, Birgitta, the idea of this R, the retrieval in RAG, is to augment the LLM with enough information that you reduce the possibility of hallucinations. Are there any specific pitfalls that teams fall into? Maybe they underestimate the complexity involved in this R step, or are we looking at other problems?
Neal: It's not so much that teams are finding they need to do this as that the capabilities are appearing really quickly in the ecosystem. Patterns emerge: "Oh, it's hallucinating. We've got to figure out a way to make it stop doing that." "Oh, here's a tool that helps you do that. Here's a technique that helps you do that." The churn is immense in this space right now, because there's so much innovation and there are so many different ways you can combine new technologies together to make new things.
I think it is the richness of the ecosystem that's interesting here versus the way people are using it because as these capabilities appear, people are going to find interesting ways to use them. There seems to be a lot of interesting churn in that part of the ecosystem right now.
Lilly: One of the things I thought would be interesting to call out here is that RAG is one form of using LLMs, but it is enduringly the most popular way of using them, and this is what people seem to really circle around. When people have been trying to find a use case, it tends to be: how can we use an LLM to navigate through the things that we have more effectively and find insights in the things we've already got, rather than coming up with new stuff? This seems to be one of the stronger calls and the stronger use cases, which means that the way you feed it that information, or however you augment the information there, is important.
I don't know if that's worth calling out. Overall, RAG is one of the use cases that people seem to actually have. We've spent a lot of time trying to find use cases, and people have tried a lot of really weird things, but in my view, the focus on RAG is really telling, in the sense of, "Oh, this seems to be where people are actually finding utility more than anything else."
Birgitta: I would rather say that RAG is the thing that you need for 80% of the use cases, or even more; you need some retrieval. In a way, it's also what agents do all the time. When I was talking about the agents in coding assistants: they look up where the right code to change is, that's retrieval. Some of them can now do web searches, that's retrieval. Most of the use cases don't make sense with just an LLM. You have to use RAG on top of it to really make it work for your use case.
Lilly: I suppose that's what I mean. There's always this question of what you could do with an LLM just by itself. There are a lot of things that you can, but people are finding the tools most useful when they're immersed in their own context. I realize that's an extremely high-level thing to pull out. Because there's been so much hubbub in the space in general, the existence of this as a theme in itself points to the fact that one of the things people want to do with it is navigate their own stuff, however it looks, in a smarter way. In some cases, RAG is just glorified search. You look at that, and you're like, "You just need a vector database. You don't need this," but--
Birgitta: Or you might not even need a vector database. You just need a search engine.
Lilly: Yes. In general. I think that, honestly, this is what this speaks to: what's emerging from all the possibilities is really people saying, "We need better search." That's really it. I don't know if that's too trite, but that seems to be what's really coming out of this to me.
Birgitta: Also, because of the things that RAG is unlocking, like these new ways of searching data in a more semantic way, I guess.
Lilly: Yes, exactly.
Birgitta: That has also now led to people doing these use cases where they pull together lots of sources of information in one place, in areas where we didn't do that before because it felt too costly, or the cost-benefit trade-off just didn't work because of data confidentiality and there wasn't enough benefit to justify it. We see that in the software space with products that now pull together data from your wiki, from your issue tracker, from Slack, from your code bases, all in one place, to index it together and take advantage of this new type of search. We could have done that before with the search capabilities we already had, but nobody did it. Now, because we see this new benefit, there's more incentive to try it.
Neal: One last thing about that that is interesting is that vendors always look for the distinction between a fad and a trend. How many people on this call remember Ajax? For a hot minute, Ajax was all the rage in the tech space, and then it was gone because it got eaten by the abstraction layers. It's still there, just nobody talks about it anymore. Microservices was a trend, not a fad. Vendors look deeply at this because you don't want to invest time and resources in building tools for fads, but you do want to be early adopters for trends.
That may be part of what's driving the R in RAG: it's an easy tool category that you can sell things in, whether open-source databases or commercial vector databases. There's a tool solution here. Part of this sudden heightened interest in search may be that people see the trend here; they're reacting to the trend part of this and thinking it's beyond a fad.
Lilly: Precisely, and that's why I'd say this is a theme for us as well, like it points to that trend itself. It's what's coalescing out of the noise.
Neal: If Lilly's statement that we need better search is trite, well, our last theme is just as trite: we need better analytics. That's really what our last theme, taming the data frontier, is about. There were a lot of conversations this time around the tooling, around data mesh and data product thinking. There was strong advocacy within the group to actually put something about data products on the Radar outside of data mesh, which is this idea that you treat your analytical data as if it were its own product, which means it has a lifecycle, it has requirements, it has an independent life, not just a little side project that's going on.
That really fed into the data product thinking and taming the data frontier. There was a lot of discussion about analytics, and, of course, generative AI snuck its way into this theme as well: using some of the generative AI tools to help with analytics and to augment some of your analytics thinking, and some tool support around that as well.
Birgitta: There's definitely a readiness thing here. The more mature you already are with your data platforms, maybe even with existing data products, the easier it is for you to take advantage of that with generative AI: to build a sensible product with generative AI that can actually bring you value, and to build a sensible retrieval that helps your applications. I think the theme was mainly triggered by all the discussions about data products. I think we had at least two or three proposals of tools related to building in more data-product-native ways of dealing with data.
Some of the things that data mesh describes, and that organizations have been trying to implement with traditional tools, have actually not always led to success because, as Zhamak Dehghani, our former colleague who coined the term data mesh, always says, the tooling is actually not ready for this new way of thinking. There's actually a much older podcast episode where Rebecca Parsons and I talked to Zhamak Dehghani and Emily Gorcenski about data mesh, and that conversation really helped me understand this gap between the data mesh and data product concepts and the existing tools.
Zhamak explains it really well in that episode. What we're seeing now is, in particular, in the area of catalogs: products are coming out that are not data catalogs but data product catalogs, which makes a difference when you want to really embrace this concept.
Prem: Thanks, Birgitta, for talking about data products. Now, can we unpack what data products are and what data product thinking is for folks who don't want to go back to that episode or are unable to? Maybe that will be really helpful.
Neal: I'll do a very brief description of this. This is part of data mesh, as Birgitta mentioned, and it's the overall idea of product thinking versus project thinking, in projects generally. A project is ephemeral: you gather a bunch of people, you build something, then they break up and go away. Now you're left with this asset that nobody wants to maintain or touch or enhance. Product thinking is the idea that you should treat it like a product, which means it has a backlog, it has a lifecycle, it has requirements, it has a dedicated team, and that's exactly what data product thinking is.
Analytics should be a living part of your ecosystem, not a one-off thing that you do every once in a while. Data product thinking is applying that same product thinking to your analytical data. It should have a team, it should have a backlog, it should have requirements, it should have all the things that are normally associated with products. That's the short version of it. We see benefit in doing that even if you're not going all the way to something like data mesh, because the concerns are exactly the same as with product thinking on projects.
You build some analytic capabilities, then you leave and go away, but you still need to enhance them and make improvements; there are lifecycle events there as well. Toward what Birgitta was saying, we also saw this intersection with AI. One of the blips we mentioned, Synthesized, is a tool that uses AI to help take production data, mask and consolidate it, and get rid of connections that you don't want: basically, a giant filtering operation so that you can use it in lower environments, like testing and staging, that sort of stuff. There are several blips like that, combining the utilities you need for building these data product ecosystems with AI; like everything is combined with AI these days.
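As a generic illustration of that kind of masking (and explicitly not Synthesized's API), here is a small sketch that hashes identifying columns deterministically, so joins between tables still line up in a test environment while the real values are gone.

```python
# Generic illustration only, not any specific product's API: deterministic
# hashing keeps referential integrity across tables while removing the
# real identifiers before data is copied to a lower environment.
import hashlib
import pandas as pd

def mask_value(value: str, salt: str = "per-environment-secret") -> str:
    return hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]

def mask_customers(df: pd.DataFrame) -> pd.DataFrame:
    masked = df.copy()
    masked["email"] = masked["email"].map(mask_value)               # drop real PII
    masked["customer_id"] = masked["customer_id"].map(mask_value)   # keep joins consistent
    return masked

orders = pd.DataFrame({
    "customer_id": ["c-1", "c-2"],
    "email": ["a@example.com", "b@example.com"],
    "total": [42.0, 13.5],
})
print(mask_customers(orders))
```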
Prem: Awesome. Thank you, Neal. I think that was very insightful. That brings us to the end of this podcast episode. Before we go, what advice would you give our technologists and decision-makers in terms of getting ahead of these trends rather than just reacting to them? Any parting thoughts?
Neal: The first thing I would recommend is that you look at our Radar twice a year. Even those of us who curate the thing are always surprised at all the things that show up on it, because no person can possibly cover the breadth and depth of the technology world. We are not claiming to cover the entire technology landscape; it's just the things that we touch as part of our professional day jobs. Even for those of us who gather to curate this, there are always interesting and fascinating things we didn't know about before.
Particularly with things moving as quickly as they are in the AI space, it's great to have a trusted source that you can gather some info from. I think the Radar always serves particularly well in that case, especially when you've got a super fast-moving target like generative AI is right now.
Birgitta: Yes, and I would always encourage people to maybe download our PDF in whatever reader you're using and just read through the whole thing once, or even skim through it. Specifically, also read the things that are maybe not in your area of what you do day to day. We sometimes hear people say, "Oh, I'm a back-end developer. I only want to see the back-end blips," or, "I'm a data engineer. I only want to see the data engineering blips." For me, as one of the curators in the group, I find it super interesting to hear about the blips in other areas that I'm not as familiar with. I often find, "Oh, I didn't know that was a problem that needed to be solved. That's interesting."
It just helps me with my meta model and with the boxes that I see in the world, and it helps me continuously tune the meta model that is in my head. That really helps me stay on top of the technology landscape in general. I actually find it really interesting. Also, this cross-- yes, what would you call it? Cross-functionality. I feel like it makes me a better technologist. Even though I do not fully understand every one of the blips that we put on the Radar down to the smallest detail, it really helps me tune my understanding.
Lilly: The Tech Radar is out right now. You can go and check it out. Neal, what was the URL again?
Neal: thoughtworks.com/radar, always the most recent version — volume 32 just came out.
Lilly: You can have a look there. See where you can spot our trends. Thank you very much, Neal and Birgitta, for joining me and Prem today for another episode of the Thoughtworks Technology Podcast.
Prem: Thank you, folks.