Brief summary
Google I/O 2025 took place in May. It's always a great opportunity to find out how Google is trying to shape the industry agenda, but this year the predominance of Gemini meant the event was a chance to get a better look at how Google will play its hand in the AI market in the months to come.
To dissect the headlines from this year's Google I/O and explore what we can learn about Google's strategic focus — and how the company is thinking about AI — host Ken Mugrage is joined by Andy Yates on the Technology Podcast. As Head of Ecosystems Development at Thoughtworks, Andy plays an important role in helping the organization and its clients understand, analyze and engage with the major platforms and vendors.
This edition of Google I/O, he explains, was significant and particularly useful for helping us understand how the world is going to be consuming AI products and services as the technology becomes more and more embedded in the mainstream.
Episode transcript
Ken Mugrage: Hi, everybody, and welcome to another episode of the Thoughtworks Technology Podcast. Special guest here today, to me at least, I want everybody to meet Andy Yates. Andy and I worked together in my early days at Thoughtworks, 15 or 16 years ago. I think he's been here a little longer than me, though. Andy, go ahead and introduce yourself, if you would.
Andy Yates: Hey, Ken. It's good to be here. Thanks for having me on the show.
As you said, my name is Andy Yates. I've been here for quite some time, about 20 years now. I am part of our global partnerships team, where I head up what we call ecosystem development. I also have a seat within the global technology team, supporting Rachel Laycock, our CTO. A big part of that role is engaging with our larger partners, particularly the cloud service providers like AWS, Microsoft Azure, and Google Cloud, deepening our understanding of their respective offerings, understanding what the strengths of those different offerings are, identifying where those things are most applicable for our clients and the work that we do with them.
Ken: Cool. Last week, there was a big Google event, the Google I/O. I'm sure folks have seen press about this announcement and so forth. I don't think we're going to go really into depth into what every announcement is, but I would like to get your take, because part of your background I know when we worked together was with some of our technology products. I know you worked in our internal IT operations for quite some time. Really, how do these impact our end users, our practitioners, how do these impact the organization?
First off, one thing that was funny, because you and I were talking about this earlier, and I've seen other things, where they go to the Google I/O, and everything's Gemini. It's Gemini this and Gemini that, and where's the Gemini car and airplane and what have you. I went to the Microsoft event about six months ago, and everybody said the same thing about Copilot. It's really hard for people to understand which of these are products and which are features. What's your take on that?
Andy: There were a lot of announcements at this event. There were a lot of things where-- the overall impression I think people had was that Google's really hitting their stride. I think that part of that was this idea of Gemini at the core. Gemini is the center of what they've been announcing, and everything spans out from that. I have heard lots of people being like, "Oh, yes, there's so many Gemini things, and I can't remember which one's which, and I'm not quite sure. Is this a product? Is it not a product? Is it an application?"
I also remember when we had Duet, and Bard, and Vertex, and Palm, and all the different names for the same sorts of things. Everyone had the same reaction: "I can't remember which one's which." I think it's a tricky problem. I think this is one of the things that Google is making good steps with, in terms of making the framework for understanding what is a product and what is a building block more clear. Naming things is famously a hard problem in computer science, and it's a hard problem here too, I think.
Ken: What's your take? Just for the rest of the next half hour with our listeners here, what is Gemini?
Andy: That's a tricky way of putting it. I think, for me, Gemini is the wrapper around all of these things. I think it's always going to be qualified. You're going to have Gemini, the assistant, the Gemini app. I think that then you're going to see Gemini embedded in experiences, and those experiences then will have their own names. Some of those will be, again, named after Gemini. Some of those, as they get further out, they become more standalone experience, like Search or Flow, as an example of the thing that was announced. Those then become their own named things. That separate naming shows that it's more of a product. I think Gemini is more of the building block that sits in the middle of these things.
Ken: Great. Were there any particular things that really jumped out at you, whether applications or use cases?
Andy: There were a lot of announcements. I think the theme, actually, from research to reality, again reflects that sense of people working on things and then bringing them into products. Google really hitting their stride as an overall theme is something that many people in the press and elsewhere have been talking about. For me, there are a couple of places worth digging into. The increasing amount of Gemini that shows up in the Search application is one thing we should probably talk about. Then there are some of the productized visions, whether that's Flow and how that builds Veo 3 into a product, Gemini in Android Auto, or AI mode in the shopping experience. Those things, I think, are super interesting just to show how things moved from a research project to an actual product.
Ken: I don't know if this came up at the event. I'll be honest, it just popped into my head. We talk about Gemini being embedded in everything. I was having a discussion with our web team, and we've seen this in other places, where a lot of their traffic used to come from individuals going to typical old-school Google, if you will, then coming to the website and reading things there.
Whereas now, if someone says, "Hey, tell me about Thoughtworks," instead of Gemini giving a link to Thoughtworks, it does a summary. What's the impact there on the people making e-commerce stores and making content? I don't know if it was announced at Google I/O, but I saw an ad the other day saying that Google has announced the ability to check out from search. Shoppers never actually come to the e-commerce site. What's the effect on the industry as a whole if Gemini's doing everything for you, whether you like it or not?
Andy: This is the super interesting thing here. Actually, they announced AI mode within Search, and within AI mode there's also a shopping experience where you've got a virtual try-on of clothes, and I have a lot of questions about that. [laughs] I think it's a super interesting thing. The first question is, what's the fidelity of the experience? How true to life is that trying on of clothes? Is it actually a substitute for trying clothes on? Is that going to be a good thing?
A lot of trying clothes on is figuring out the fit. How is the item made? How does it drape? How does it fall? How does it accentuate or hide the way that I look? What are the material properties? The announcement is that Gemini has now modeled those material properties and can adapt the way it draws the image, so you get a more realistic sense of how things would look when you try them on.
I think that then this ends up, though, in this really interesting place. Now, as a retailer, I am incentivized to give all that data about the materials I use, the way I've stitched things, the way I've made things, to Google, into the shopping graph, so that it can give the best answers to the customer, and the customer can understand what the thing is they're actually buying. This disintermediates the brand.
This means that the thing my brand used to signify, which was a collection of the quality, the material choices, and the way I make clothes, is no longer necessary to tell the customer about the quality of the thing they're buying. Is it the right thing for them? On the face of it, I'm like, "Well, maybe we don't need brands." From a customer's perspective, that's a good thing. I don't need to trust a brand and then guess; I actually know for real whether this is the thing that I want to be buying.
That also is only half the story because brand isn't just about quality and the thing that I'm buying, brand is also about the messages and the things that I share with other people. I select a Nike thing because I like and feel affiliated with that Just Do It campaign and want to display that to other people. If Google takes that ability away, because I can't search for how is this clothing going to make me feel and what's it going to say about me in quite the same way, then I think we end up in a bit of an interesting place. There's a little bit of a battle there.
It's the new SEO. How does a brand get past the things that Google is looking for to also show those other things that it's wanting to associate with its brand? I don't know that there's an end result to that. I just think it's a really interesting problem back and forth.
Ken: It's interesting too, because they say everything repeats. When I had a web development company about 100 years ago, our tagline was "perception is reality." It's like, okay, small vendor, you can look like a big vendor, because it's about the quality of your website and how professional you come across, those sorts of things. That changed a little bit with things like supply chain automation, so I can get it to your home in a day or what have you. I live in a metropolitan area, and there are many, many things I can have off Amazon in a few hours.
There are things that the big players can do that the little ones can't. Could this help with that? If there's an AI experience and I can see the thing in my room... I think clothing specifically is a pretty hard thing, but take augmented reality: we recently wanted a new planter for our living room. The ability to take that picture and combine it with some of the stuff at Google, the automatic image generation, that sort of thing: is that an equalizer?
Andy: As a consumer, I think that on the face of it it feels like it should be. It feels like I should be able to find the thing that I'm looking for at the quality and the materials and the look, and I've got more information to make that decision. I don't know whether that actually is the way that things result though. If you look at any number of shopping sites now, they're filled with way too many choices with very little to differentiate them. Even though there's lots and lots of information about them, it's super hard to figure out what's the thing that you want.
More because the vendors, the sellers have become wise, or some of the sellers have become wise to the thing that the search engine is looking for, and then they just optimize for that. You end up with a situation where the people who are better at optimizing are the ones who show up in front of your face or show up in your living room, rather than the ones who are building the things that you want. It's a game of cat and mouse. It's the SEO battles all over again. It feels like we should be trying for this, but it also feels like these are hard things to really achieve.
Ken: Back to the event and some of the announcements and so forth. One thing that all the big vendors, and if I'm honest, Google especially, have been accused of is perpetual betas. You and I worked together on Thoughtworks' Mingle many, many years ago. We had based something on one of their chat products, one of the group messaging things, and then the product went away. [laughs] What is this process I heard you talking about earlier, not in this recording, about graduation and taking products from idea to when they become a product? What is Google doing to make people more confident that these announcements are things they can include in their businesses?
Andy: I mentioned briefly AI mode as part of Search. I think this is the place where it's easiest to see this. Search is a mature product; if you will, it's the mature product. AI mode is a set of experimental features that are one click away, things that people can explore and play with. Then the Gemini app, to me, feels like building blocks that are, again, a step further, closer to the research side of things.
Then there's what Google calls graduation; I think they were using the term themselves, and certainly I've heard others talking about it. Things go from releasing in the Gemini app, for people to play around with and see how it works for them, to moving into AI mode, which is a little bit more polished, and then ultimately moving into the search app itself. That path feels very similar to the early perpetual beta that Google pioneered, in some sense.
It feels like they're doing that not just because that's how they want to release products, but also as a demonstration to other organizations that this way of releasing AI products in particular is a good way to build out the experience and to guide your customers or users along that path. A lot of these tools really do need to be tested in reality, at scale, for us to understand where their maximal use is. It's only once you've done that testing that you can start to move them up into richer experiences for a broader audience.
Ken: In that vein, you used to work for our tech ops, where we roll out tools to all Thoughtworkers, so 10,000 people are going to use some of these tools. If they go away, the rollouts, change management, that kind of stuff, that's not easy. You get it out and say, "Hey, all you folks in my organization, here's this new thing. Make use of it." Then it changes direction, or it pivots, or it gets canceled, or what have you. What would you, from your previous role, recommend for our listeners who are in those kinds of roles? When do you jump on these things? Do you want to be an early adopter? Do you pick and choose? What's your thought process there?
Andy: Historically, we've always been early adopters. Even there, there are different pieces to that. Some of that is a very small set of people who get very early access, just to put a tool through its paces or to stretch an application to see how it may fit. Quite quickly, we'll then want to move to a broader group within our organization who are trusted testers, in Google's terms. We would roll something out to that set of people to try it and to understand it. Being pretty clear, signposting very clearly: "Hey, we're trying this thing out. It may go away; it may not be the forever tool for us. We think there's enough here that's worth trying, and we think there's enough here that we can provide feedback to the people building this tool," whether that's an internal tool or a vendor we're working with.
Really, as we're going through those trusted tester processes, the point is that we want to make sure we are actively gathering feedback and steering, to the extent we can, the roadmap and the development of that tool towards the use cases we have. We do that because we believe that we are reasonably cutting-edge, that we understand how software is built, and that we give good feedback.
I think those are investments that you want to make, is to make sure that you are that kind of organization that is not just saying, "Make it green and two pixels to the left," but actually talking about the use cases you have and the ways that the tool you're using is trying to solve your problems or aiming to solve your problems. Then, as you keep going, you want to then start to roll that out to the broader audience. Again, it's a lot about communications, it's a lot about making sure that people understand where you are with the process of a tool.
Again, it's super hard to get the balance right. If you are too cautious as you roll something out, you'll get a lot of people sitting back and being like, "Well, I'm going to pass. I'm not going to put my all into this." If you're too all in, then everyone's clamoring to use it, and then you're stuck potentially with a tool that isn't serving you well. That's less often the case. If everyone's clamoring for something, it's probably serving them well, but you've got to make sure that all of the voices across an organization are heard for those kinds of things.
Once you get to that point, again, it's about making sure you've got a dialogue in place with the tools vendors, making sure that part of your choice of that tool is not just, "Does this solve my problem right here, right now?" but also, "Are we going to work together to keep solving my problems as we go forward into the future?"
Ken: Let's jump to a couple of the productized experiences that were at the event. You talked a little bit about AI mode already, and of course we've seen the take-off of image generation. Just a practical example: my partner wanted me to edit a photo for her in Photoshop, because I'm not good in Photoshop, but I'm a little better than she is. I hadn't gotten to it. She instead uploaded it to Gemini and said, "Hey, you try," and it did a great job. It was great. What's next? The video? I know you were talking about Veo 3 a little bit. What were your impressions there?
Andy: There's a huge amount of excitement around Veo 3. Looking at my LinkedIn feed at the moment, there are lots of people posting short commentary with very realistic-looking videos in various scenarios. What would this look like if a talk show host said it, or if it was said in a horror film? A lot of it actually is very self-referential at the moment, but you can see how this is being very rapidly picked up and is something that people are pretty interested in, because it is now combining voice with video. That seems to be a step change in how people see and appreciate these videos. It's a bit hard for me to comment because I'm here in the UK, and it isn't rolled out here. I had a play around, or tried to play with it, at the weekend.
That's one of the things that I think is a bit of a challenge with a lot of these tools: there's a regulatory environment that means that, while something is announced and everyone's very excited by it, it isn't quite rolled out evenly in all the different places. Still, yes, there is a lot of excitement, and that's really cool. When we talked about the productized experience: Veo is the model, and Flow is the product version of this. It's an end-to-end tool for building short movies, ultimately. Again, I'm not able to access it, so it's a bit hard to comment.
I think the positioning's really interesting. A lot of people were saying there's a pro license and an ultra license for all of Gemini, but really it surrounds Flow as the core piece of this. You're looking at $3,000 a year, which is a lot of money if you compare it to a Google Workspace license. If you compare it to the equipment you might buy as an independent movie maker, however, a drone can cost in that order, a camera can cost in that order. I think it's no surprise that a lot of the shots that they were using to demonstrate how Flow could be used were drone shots. I see that's the comparison that they're looking to make, and it looks super interesting. I'm quite keen to get my hands on it.
Ken: I have to admit, sitting here in North America, I sometimes forget, since we have a global audience, about differences in the rollout across different regions. What about Android Auto? I know you talked a lot about it, both from a Gemini perspective and from an XR perspective.
Andy: Gemini in Android Auto: there's a super neat demo I saw of this where a person was, not actually driving along, but showing how they could be driving along and say, "Hey, Gemini, where am I supposed to be picking up my son or daughter? Can you check in my Gmail for me?" Gemini went and checked their Gmail and gave them the answer.
That improved hands-free experience, to me, seems just super useful. There's an interesting piece to this, though. Lots of the kinds of things you might ask when you're on the road need a factual answer. Am I picking my son up in this building or that building? If it gives me the wrong building, then a seemingly low-stakes question becomes a high-stakes one.
I think that it'd be interesting to understand how they've tweaked the application around the model to make sure that it is giving more grounded and factual answers, because I don't think folks would be terribly forgiving if it got them wrong.
Ken: I'm curious, as a technologist, what's your thought on that? From a privacy perspective or what have you. You're driving down the road, and you ask where to pick up your daughter or your son, how much data is too much? Who owns it?
Andy: In that kind of personal context, and it's definitely a personal context, I'm okay with that kind of sharing. There's been a big bet on Gemini being able to connect to personalized context: your browsing, your Gmail, your YouTube. There were parallel announcements at Next around Agentspace and Gemini being able to connect to the organizational context and bring that corporate context to bear when you use Gemini.
As a knowledge worker, as somebody who works in this area, it poses an interesting dilemma. I want the work that I do to stay with me to some extent, but my corporate InfoSec team or data protection team wants to make sure that the data from the company isn't leaking out. We've got to figure out some balance here, I think. I'm unusual, I said I've worked here for 20 years.
I understand these days that's somewhat unusual, that people often switch companies. If you're going to switch company, there's an implicit agreement here. You don't take any of the materials, but you can take the things that you've learned along the way and apply those to do your job in new places, because that's you getting better at your job.
The more you offload that work to an AI, or the more you work in conjunction with AI, the more, if you move companies, you get part of what you are, or what you've been doing, cut off from you. I think this is going to drive some really interesting behavior. I suspect a whole bunch of new shadow IT, where people are, for a variety of reasons, not talking to their employers about the AI tools they're bringing in, because they don't want to lose them were they to change companies later on.
I talked about Android XR. Android XR is glasses that you wear. Glasses are an intensely personal thing. You put it on your face, and it's very much you and part of you and part of your choices. I would not feel comfortable-- You see already with phones. People are like, "Do I have a work phone and a personal phone? Do I allow a work profile on my phone?"
A work profile on my face, one that switches things around depending on where I am? That feels weird.
I'm trying to figure these things out. I think we need to figure these things out, because otherwise we're going to end up in situations where people start asking, "What's my contribution? What's the company's contribution? How do we, as a collective, work together? What's the right reward or remuneration?"
It may be that I provide the labor, and the company provides the tools as part of the capital. There was a podcast I was listening to over the weekend where Alexia Cambon from Microsoft was talking about maybe there being a new category, in the middle of labor and capital, for agents. I'm not sure I would go quite that far, but I do think there is something different here about the ownership of the work and its outputs. We need to think about that pretty carefully, because we're already seeing productivity pay gaps keep growing and growing. People start to hide their use of AI from the company, and that ends up as a data risk. This is a complex problem that we need to start talking about.
Ken: Another product that we use quite a bit internally and have for some time, but I honestly don't hear a lot of other folks talking about, is NotebookLM. I know you said there was a little bit of a discussion there. What's your take there? What's Google doing?
Andy: NotebookLM has been super popular across Thoughtworks. We've seen teams that you and I have worked on where people have been compiling and sharing research. We've done some really interesting stuff where we've put voice-of-the-customer interviews into a single notebook and then shared them out with people as a podcast, so people can hear, in a more visceral way, what customers might be saying about us.
I've seen people putting in product vision roadmaps. I've seen somebody else using it just as inspiration for change management comms. They're like, "Hey, I've got this document I need to roll out to the organization. How can I do this in different ways? What are the interesting points from this document?" NotebookLM has been super, super useful for us for doing that.
Ken: What is it at the core? What makes it different from just searching my Google Drive or what have you? For those that have never used it, what is NotebookLM?
Andy: NotebookLM is a way of uploading a set of documents and then doing RAG, retrieval-augmented generation, against that set of documents.
Ken: Only against that set?
Andy: Yes, only against that set, I believe so. What Google have done on top of that is, when they generate the summaries, they've used a couple of different models to generate them in interesting ways. One of those is a two-voice podcast that talks much like you and I are talking now, and it's somewhat realistic; the voices interrupt each other. That's where they started. Among the new announcements, they've added video overviews, so now they can generate a video that will explain the content to you.
Which is a neat step on from that podcast summary. Google have actually built their own notebook for I/O. If you want to explore I/O, maybe we can put this in the show notes. It's a pretty neat notebook that allows you to explore all of the announcements and find the things you think are interesting or that might be relevant to you in your context.
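[Editor's note: for readers curious what "doing RAG against that set of documents" means in practice, here is a minimal, illustrative sketch of the retrieval step. It is not Google's implementation: NotebookLM uses learned embeddings and an LLM, whereas this toy uses bag-of-words vectors, and all the file names and text below are invented for illustration. The key property Andy describes is preserved: ranking happens only over the uploaded set.]

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase bag-of-words counts. A real system
    # would use a learned dense embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, sources: dict, k: int = 2) -> list:
    # Rank ONLY the uploaded sources by similarity to the question.
    # Nothing outside this set can be retrieved -- the "grounding"
    # property that makes NotebookLM answers traceable to sources.
    q = embed(question)
    ranked = sorted(sources, key=lambda name: cosine(q, embed(sources[name])),
                    reverse=True)
    return ranked[:k]

# Hypothetical uploaded documents (invented for this example):
sources = {
    "roadmap.txt": "product vision roadmap for the new shopping experience",
    "interviews.txt": "voice of the customer interviews about the shopping experience",
    "recipes.txt": "a collection of favourite soup recipes",
}
top = retrieve("what did customers say about shopping", sources)
print(top)  # the two sources most relevant to the question
```

In a full RAG pipeline, the retrieved passages would then be pasted into the model's prompt, so the generated summary, podcast or video can only draw on what was uploaded.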
Ken: What are your takeaways? Just your observations from watching the event and that sort of thing?
Andy: I think it comes back to that thing we said at the start: Google really feels like they're hitting their stride. The confidence in the way they're announcing things is refreshing to see, actually. That balance of research, tools for people to build with, and products they're using to demonstrate the kinds of things people could be building is really interesting. The products they're announcing span a wide range of industries, which is pretty interesting to see.
We talked about the virtual try-on and the shopping experience. We talked about automotive, and we talked about what it's like for knowledge workers. There's a bunch of different things in there. Whichever industry you're in, Google are doing some interesting things. They're building some interesting things, and they're also building some tools for you to build your own interesting things on.
Ken: All right. Andy, thanks a lot for your take on this. It's always interesting, we can read the press and I'm sure folks have about the announcements, but it's always good to hear an insider's take.
Andy: It's been an absolute pleasure, Ken. Thanks so much for having me. Like I said, I wasn't able to be at I/O in person, but I will be at I/O Connect in Berlin on the 25th of June, so if there's any listener who's going to be there, look me up, come say hi, give me a wave. I'm looking forward to getting hands on and seeing for real some of those announcements that we just talked about.
Ken: Great. Thank you very much.
Andy: Thank you.