
Google I/O 2025: What were the key takeaways?

As Head of Ecosystem Development, a big part of my role at Thoughtworks is to engage with our larger partners, particularly the cloud service providers such as AWS, Microsoft Azure and Google Cloud, to deepen our understanding of their offerings, see where each has particular strengths and identify where those strengths are most applicable to our clients and the work we do with them. Although I was unable to make it to Mountain View in person this year, I was able to tune in virtually and, after catching up with some of the press coverage, I wanted to share some thoughts on the recent Google I/O announcements.

 

There's been extensive coverage already, so I won't rehash everything. Instead, I want to give a quick summary of my key takeaways and then zoom in on a couple of items that demonstrate what I think is most relevant for us at Thoughtworks, and for the clients we serve.

A lot of announcements!

 

Google's theme was "from research to reality", and they weren't kidding when they said they've been "shipping at a relentless pace." The prevailing opinion has been that Google found themselves on the back foot when ChatGPT was first released. You may have heard their refrain: "you know, the 'T' in GPT is for 'Transformer', which Google invented...?" They had a deep bench of research, but it didn't seem to be showing up in products.

 

The overwhelming feeling when watching I/O is that Google is now really hitting its stride. This confidence was palpable at Google Next back in April, and it feels like it's just getting stronger. They are no longer playing catch-up: they have the leading models and leading price/performance, and are successfully integrating these into products whose user bases range from hundreds of millions to billions.

 

From one angle, it could be said that there was still the occasional glimpse of the classic "Google ships the org chart" approach that Google watchers are all too familiar with. But I have a different view: what struck me is the level of coherence this time around. What was presented wasn’t simply a grab-bag of products; you can actually see how features fit together and build on each other. 

The unifying theme: Gemini is now at the core

 

The big story is that Gemini is now fully at the core of everything Google is doing. It’s the horizontal AI layer that underpins everything, moving features progressively from research to the underlying platform to the ‘productized vision’. DeepMind CEO Demis Hassabis calls the group Google's “engine room”, which feels pretty accurate. 

 

The speed at which this engine is moving, however, creates new and interesting challenges for product designers and managers; no sooner has the train been built than it has been upgraded to a jet engine, and then again to a rocket. The approach Google have taken to address this is to release wholly different applications or experiences, each of which caters to an audience moving at a different pace.

 

This is clearest when looking at the interplay between Gemini and Google Search:

 

  • ‘Search’ is the mature product. AI Overviews are now built in for some queries, but such changes are only gradually introduced here. 

  • ‘AI Mode’ requires deliberate action on the part of the user. It introduces more AI features, nicely packaged, but may not yet be ready for everyone (or everyone for it).

  • And then there is the ‘Gemini App’, which feels closer to a set of building blocks. Users of this application have a lot more options and choices to make, but they get the benefit of more experimental features. 

 

Multiple experiences are released into production concurrently, as a stack of independent applications. As features first released at the foundational layer are tested and mature, they are allowed to ‘graduate’, moving up that stack.

 

In setting things up this way, I believe Google is also establishing a pattern: a new norm for AI-infused product development, with the journey from experiment to finished product as the reference example, much as their ‘perpetual beta’ era helped popularize the early continuous delivery approach.

Google's evolving suite of products

 

AI Mode: shopping experience

 

Within AI Mode, a new virtual try-on feature sits at the heart of the shopping experience. It exemplifies this new experimental product approach, and it seems clear that this is going to be an important area for retail clients to explore. With a custom image model trained to understand both people and material properties, the expectation is that retailers who provide high-quality, accurate product data will be more discoverable by the right customers, and that this will drive sales. It's a natural extension of Google's overall philosophy regarding SEO: ‘focus on publishing high-quality content, and let us take care of bringing you an audience’.

 

Of course, signalling quality and price point is a role that branding has long played: rather than examining each item in detail, knowing who makes something can shortcut your evaluation process. However, brand is also deeply associated with identity, and people choose clothing in part because they want to be associated with the messaging that accompanies it. It will be critical for retailers to find new ways to keep building brand associations in an ever more algorithmically driven department store.

 

There is also likely to be a tension between the factual correctness of the product information and what people come to understand as the real driver of a sale. With our current eager-to-please generative techniques, how do you prevent them from always showing a perfect fit? Finally, there may also be ethical considerations around whose photo can be uploaded, and which clothes are ‘tried on’. The responsibility for tackling this kind of potential misuse sits primarily with Google, but it would be wise for retailers to stay vigilant in case things go awry.

 

Google Beam

 

In a working world increasingly focused on the use of AI agents, it's interesting to see an investment in bringing out the most human aspects of person-to-person meetings, even when they are held remotely. When I first saw Project Starline in 2022, it filled a small room. By 2023, it was the size of a large cabinet. Now launching as Google Beam, it's a large screen on a desk: not a huge leap from many of our current AV setups. With some indications that pricing will also be on a par, the ‘graduation’ model again provides a gentle onboarding path for leading-edge businesses.

 

To date, demonstrations have focused on one-on-one conversation, and though the technology is being integrated into Google Meet, the hardware is a core component of the full experience. This creates an interesting constraint: in the scenarios where this would be most useful (a job interview, a doctor's appointment or a high-stakes negotiation), it's probably more important that the counter-party is the one who gets the benefit.

 

As it is released for Workspace users (which is where one would expect the majority of Beam customers to be), the real-time translation feature could prove compelling. Subtle body-language cues will be even more valuable in the multilingual conversations this enables. That said, until it's more widely adopted, businesses should expect to run experiments to find the optimal use cases.

 

Flow

 

By now, you’ve probably seen at least one video where realistic people are expressing surprise that they are in fact an AI creation. Certainly, my social feeds have already been buzzing about these new capabilities of Veo 3, where the addition of voice to video seems to have brought about a step change in people’s perception of the quality of the outputs. Flow has been introduced as part of Google Labs; it is the next step up the stack, a newly productized version of these models, an AI movie studio.

 

Given its Labs status, some have been surprised by the rather less experimental-seeming price, especially for the Ultra version, which would cost the best part of $3,000 for a year. Compared to professional filmmaking equipment, however, it seems better positioned, and I don't think it's an accident that there's a channel dedicated to drone shots in the demos.

 

To be clear, even in the more expertly created showcases there is still a strange quality to the movies. I think it may be down to relatively simple things, such as the limited clip length resulting in a lot of quick cuts. But there are many places where a few very short clips are exactly what is needed (Google demonstrated this in the I/O event video countdowns), and it's likely we'll see these tools rapidly adopted by creative teams across many organizations. It still takes a professional understanding of the craft to really make the most of the tool, but with some of the time and cost of creation coming down, adding idents or other motion graphics to company webinars, or even internal meetings, is now within reach. All businesses should start thinking about additional places where they could enhance the quality and experience of the materials they publish.

 

Gemini on Android Auto

 

All neatly developed hypotheses bump into reality at some point, and the choice to go with “Gemini” on Android Auto, rather than integrate the more mature features into Assistant, does muddy the waters a bit from a branding and packaging perspective. However, the demo of Gemini helping check emails for destination details was spot on. It feels like a genuinely useful improvement to the hands-free experience, and exactly what people expect from a smart assistant. Of course, it's also worth mentioning that if it gets factual answers wrong while people are driving, I can't imagine they will be too forgiving, so there may be a need to keep it under a more experimental banner until it has seen some use.

 

The interesting question for automotive clients is going to be when this moves from the more infotainment-oriented Android Auto into the Android Automotive operating system itself (as was announced in the expanded partnership with Volvo). That gets into safety-critical operations, where it's important to ensure drivers know both what they can ask of Gemini and how it will respond.

 

NotebookLM

 

NotebookLM is much further along its journey from Labs. In September 2024, the combination of a novel ‘conversational’ approach to audio summarization with tightly focused, document-grounded RAG drew a lot of attention to the tool. In the months since, it has seen continuous incremental improvements, allowing users to adjust style and tone, pass ‘notes’ to the ‘podcast hosts’, and even join in the conversation. Following the recent addition of mind maps, video overviews and a new mobile app were announced at I/O.

 

NotebookLM has been hugely popular within Thoughtworks, where colleagues have used it to support a surprisingly diverse range of activities. Compiling and aggregating market research, then querying for patterns and themes, now feels quite commonplace across a number of teams. Giving a voice to ‘voice of the customer’ surveys has been particularly powerful. Another team used it as an alternative to a roadmap, sharing a product vision that stakeholders could engage with directly. It has also proved inspirational: one team had some relatively dry, yet important, material to share, and on seeing the points the overview pulled out, they were able to craft a more engaging communications strategy.

More to come...

 

In pulling together this overview, I was disappointed to note that many of the announced products and experiences are not yet available to me in the UK, or to others in Europe. Presumably this is not just a progressive roll-out strategy but a consequence of the different regulatory environment. Contrary to the notion that regulations exist simply as a drag on progress, it's worth remembering that they can also reflect the values and legitimate concerns of different communities. It would be great to see even more experimental products released in a way that acknowledges and seeks to address those concerns from the outset, rather than just gating release by geography and waiting it out. Perhaps a strategy where experiments start with the strictest restrictions on data usage, allowing broader data use as more mature services ‘graduate’?

 

This is, of course, also a reflection of my own impatience: the opportunity in each of the items above is very clear, and I'm keen to get working with them and start implementing solutions. It's an exciting time, and while I've only been able to touch on a few items, I hope it's clear that there are many areas that will have a major impact on each of our businesses. If you'd like to explore more yourself, I'd recommend checking out the Notebook that Google created with all the I/O announcements.

 

I will also be heading to the I/O Connect event in Berlin next month (June 25th). I'm really looking forward to getting hands-on with some of the new hardware and digging deeper into the features we've been hearing about. If you're planning to be there, do reach out; I'd love to connect.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
