Each year, Thoughtworks's Looking Glass report explores the technology trends we think will have far-reaching consequences. We catch up with a couple of our senior technologists, to hear more about what the future holds for enterprise technology.
Mike Mason: Hello, everyone. Welcome to the Thoughtworks Technology Podcast. I'm one of your hosts, Mike Mason, and I'm here today with Rebecca Parsons, who's the Thoughtworks CTO and one of your regular co-hosts. Hello, Rebecca.
Rebecca Parsons: Hello, Mike. Hello, everybody. It's Rebecca Parsons, Thoughtworks CTO. We're joined by two guests both from a group internally we call OCTO, which stands for Office of the CTO. We have Ken Mugrage and Dave Elliman.
Dave Elliman: Hi, folks. Good to be here.
Ken Mugrage: Hello.
Mike: Excellent. Today, what we thought we'd do is Reflections on the Looking Glass, which is a mystifying subject line. Let me explain that. Once a year, Thoughtworks publishes something that we call the Looking Glass, which is an analysis of the technology landscape and tech trends that we think are going to be important to businesses and technologists, of course, in the coming year.
We publish this in December. We have one that's just about a month old now, and we thought we'd do a podcast episode where we talk a little bit about the Looking Glass and what technologies are in there that might be interesting to everyone in the next year or so. Rebecca, do you want to tell us a bit more about the Looking Glass, what's in it, how it's structured, and so on?
Rebecca: We, obviously, as a technology services company, have an internal technology strategy and we've had one of these for the last several years and we started thinking, "You know, maybe it made sense to actually publish this externally as well." We looked at all of the things that we had collected internally. We track many individual trends. In this Looking Glass, there's roughly 100, give or take.
Then we try to step back and look at, "Are there themes? Are there clusters?" In particular, for this publication, we want to focus on what we think the impacts will be on businesses, on business models, on capabilities that businesses might have to develop, and on opportunities that might exist in certain industries. We divide up these trends, if you will, into what we currently have now: five lenses.
These lenses are really looking at broader directions of changes as opposed to the individual trends. In addition, for each of these individual trends, we try to identify where it sits. Is it something that we're being faced with right now, or is it something that is further out on the horizon? To get a sense of how urgent the response might need to be.
We do try to include a response as well. What do we think organizations and individuals should do as a response to these individual trends? Now, of course, much of this advice has to be rather general because the urgency, for example, for a financial services firm might be different from a manufacturer, or a retailer, or a healthcare organization.
We try to not get very specific with respect to particular industries in the overall report, although, over the year, you will see different specialized reports coming out for things like healthcare or public sector or financial services. In that, they'll take that more industry-specific view of some of these trends and these lenses in particular. That's what the Looking Glass is all about.
Mike: Excellent. The Looking Glass has five lenses in it as Rebecca noted just now. What we are going to do in this podcast is just talk briefly about each lens and what we think the implications might be for 2022. The first lens, I'm going to tackle that one. The first lens is called realizing the potential of platforms. Again, we hope to pique people's interests with the way that we talk about these things. Because platforms are not new, why are we talking about realizing the potential of them?
Really what we've seen over the last year or so, and longer-term is that organizations are making investments in building platforms, but unfortunately, they're often disappointed with the results. We think that this is important to call out and to talk about how to do the right thing with platforms because there's a lot of value that you can get from a platform, but largely, today, that value is often not being realized.
The key point that we think is important is when you're working on a platform is to agree on the why of the platform and the what you're going to build. Somebody the other day asked me, "How do you define a platform?" I thought it was a great question because it is so fuzzy and muddy. To me, a platform is something that you build so that you can do something else faster. The platform accelerates something, and if we can agree on the something, then we're much more likely to all get to a good outcome when we put effort, time, and money into building such a thing.
In the Looking Glass Report, we've identified what we think are the three main types of platform. There's developer-focused infrastructure platforms, which are really about accelerating getting software into production. Being able to build it, test it, and then deploy it through to, probably, cloud infrastructure.
There are business capability platforms. That's more like an API platform, an internal API platform an organization where you are encapsulating business functionality within the platform so that it can be more easily reused and remixed to create new things for your customers, to create new value.
Then there's platform business models, which is something different entirely, which is the Ubers and the Lyfts of the world where their platform, their entire business is about connecting buyers and sellers, consumers and service providers, and the act of connecting them all together creates this value.
What we're talking about here is that it's important to distinguish between these three different types of platform because if an IT organization thinks that they're building a developer-focused infrastructure platform but people in the business think that they're trying to build a platform business, somebody's going to be disappointed in that outcome.
Then on this one, what do we think is going to change in 2022 around platforms? Personally, something that I see happening a lot is that there's a continuing expansion of what a platform can do. If you look at the AWS services dashboard or GCP, or any of the other big cloud providers, they are adding new services at a phenomenal pace. Every week, there are more and more things that that platform can provide you that you may not need to build yourself, and so you can potentially go faster, get better scalability, do stuff cheaper by taking advantage of what's in the platform.
I think that's going to continue, and we might start to see moves beyond just infrastructure available as a platform and into things that are becoming a bit more like business services.
I think, in relation to that, there's the continuing need for portability, switching between clouds, interoperating between clouds. There's a lot of different fuzzy words there, like multi-cloud, cloud portability, all that kind of stuff. I think something that we see which is a bit of a pipe dream, which we might caution against is people trying to write stuff that is actually cloud-agnostic. That's probably a little bit of a step too far, but certainly developing things with a view to moving between clouds is important.
Then something we call redecentralization. Back in the early days of the internet and the web, everything was decentralized. Everything ran on a different server. Today, 75% of internet traffic actually lands on Amazon, Google, Facebook, and a few other centralized providers. One thing we are keeping an eye on for 2022 is the move to decentralizing again and to removing the concentration on those major providers.
It's possible that some of the Web3 stuff might help with this, but I think it's very unclear where things are headed with Web3, crypto, NFTs, all of that jazz right at the moment.
Rebecca: I think there's also something around the way a particular organization evolves their platform. They might start with an infrastructure platform and then, over time, as they start to recognize areas of potential reuse, you can start to add new capabilities. This thing that started out perhaps as just a developer platform might start to morph into more of a business capability platform as different opportunities for expanding the platform arise.
One important thing to me about how to be effective with a platform is to ensure that the platform team is viewing this platform as a product and the customers for that product are the developers. In a traditional product sense, developers don't have to use the platform. The roadmap for that platform should take into account, "What do I have to do to lower the barriers for people to use my platform productively? What are the things that I can do to make this platform more desirable?" And use that as the driver for evolving that platform and adding new capabilities.
Dave: I wonder whether-- One aspect of this that often gets discarded is that we've heard a lot about DevOps, particularly in the last 10 years since the continuous delivery book that we wrote. There's been increasing adoption of some of the terms, but the potential really still remains untapped, in that people aren't necessarily changing their development or operating models to actually adopt a self-service platform.
In actual fact, we're seeing a series of missed opportunities where people miss the point of creating a developer-centric platform in the first instance. That prevents them from getting some of the value adds in terms of getting their products to market more quickly and potentially being part of a network economy, all the add-on things that come from fundamentally getting your products to market in a predictable and fast fashion. Continuous delivery is really the operating equivalent of the agile model. We're not seeing that, time and time again.
Mike: I think that's a good point, Dave. The potential has so many facets, so many angles to be fully realized there. That's why we're calling this one out this year. The next lens we'll move on to, Rebecca is going to talk about partnering with AI.
Rebecca: What we're trying to get at with this lens, again, comes through in the title. We really want to talk about the partnership that is forming between humans and the AIs being used. If you think about that word, 'partnership': when there is a partnership, we look at what the things are that we want to do together and who should do what. Sometimes, there might just be advice that comes from the AI that allows the human to potentially make a better decision, because more information is made available.
One of the early IBM Watson examples was in oncology. Because Watson could consume vast numbers of academic papers, when it was making a recommendation to the oncologists, it had information available that the oncologist hadn't had a chance to read yet. It wasn't that it was offering something novel, but you were taking advantage of the fact that the Watson engine could process information so much more quickly.
On the other end of the spectrum, we have completely autonomous AI systems that are making decisions. Very often, they'll have an escape path to say, "Wait a minute, I'm not confident enough in my decision on this." Think of fraud detection or screening of medical tests, for example. You start out trying to give the low-hanging fruit to the automated system, and over time, as the system continues to learn, sees more data, and sees how the human resolved the decisions it couldn't make itself, it can improve and raise the bar of what it can handle.
This is what we're really trying to focus on is this partnership between human decision makers or human job performers. This isn't just about decisions, but more broadly achieving an objective through a partnership between the AI system and the human. That doesn't mean that we think completely autonomous systems are wrong, but at the stage that we're in right now, it's this partnership that we want to focus on.
Dave: One comment I'd like to make is that there is almost a merging of the utility of AI and machine learning into some of the device spaces we're seeing. We're seeing stacks of these things occurring, say in an autonomous robot or a self-driving car. This is not just an intelligent entity; it's a host of intelligent components that communicate to make sets of subsystems that work together.
A lot of that functionality, computation, and data manipulation happens locally, but there's an awful lot that now happens externally to that too. We're seeing a completely different model of how data and compute work together and how data and AI are being deployed: either doing something for you somewhere else, or doing something for you on your local device, and very often a mixture of both.
I think we're starting to see a complete difference in the overlay of the fabric of computation that's leading to the ability to get answers for things to people more quickly or to auto systems more quickly.
Ken: I'll throw in a bit of a spoiler alert, but a lot of these lenses interact and this idea of partnering with AI is one of the places where you have to be really intentional about bias in your models and making sure that the advice that the humans are getting in that partnership reflects what you need it to reflect, reflects reality and it's very, very easy for bias to creep in on these things.
Mike: Great. I think we're done talking about that one. The next lens, Dave is going to talk about. Would you introduce that one, Dave?
Dave: Sure. I would like to talk about evolving the human-machine experience. We've been tracking the interactions that people have with their systems as a trend and as a series of trends within our organization now for a few years. We've seen, of course, the mobile revolution, we've seen the device revolution, and we've seen people's, consumers, and also enterprise expectations change.
We expect richer gestures from our devices now. We've started to interact with devices via voice. We expect to have feedback and uptakes that allow more freeform gestures to be interpreted. This has led us to an increase in expectation about how we use the devices that we interact with, but it also means that certain things need to be in place to enable that to happen.
Again, there is an increasing use of machine learning behind the scenes to interpret your voice or to interpret your actions. Some of that happens on the device, some of that happens elsewhere. There is also an increase in consumer expectations about what they expect to get back more conveniently.
Alongside all of this, there has been an expectation, if you like, around augmented reality, virtual reality, and mixed reality, this spectrum of interaction models, or even those things deployed in cars: an expectation that there's going to be a massive consumer boom in this. We've been watching that forecast curve just move forward in time.
Because if you look at predictions from, say, 2017 or 2018, they were saying the same things about the potential size of the market in one, two, or three years' time. You look at the ones for the next year, and there are similar or slightly larger projections. We've been waiting for the consumer-driven revolution, particularly in AR, to occur. It hasn't happened yet. We've had a few failed experiments; we've had Google Glass, and other devices have surfaced, but everyone's expecting the big boom to happen.
From a device perspective, we're still waiting, even though some notable devices have appeared on the scene. Of course, the big thing that has happened, which is related to this but not necessarily device-specific, is the metaverse, and this has become the poster child of growth within the human-machine experience paradigm.
We've seen companies stake their claim: Facebook have renamed themselves, and Nvidia have a whole creation platform called Omniverse. A number of companies are staking their claim to this. The gaming companies obviously have a natural route to the content creation within these metaverses, and some of them are actually trying to create their own metaverse instances.
In a way, at the moment, nobody really knows quite how big this is going to be, other than the expectation that it's going to be huge, but it raises a number of questions. For a start, I don't think there will be a metaverse. Let's leave the M as a small m and say that there are going to be metaverses, and the need to interact between them, the need to exchange value, is likely. You can see why certain blockchain-dependent technologies or NFTs are starting to be talked about as the mechanism of currency within these frameworks.
There are a lot of questions to be asked. If they're owned by a particular company, then you are going to have the same opportunities for, I guess, interacting with people, but there are going to be small micropayments that will need to be in place for you to interact with that. It is a commercial experience. Within that, there are ethical questions about how one is represented. What happens if you are presented in a different way? What does that mean for your image within that environment? What does it mean for your image outside of it?
There's a lot of social interaction questions that are only beginning to be asked. Of course, the technology's running ahead of all of these things; everyone's racing to be a part of it. In a sense, there are a lot of question marks about some of those technologies, their suitability, their ability to scale effectively, but we run with them simply because everybody else is running with them.
We're seeing more investment in this from a lot of the big cloud providers. Tencent are positioning this in China. We're seeing a lot of the hardware vendors positioning themselves, ready to be the ones that supply the platform for this to run on. It is a big thing. It is coming.
It's a proven model in some senses because we've had virtual environments like Second Life. The first piece of real estate in Second Life that sold for a million dollars was 15 years ago. It's a proven model; it's just not necessarily a proven model at scale, with this level of interactivity and this level of possibility.
Within that lies an awful lot of issues. I think a lot of personal issues that are just an expanded set of new dimensions of over and above the kind of problems that we experience within every day on social media platforms, for example. I think it's incredibly interesting. We watch it keenly and we can see companies positioning, but, in essence, we're watching to find the best way that we can help to guide where appropriate.
Ken: I would like to add something there. Dave, I'm curious, and we haven't talked about this in advance, so this is your live reaction. Some of the foundational technologies here, the AR and VR, when we talk about the metaverse or whatever, are seeing very successful adoption in certain areas like remote training. We've had some clients that have done things around repairing hardware, training people to do that where it's easier to train on a model. What's the maturity in that area of these trends?
Dave: Yes, that's quite interesting. If you step back away from the hugeness of the metaverse, if you like, that's some of the core technologies along that AR to VR spectrum. We're seeing some adoption of some relatively simple AR technologies, and some companies are keen to engage with those. I think the most interesting ones, which you allude to in terms of training, and simulation is within the mixed reality environments where it's the combination, again, of AI and the visual elements of mixed reality in some advanced headset devices, for example, that will actually allow us to create and be a part of some quite advanced digital twin-type environments. There are niche areas that are certainly growing and will continue to grow.
Mike: Thanks, Dave. We're going to keep moving along because we've got five lenses. We've been through three, so two more. Ken's going to talk about hostile tech. That sounds ominous, Ken.
Ken: Yes, a little bit. Worse than that, it's the expanding impact of hostile tech; it's getting worse. First off, I alluded to this a little earlier: Rebecca said at the beginning that there are 100-some-odd trends, and we have five lenses through which we view them. It is more common than not that a trend will appear in more than one lens. It's actually very, very rare that a trend is only in one lens.
This is an example of one where there's a lot of things that you've already heard about around partnering with AI and interactions and language processing, and all that kind of stuff, that we have to think about for the hostile tech view as well. First off, what is hostile tech? Part of it is, I'm sure, what everybody is assuming, it's the bad actors, malicious intent, whether that'd be ransomware, or hackers, or what have you.
Security is certainly not getting any less important, and so that is included here: the need to create software in a secure way, and to understand that these malicious actors are every bit as skilled as the good actors. We often think of a teenager in a dark room running scripts as the typical hacker. No, these are often nation-states.
We really do have to not forget about the security part of it. At the same time, we're also talking about the tracking and sale of personal data as openly hostile, and we use that term purposefully. We want to call this out as hostile. Many years ago, with the rise of search engines and that kind of stuff, we used to say that if the product is free, you are the product.
In other words, they're collecting all the information about you to sell to somebody else, most of the time advertisers. Unfortunately, in the last couple of years, even that bar is gone. I personally bought a new Wi-Fi mesh system for my home, and in order to activate it, I had to sign up for a cloud-based account that would give me updates and things like that. There's no technical reason why they would need to know anything about me to give me a firmware update, but they wanted to know what I was doing and so forth.
Luckily, the retailer I purchased that from has a very good return policy. More and more consumers will choose not to do business with you because of these policies; I purposely chose that way. I'm not saying they'll all choose to go with somebody else, because there are multiple reasons in play. I did a lot of research; this was the best router for my purpose from a mechanical standpoint, and I still actively chose not to do business with the company.
There are lots of statistics out there, Mike, and I actually did an analyst call a year ago, where a fairly large percentage of people say, "Yes, but that's okay. I'm willing to give up my information in return for some value. It might be a club card discount. I'm going to see ads anyway; I want to see the ones that I might take advantage of," et cetera.
I'm sure for some of those people, it truly is what's called informed consent: they really do know what they're signing up for. I suspect for a lot of them, it's not informed consent; they don't really understand how much they're giving away. That's a big part of hostile tech: this collection, sale, and monetization of our personal data and our privacy.
Also under the same umbrella, though, are things like bias. I touched on it a little when Rebecca was talking about partnering with AI; we certainly have to worry about it there. It's not just that, though, you really have to be conscious of it. If you ever really want to go down a rabbit hole, go onto YouTube and watch videos on cognitive bias, where you can't understand what someone is saying until they write it on the screen, and then you hear it perfectly (for those who are not hearing-impaired).
Your brain is tricking you into not seeing your nose right now. There are biases that we need to be aware of that come into play there. Since this isn't on video, I wish everybody had the benefit of Mike trying to see his nose right now. We do have to be aware of those kinds of things.
Thoughtworks has published something called The Responsible Tech Playbook, and there are other ethical frameworks out there as well. Really, companies need to be intentional about this. Think about where your data is coming from and what algorithms you're using. Take Rebecca's example of Watson talking to the oncologist: "Oh my gosh, if that was biased," and I'm sure there was some.
You really have to think about those things. Then, building on what Dave said about expanding interactions, think of interaction with video. There are airports and places like that taking pictures of people's faces, and if the computer determines that they look concerned, they might elevate the security check for that person, or what have you.
If I was from certain parts of the world, looked different, and had a different surname than I have, I would be nervous walking into an airport, not because I had bad intentions, but because I was afraid of being targeted and classified based on who I am and what I look like. Now those people are yet more likely to be flagged, because the facial recognition, the emotional computing-- Thank you, Dave, for typing my cheat note in there. It picks up that, "Hey, this person's nervous," and so it just makes it worse. We really need to be aware of those kinds of things from a hostile perspective.
For this year and moving on, there are some things we need to be aware of. First off, the whole internet of things and the number of things out there which can be hacked, webcams, and what have you, is constantly growing. Be aware that your attack surface is growing. Again, be intentional, think about it, those sorts of things.
I hope that a trend we call secure software delivery, which means thinking about security all the way through, including the systems that build and test your software, your continuous integration systems, your continuous delivery systems, and so forth, will become more of the default. Make all of those secure.
It's important to acknowledge that it's not everybody, but a subset of consumers will get louder about their fundamental right to privacy. Others, as I said, accept it and will continue to accept it. But as a business, and most of our listeners are businesses, understand that consumers are increasingly going to hold you to account, not just for getting hacked and spilling their data accidentally, but for collecting more than you need and then monetizing it.
Rebecca: I guess one thing I'd like to add to that is, as humans, we get very nervous and defensive when people use the word bias, but bias is not just the cognitive biases that Ken brought up. There could be no malicious intent whatsoever, but if you do not understand the system that is generating the data, you can end up with a bias that you didn't know was there, one that certainly has nothing malicious attached to it.
One of the things I'd like to see us get better at is defusing the tension that comes just from the word bias, because it isn't necessarily malicious. Too often, if people start to get defensive, everything gets even worse when you're trying to work through a problem around bias.
Mike: On the topic of hostile tech, another thing I'd call out is addictive tech and the fact that it's so unclear whether that's okay or not. There's a great book from last year by Dr. Anna Lembke called Dopamine Nation, which is all about the fact that, in the modern world, we have access to drugs, alcohol, and experiences that all give us dopamine hits. As you're scrolling through Instagram, you are actually getting little dopamine hits that are enhanced by the machine learning algorithms figuring out what to show you so that you will continue scrolling and can be sold more advertising eyeball time.
On the one hand, there's some very efficient technology going on there, which is great for the provider of that engaging interactive experience. The question is, when does it cross the line into promoting addictive behavior, which then becomes a negative thing for the person involved? Where do you draw the line between the commercial interests and the person's interests? So addictive tech is something I'd also call out in this space. On to our final lens: we're going to go back to Dave to hear about sustainability.
Dave: Thank you, Mike. I think there's a difference here in the way that we're approaching the question of sustainability. It's very much on everybody's lips at the moment. We hear a lot about sustainability, and quite frankly, a lot of it used to feel a little bit separate from us; from what we're hearing, it's becoming a lot closer in terms of reality. We're seeing the effects all around us. You can't switch on a news channel without seeing some new record in terms of weather, or something related to the environment that's a surprise, a change, something unexpected.
We decided to use the terminology that we are accelerating towards sustainability for two reasons. Firstly, the science backs up the claim that the effects people predicted of, for example, greenhouse gases and carbon in the environment are changing, and changing more quickly than anticipated. It's not simply the importance of that for the environment around us; it's the imperative it creates, not just for humanity, but therefore for business.
We're accelerating towards it because we know the restrictions that are going to come into place. We've just come out of COP26; we're recording this while I'm in the UK, and we hosted it in Glasgow, Scotland, this year, well, last year now. The rules being put in place, some of them are doing their normal political job of shifting in time, but many of them are coming into force within the next few years. In political terms and in enterprise terms, that's a heartbeat away.
We're going to see these changes affect us in terms of punitive taxes and in terms of science-based targets initiatives that force businesses to quantify themselves: to look at the carbon footprint of their direct use of carbon and their indirect use through the supply chain. We need to be in a position where we can record all of the energy costs within our business so that we can address reducing them, but knowing about them in the first place is going to be a massive deal for us.
Just as, with GDPR, we needed to know where our data lived, what its risk of privacy intrusion was, who had it, and who had access to it, now we need to know where all our compute is, where the data that utilizes it is, and what it costs. It's about the self-awareness of our systems: understanding what they do, how they do it, and what it costs.
We can't just go to the energy provider and say, "How much does it cost to run my company, and what do I do about that?" and then try to invest in a few trees in Norway as a carbon offset. It doesn't work, and that's a completely bogus approach anyway.
The point is that we are running headlong into these restrictions, and it's going to affect business. Therefore, technology is going to be called on quite directly to provide the answers: who am I, what am I, how much does it cost, what can I do about this, and how can I address the indirect and direct costs?
There is an opportunity to make a difference. Historically, companies have looked at things that don't affect their bottom line as CSR or, now, ESG initiatives; one is about being accountable, and one may be about trying to put some costing in place, things that perhaps limit growth. It's the wrong way to look at it. These things are affecting consumers' buying choices; they want to see companies do the right thing, and that covers a lot of things, but sustainability is definitely one of them. We've also got legislation coming into force: the car industry, for example, can't sell new petrol or diesel cars in the UK past 2030, or hybrids past 2035.
If you are a company that leases cars, you've got a lot of work to do in the next eight years, and eight years is not long to prepare for that. We've seen this with our clients directly; this isn't just speculation. We are seeing people directly worried about this. The chances are that it starts out being about your brand image, about appearing to do the right thing so that people buy from you, but it's also going to become an absolute business imperative. You need to be prepared to act on this.
Thirdly, look at the planet: it is about doing the right thing. I think that's going to become more important overall to everybody. It's already starting to transcend political terms in office, the avoidance of targets, and the downplaying of these things.
Expect change, and expect a lot of disruption, not just the disruption of technology for opportunity. If you are not in a position as a company or an enterprise to respond quickly to change, then this is yet another massive, massive change that's coming.
Mike: Well said, Dave, and I think the other angle on sustainability is that we talk constantly about how difficult it is to get the right people to join your company, to be part of your workforce. Certainly, the younger generation-- God, I sound old when I say that. The younger generation have shown that they will vote with their feet and they would rather take a job for a company that they feel good about. They'd rather make good, sustainable choices in their life where possible, so even for things like trying to build a workforce, having a good sustainability stance starts to be important.
Rebecca: We're also seeing movement towards providing tools, processes, and standards to help address some of these things. We've open-sourced something called the Cloud Carbon Footprint tool, which allows an organization to get a handle on the carbon footprint of its cloud compute and, therefore, perhaps do something about it.
We are a founding member of the Green Software Foundation, which has various efforts to try to make it easier for technology to become greener by understanding what does this compute really cost in terms of carbon, et cetera. I think we've got some positive momentum going that we just need to continue to keep an eye on.
Mike: Well, that was the five lenses from the Thoughtworks' Looking Glass. We will link the report in the show notes so you can check that out. We hope people enjoy reading it and do send us your comments. One thing that is helpful to us, if you enjoyed the podcast, please give us a thumbs up or a star rating and a little comment on whichever platform you're using to listen to this, that helps more people find the podcast. My name's Mike Mason. I'd like to say thank you to my co-host Rebecca Parsons and to our guests Ken Mugrage and Dave Elliman. Thank you for listening.