One of the biggest challenges in developing automated vehicles is ensuring that the vehicle can successfully and safely interact with the environment and other humans. Fortunately, researchers around the world are looking closely at this problem. One of them, Josh Siegel — Assistant Professor of Computer Science and Engineering at Michigan State University — joins Prasanna Pendse in the second episode of the Architecture of Future Tech to examine how trust can be ensured, discussing everything from sensors through to computation, simulation and regulation.
Prasanna Pendse: Hello, and welcome to episode 2 of Architecture of Future Tech podcast. In this volume, we are exploring autonomous vehicle technology. My guest today is Josh Siegel. Josh, would you mind introducing yourself?
Josh Siegel: I'm Josh Siegel. I'm an assistant professor of computer science and engineering at Michigan State University, where I run the deep technology lab. Our job is to make technology boring, which is different, I should note, than making boring technology. We try and work at the precipice of the possible to develop things like self-driving cars, architectures for the internet of things, advanced approaches to cybersecurity. We work across disciplines and really try and be faithful to problem-solving rather than to any one academic area.
Prasanna: Absolutely. All of those things are very relevant to autonomous vehicles, which is the topic of this volume of the podcast. Before we go into the future, let's go back into the past a little bit. How did you get started? How did you get interested in vehicles and autonomous vehicles?
Josh: I grew up around the Detroit area, where I was surrounded by car culture and I saw people driving classic cars down the street and having a lot of fun with it. I got my first classic car when I was 14, a 1955 Chevrolet 210, which is kind of like a Bel Air. I learned about engineering from restoring that car. I learned about mechanics, I learned about chemical reactions, I learned about electrical systems. As I got older, I wanted to keep working with cars. I needed a car that was a little more reliable and a little safer than my '55 Chevy, so I ended up working on a 2004 Chevy Impala.
That car was really, really different to work on. It wasn't something that you could fix with a screwdriver and a socket wrench. You needed to actually program things. You needed to understand code. That really is the start of the trajectory that got me into automotive research, where I started with, basically, boxes on wheels that you could fix with a hammer, and got into the complexities of trying to hot rod cars that had digital systems in them. That got me into working with connected vehicles, and that led me to do work now with automated vehicles at Michigan State University.
Prasanna: Nice. You got started in this world very early. We are here recording this podcast in person, which is definitely the first time. It may happen to be the only time I will actually be in the same room as one of my guests. We are here in the suburbs of Detroit on a beautiful Michigan summer day. I see something in your room. It looks like a table, but underneath it, it looks like an engine. Would you mind describing that a little bit, and telling us what it is and why you have it there?
Josh: Sure. You're looking at my coffee table that's made out of the engine block, pistons, and connecting rods from a Chevy 350 engine. That was inspired by Top Gear, the TV show, but also coming across an engine block that someone had in their backyard that they were trying to get rid of. I pulled this out of their yard and built a table out of it. The glass top is actually an upcycled shower door. I try and scrounge and scavenge. Anything with wheels on it, anything with gas engines, with electric motors, that's what drives me.
Prasanna: You have a deep affinity to anything to do with cars and engines, but also being-- in India, we call it jugaad, which is, you make things happen with what you have rather than necessarily imagining what you may need, hypothetically. You made something that's quite beautiful out of things that somebody else did not want. We are here in the Michigan summer, but let's go into a future since this is a podcast about future technology.
We are going to imagine a winter day in the beautiful Michigan State University campus. Let us say it is snowing, and winters and the snow storms in East Lansing can get pretty heavy. This is a future where there are a lot of autonomous vehicles on the road. Your research today has become widely successful, and pretty much every car that is on the road is an autonomous vehicle.
You're standing outside the engineering building on Shaw Lane and you see a colleague of yours, Professor Jay, let's say, and you have a very urgent message for him. You see his autonomous vehicle coming down the street and you recognize his face, and then you jump in front of the car and flag down the autonomous vehicle. It stops, does not skid, does not hit you, and then you're at the window and you're able to talk to him. Between where we are now and this future day when something like this could happen, what are all the things that need to happen before we get there?
Josh: I think the most important thing is I need to have the confidence to jump in front of a moving vehicle and know that it's going to stop. More pointedly on the tech front, in automated vehicles, we have a control loop that really is about perception, planning, and control. All of these elements are fairly refined today, but they need significant advances to get to that future state. In terms of perception, for the vehicle to stop safely when it sees me coming, it needs to be able to recognize me as a person on the road. It needs to be able to navigate in poor weather situations, maybe low light, maybe where there's water on the lens of the camera or high reflectance from laser that causes the LiDAR not to perform very well.
More than that, it needs to know that when there's a person in the scene, that person has a trajectory, and be able to predict where that person is going next in order to stop, as you say, without skidding, without incident, without causing harm to the people outside the vehicle, but also stress to people inside the vehicle. In terms of planning, that vehicle needs to be able to anticipate, what are these possible future states, what are the other vehicles doing around it, and who's in the next lane, who's behind it that might rear-end it, assuming that we've got high penetration of automated vehicles, but not 100%.
Then control. Traction in wet weather, that's a huge concern. It turns out it's incredibly hard to model the dynamics of traction in environments where maybe you have ice, maybe you have water, and maybe you have a bit of dry surface or some rock salt that you're driving over. I think broadly, those are the areas where we need advances with a little bit more nuance. We need to have the compute capabilities that scale up to support a fleet that size. We need computers that are performant, but they're also efficient. We need algorithms that can take a small amount of data and derive insight from the information that they're provided. We need advances in the sensing capabilities so that we can make better inference about what's going to happen next.
Prasanna: I presume all of these things need to happen fast. You can't wait seconds even to make that decision.
Josh: Yes. In an automated vehicle, a typical control loop for something safety-critical might take about a millisecond. That's absolutely essential. That's how we get these marginal advances that, marginal is not the right word, but it is the correct word at the same time, that take us from human capability and put us into the realm of automated vehicles that outperform humans regularly and do things that we can't.
One of the things latent in this vision that you've presented, Prasanna, is whether people will walk in front of a car because they know it's going to stop. Either we've built vehicles that perform perfectly 100% of the time, or people don't necessarily know the limitations of automated vehicles. That's an interesting area of research that's ongoing right now. There are moral and ethical problems associated with self-driving. You may have heard of trolley problems in the past. Do you pull the lever and cause the train to switch from one track to another and save lives, but in so doing, become responsible for causing harm to those who wouldn't have been harmed if you hadn't pulled the lever?
Automated vehicles, it's both subtler than that and potentially more direct than that. If you run out in front of an automated vehicle regularly and it stops every time, you create these interesting social network effects, where self-driving cars never get where they need to go because nobody needs a crosswalk. They just cross whenever they want to, and cars can't get to their destination. One of the areas that's being addressed in research now is how frequently should a self-driving car choose to delay getting on the brakes? What is the safe threshold? Should self-driving cars ever come closer to people than they have to in order to instill a healthy respect in the need of the vehicle to get where it's going? That comes back to that planning element.
Prasanna: Yes, with the mass and the velocity, there's certainly a danger to people if something doesn't go right, and they need to have the respect and fear of that moving chunk of metal that's coming right at you.
Josh: That's one of the big challenges, which is you never want to do harm. If you look at Asimov's rules of robotics, first, do no harm. If we can instill fear while we know with certainty that we're not going to do harm, that actually can be a social benefit for automated vehicles and drive their adoption.
Prasanna: Makes sense. We touched on a few different topics here. Let's go a little bit deeper into some of these things that are happening. You talked about one of the things that's happening in terms of perception being all the various sensors that exist on vehicles today, and potentially in the future, there'll be new sensors that we don't know of yet. How do autonomous vehicle systems make sense of this? How do these sensors turn into something useful?
Josh: An automated vehicle has hundreds, if not thousands, of different signals coming in from sensors, and there are sensors of all different types. We have LiDAR for light-based ranging, we have radar, we have ultrasonic, we have cameras. At the end of the day, it comes down to sensor fusion: taking diverse data streams and understanding how we get the good from one, how we get the mediocre from another, and how we ignore the bad from a third input.
It's complicated from a mathematical perspective to fuse these sensors. There are a lot of algorithms that are largely self-adaptive if we can configure them appropriately. In essence, what they do is they say, "Hey, if it's nighttime, my cameras aren't very good, so I'm going to rely on LiDAR. If it's raining out, maybe my LiDAR is not very good, so I actually want to use radar that's going to be less susceptible to the small droplets." It does this dynamically on the fly in real time with ultra-low response time. That 1 millisecond or 10 millisecond loop.
Prasanna: Can you talk a little bit more about, what are the algorithms that are being researched? What are the approaches that are being thought through in terms of how to do this?
Josh: Probably the most commonly used algorithm is some variation of a Kalman filter. Effectively, what it does is it looks at the uncertainties of measurements from diverse sensors. Those uncertainties could either be static or they could be dynamic, based on the operating condition of the vehicle. These algorithms learn to iteratively estimate the state, predict the future state, measure the error between the predicted state and the actual next state, and then inform the algorithm's future predictions that way.
It sounds like a mouthful if you look at the math. If you know linear algebra, it's actually not that complicated to get through. It's updating weights and biases to say, based on what I saw just now, here's what I anticipate about my ability to trust the data from a sensor in this next coming time step.
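[Editor's note: a minimal sketch of the predict/update loop Josh describes, reduced to one dimension. All constants and variable names here are illustrative assumptions, not values from any production system.]

```python
# A minimal 1-D Kalman filter: iteratively predict the state, then
# correct the prediction with a noisy measurement, weighting each by
# its uncertainty (variance).

def kalman_step(x, p, z, q, r):
    """One predict/update cycle.
    x: state estimate, p: estimate variance,
    z: new measurement, q: process noise, r: measurement noise."""
    # Predict: the state carries over; uncertainty grows by process noise.
    x_pred, p_pred = x, p + q
    # Update: the Kalman gain trades trust between prediction and sensor.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Fuse noisy range readings scattered around a true distance of 10 m.
x, p = 0.0, 1.0
for z in [10.4, 9.7, 10.1, 9.9, 10.2]:
    x, p = kalman_step(x, p, z, q=0.01, r=0.25)
# The estimate converges toward 10 m while the variance p shrinks.
```

The same weights-and-uncertainties update generalizes to the multi-sensor, multi-dimensional case with vectors and covariance matrices in place of the scalars above.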
Prasanna: That'll allow you to shift your focus as it were from one set of sensors to another. Earlier, you mentioned the term multimodal sensor fusion when we were doing the preparation. Can you talk about that? Is that what we just talked about now, or is that something different?
Josh: This is one of those things in technology where it has a definition and everyone defines it slightly differently. I would say, if you were doing range estimation and fusing estimates from LiDAR and radar and stereo cameras, that is not true multimodal. True multimodal, in my sense, is when maybe we add GPS to the mix, maybe we add an accelerometer to estimate our own state. That adds new dimensions of measurement, where it's not just about looking outwardly from the vehicle. It's not just exteroceptive sensing. We have something proprioceptive as well, measuring inward to understand how our weights and biases change with our own operating state.
The faster we go, the less we want to trust slow-update-rate cameras in a vehicle, for example. I think you will find a lot of people who say that multimodal is any two types of sensors, even if they're doing ostensibly the same thing. I like to think of multimodal as really being sensors in different domains being used to get a better, broader picture of the context of a vehicle and its environment.
Prasanna: This might be even more immediately required before we get into a full autonomous vehicle, AD, world. There is an ADAS world, a driver-assistance world, that we are in already and are making some progress towards. When you have a potential driver and you have a system that can do certain things but not everything in terms of driving automatically, there you may have to deal with in-cabin sensors. Looking at attention, looking at gaze detection. Where is this person's attention focused? Are they looking at their phone? Are they talking? Is there a loud child in the backseat and they're distracted?
Being able to add that into the external sensors that we talked about, or the sensors that are about driving, to then realize that, okay, essentially, here the driver becomes one AI system in a way, or HI system, where the AD system decides that, "You know what? I don't trust the decisions taken by this human anymore because of these things, at least for the next few minutes, and I'm going to be extra watchful." Is that one of the ways that multimodal sensor fusion is going to work?
Josh: That's part of it. If you have a human driver behind the wheel, especially in the near term when we've got ADAS versus full automation, you've got these issues of drunken driving, drugs driving, drowsy driving, distracted driving that we need to solve for. These biometric sensors absolutely play a huge role in that. I would say one of the biggest challenges in automated vehicles is not about the core technologies so much as human interoperability and human handoff. Handoff being the notion that if your car decides it can't drive itself because there's too much rain and the sensors don't work, or some component fails, and it doesn't have a fail-safe module, a human needs to take over control of that vehicle.
As it turns out, right now, we get about a half second to one second of handoff notification, and it takes a human 45 seconds to a minute to resume stable control of driving. Now, grapple with that as you think about it: we're doing double integration to figure out our future positional states, we do that once every millisecond, and we're doing that for a minute. If you have 0.0001% error, you're going to be hundreds of meters off by the time you get to the end of that minute, so you don't know where the car is going to be, but the person also isn't able to resume control.
That really is the big barrier now. That we don't know what we don't know, and that humans who don't understand the capabilities of their vehicles are not ready to intercede and take over when some dangerous scenario might be coming up. To the broader point of interoperability, if you had automated vehicle technology today and you had policy that said, "Let's get all the cars off the road. Let's take two years to build this out. We've got the investment that we need," I would say self-driving is largely a solved problem. It is the fleshy meat bags behind the wheel and outside the car that are irrational agents who are hard to model for.
I say that as someone whose keys you'll never take away. I've got my classic cars. I'm going to keep driving them. It's not only the technology that's the problem. It's the technology inter-operating with the humans. To close the loop, that's where multimodal sensing does come back in. We need to understand inside the car and outside the car. Can someone resume control? Are they being attentive? Is someone going to jump in front of the vehicle? If they jump in front of the vehicle, how far are they going to go into the road?
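[Editor's note: to put rough numbers on the handoff drift Josh describes, here is a hypothetical sketch that double-integrates a small, constant acceleration error at a 1 ms loop rate for one minute. The bias value is an assumption chosen for illustration, not a measured figure.]

```python
# Double-integrating acceleration into position at a 1 ms loop rate.
# Even a tiny constant bias in the acceleration estimate grows
# quadratically: position error ~ 0.5 * bias * t^2.
# The 0.1 m/s^2 bias below is an illustrative assumption.

dt = 0.001            # 1 ms control-loop period
bias = 0.1            # assumed acceleration error, m/s^2
v_err, x_err = 0.0, 0.0
for _ in range(60_000):    # one minute of handoff time
    v_err += bias * dt     # first integration: velocity error
    x_err += v_err * dt    # second integration: position error

# After 60 s, the position estimate is off by roughly
# 0.5 * 0.1 * 60**2 = 180 meters.
```

The exact magnitude depends on the assumed bias, but the quadratic growth is the point: small per-step errors compound into large positional uncertainty over a minute-long handoff.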
Prasanna: You talked about compute. You need to make these decisions faster. You're using Kalman filters, which, from a linear algebra perspective, aren't super complicated. As time goes on, the kinds of decisions that need to be made in-vehicle at that instant, within a millisecond, start becoming more and more complex. What kind of compute do you think we are going to see by the time this vision that we had earlier becomes a reality? What is the kind of compute power that you think will be in a car?
Josh: Compute for automated vehicles is already changing rapidly. We've seen a move from few-core high power systems to multi-core low power systems. Certainly, for deep learning, that is one approach that makes a lot of sense. Increase the core count, run more operations in parallel. Frankly, get compute that doesn't always return the right answer 100% of the time because, effectively, what you're doing is you're predicting multiple future states, weighting them and figuring out what's most likely here, or what's the average best outcome?
More broadly than that, more broadly than going parallel compute and high efficiency compute, we are seeing the beginning of an era of the software-defined car. Some of that compute is actually taking place in the cloud or at some remote computing environment. We're seeing a split between latency-critical operations, things where you're responding to a stop sign or someone jumping out into the road. Then we're seeing these latency-insensitive operations like route planning take place, perhaps, outside a vehicle at some data center. That is probably three to five years on the horizon. As we see increasing uptake of autonomy, as we look at things like federated learning, learning from vehicle fleets writ large, that's going to play a very significant role, I would say.
Prasanna: You're looking at compute happening not only in vehicle, or maybe even you go one level deeper and say there's a bunch of compute that happens on a per sensor basis or per area of a vehicle per zone, all the way to what I've been hearing, the term fog compute, where you're looking at, it's not really full cloud, as in some central cloud, but there is a distributed part of a cloud that exists. Maybe it is part of the infrastructure, maybe it is part of a peer-to-peer network that can be created by several of these vehicles together. All of these things are making rapid progress. One of the concerns that I keep hearing about is the heat. Is that a challenge, energy consumption and the heat output of this as being one of the things that need to be solved?
Josh: First, to the point of fog compute, I would say we have a lot of that already as we think about building connected and automated vehicles. Automated intersection management systems that coordinate vehicle passage without traffic signals exist, and that's done with an intersection controller. You've basically got a server running at every four-way intersection or more that coordinates vehicle traffic through it. This paradigm is already out there in industry, both in research but also in practice.
To the point of heat and energy efficiency, self-driving is not mining cryptocurrency. There's a lot of heat generated, there is a lot of deep learning going on, but it's not heat for the sake of heat. Read into that what you will. The heat generation from the compute in vehicles is not a problem. What is a problem is actually the energy consumption because if you've got an electric vehicle and you're trying to power compute for self-driving in it, that's a significant percentage of your battery capacity that goes into the self-driving versus actually propelling you forward.
This is an artifact of two things. One, electric vehicles are incredibly efficient, and so it doesn't take a lot of watt-hours per mile, but also, the compute in the vehicle has not been optimized, until very recently, to not use a lot of power as your car drives down the road. What you're effectively seeing is this nearing of parity between your motive energy and your compute energy in a vehicle. I think that will need to change in order to drive adoption of electric vehicles, which, because they're natively drive-by-wire for the most part, will drive adoption of automated vehicles, and it becomes a virtuous cycle.
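[Editor's note: a back-of-the-envelope sketch of the motive-versus-compute energy comparison Josh describes. Every figure below is an illustrative assumption, not a measured value for any specific vehicle.]

```python
# Compare motive energy vs. self-driving compute energy for an EV.
# All numbers are assumptions chosen only to show the arithmetic.

motive_wh_per_mile = 250   # assumed EV consumption, Wh/mile
compute_power_w = 1500     # assumed self-driving stack draw, watts
speed_mph = 30             # assumed average urban speed

# Energy the compute stack burns per mile traveled at that speed:
compute_wh_per_mile = compute_power_w / speed_mph   # 50 Wh/mile

share = compute_wh_per_mile / (motive_wh_per_mile + compute_wh_per_mile)
# Under these assumptions, compute is about a sixth of total energy;
# at lower speeds or higher draw, it moves toward parity with the
# energy spent actually propelling the car.
```

The key dependency is speed: compute draw is roughly constant in time, so the slower the vehicle moves, the larger compute's share of each mile's energy budget.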
Prasanna: When you talked about a server in every traffic signal, that got me thinking of every movie where there's an evil hacker who's trying to disrupt something, or there's a bank robbery and they have a hacker who is changing traffic signals for the benefit of whatever evil plan they may be hatching. From a security perspective, does the distribution of decision-making and compute to so many places simply create a very large surface area that you now need to secure? What are the challenges that you're seeing from a security perspective in this world?
Josh: I think Fast & Furious is a great movie franchise. They get a lot of things wrong, but some of the hacking they actually get close to right when they're doing signal control or when they're doing some of the car hacking in the earlier movies at least. I think your point is right that these are honey pots. They're large attack surfaces, they're hard to secure, they're critical infrastructure, and bad actors go after those systems more than they would go after systems that are smaller impact or more tightly secured.
I always come back to, when we talk about car hacking or infrastructure hacking, this idea of, is the hack that we're talking about really the easiest way? When we talk about hacking controller area network, the network that makes your car run, it's hard to hack that, but it's easy to cut a brake cable. I think that's where I am on the intersection management as well. Could it be a problem? Yes. Could it be a huge problem? Absolutely, but there are easier ways to sow discord and chaos.
I think that is my bigger concern. It's the people who willfully run the red lights, not the people who hack the intersections. In the same way, I'm not even worried about people cutting brake hoses; I'm worried about people who don't inflate their tires properly. Way more accidents are caused by poor vehicle maintenance or inattentive driving than will ever be caused by malicious attacks.
Prasanna: The news, though, is going to talk about a particular vehicle that was hacked in under 12 seconds while it was driving and was stopped, or you look at the enterprise data leaks and so on. Security certainly is something that creates a lot of concern among people, regardless of whether it actually causes a rise in accidents or not. Is there something that the industry is doing, or is not doing, that should be done to take away at least the low-hanging fruit in a lot of these areas?
From what I've seen in the past, when you get into devices, once they work, people stop looking at them beyond that. Early on, electricity meters had basically no security, and anybody driving by could read your electricity consumption. Now that's not the case anymore. Simple things like authentication, not using default passwords, encryption of the data that's going through, or even just needing a key to be able to access certain functions of a vehicle, and not just the same key that every dealer in the whole world has.
Josh: I have to be careful how I frame my response because I work with the auto industry closely, and they've funded me in the past and I hope that they will in the future. What I will say is their hearts are in the right place. There are definitely standards that are being worked towards for connected and automated vehicle security. The security landscape within automotive is a million times better than it was five years ago, is a million times better than it was five years before that. I think it is a significant concern, but it is being thoughtfully addressed. I'm hopeful that best practices will catch up with vehicles soon.
I think the broader point that you raised is that people see something on the news and they freak out. Part of security culture, part of security hygiene, is educating oneself on the capabilities of one's devices, good and bad. I would bet that almost all of your listeners, if not all of your listeners, have never read through the end user license agreement for their vehicle. I would lay money on the fact that all of your listeners have not read it for a friend's vehicle that they've been a passenger in.
Their friend's vehicle still has a controller area network; it's got sensors, it's got inward-facing data streams that know who's in the car, where they're sitting, what they're listening to on the radio, where that vehicle is. Because of something called a clickwrap agreement, when you open the door and get into a vehicle, even if it's not your car, that data goes somewhere and someone has access to it. To me, that's far more concerning than the security issues that people raise, because you're willfully giving all of your personal information to someone, whether it's OEMs or third parties, that you don't even think about day to day. That, in aggregate, is a far larger issue, I think.
Prasanna: You've got to do implicit consent at that point, and you need to turn that into-- how do you actually say, "Okay, this is my data. I do not allow you to use it"? Homomorphic encryption, maybe, or some other techniques to do that someday. One of the things that I find interesting about autonomous vehicles is the assertion that an autonomous vehicle needs to drive safely for, I don't know, 8 billion miles before it is considered safe. Before an autonomous vehicle gets its driver's license, it needs to show that it can drive 8 billion miles without crashing into anything.
I don't know about you, but I didn't need to drive that much to show that I can drive. How do you think this is possible in the amount of time that we have for cars to actually drive that much and get to demonstrate to regulators that they have met the safety criteria?
Josh: For regulators, it's easier to say no than to say yes. Sometimes you just make a goal so hard to reach that it's effectively saying no, because then you're not responsible for the outcome, since it's not likely to come to pass. I think your point is valid, that we hold self-driving systems to a higher standard than we hold human drivers. I don't frankly think that is to the benefit of society. I think it'd be better to get drivers who are not comfortable driving off the road. I think that there are plenty of people who drive out of necessity who don't want to, who are hazards potentially to themselves and to others.
Latent in that question is, how do we get these miles? Some of it is in reality and some of it is in simulation. Frankly, most of it is in simulation. There's also the question of, where are we today versus getting to that 8 billion mile mark? If you look at data from 2021 and look at miles per disengagement, with disengagement as defined by manufacturers themselves, the best of the best are doing about 40,000 miles per disengagement. That's actually pretty close to on par with average human performance right now.
On the one hand, we've largely matched human performance before having scary events happen. On the other, we've got this big number that we're trying to get to that we may never reach in reality, but we can use simulation to catch up. Maybe the real question is, is 8 billion the right number? Is there some other way of evaluating both what I would call local and global welfare for automated vehicles?
If you are a safer driver than an average automated vehicle, maybe you should be able to keep driving, but that's inequitable. If we put automated vehicles out there, maybe you are in a slightly less safe vehicle, but we get more vehicles out to more people that are safer than their average driving. Net, we save lives, we save time, we save efficiency. That really is one of the big questions that companies and legislators are struggling with now: deciding what counts as safe enough.
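[Editor's note: a quick sketch of why the 8-billion-mile target pushes developers toward simulation. Fleet size, speed, and utilization below are illustrative assumptions, not figures from any real program.]

```python
# How long would it take a test fleet to log 8 billion real miles?
# All inputs are assumptions chosen to show the order of magnitude.

target_miles = 8e9
fleet_size = 1_000         # assumed number of test vehicles
avg_speed_mph = 30         # assumed average speed
hours_per_year = 24 * 365  # assumed round-the-clock operation

miles_per_year = fleet_size * avg_speed_mph * hours_per_year
years = target_miles / miles_per_year
# Roughly 30 years even under these generous assumptions, which is
# why most validation miles end up being driven in simulation.
```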
Prasanna: From a regulatory perspective, one of the things that I've been thinking about is how do we move from, essentially, a deterministic list-based regulation to a more probabilistic regulation? I think what you're saying is it's not just probabilistic, but it's also adaptive. It's also adapted to the person and the situation that you need to be in. Autonomous vehicles may be safer in certain situations, in the hands of certain people than others, which if I'm just thinking from a regulatory perspective, it's hard enough to go from deterministic to probabilistic, and now you're saying, now customize it, make it adaptive to each person. That sounds like a challenge in and of itself.
Josh: It doesn't even have to get that granular, but the reality is, driving is very different around the world. If you travel to 10 different countries and drive, you find that the rules of the road are very different, but also the way that people follow the rules of the road is even more different. It gets down to the ZIP code level in the US. We've got these little pocketed regions called ZIP codes that are maybe five kilometers by five kilometers or smaller. When you cross from one to the other, maybe right turn on red is legal in some places and illegal in others. The way that we stop for school buses, even though the law is the same everywhere, varies from one ZIP code to the next.
Evaluating AV (automated vehicle) performance is very different, not only for individuals, but in how vehicles perform across these regions. Go to Boston: people drive on the shoulder of the highway during certain hours of the day. That's a scenario that people don't train for in Michigan if you're building a self-driving car. If you are training a vehicle in California, it's got to deal with wildfire smoke, but it doesn't deal with snow like we get here. All of these are big challenges, and that's why there really is not going to be a one-size-fits-all metric anytime soon.
Prasanna: I was talking to one of our colleagues in Germany about autonomous vehicles, and there was a lot of optimism that given how the roads are, that we are close. I said, okay, now let's bring that same box that does autonomy well in Germany, let's bring it to India and see how well it performs. It's a completely different dynamic. That was a moment of like, we haven't even thought about how to drive in India or Lagos or wherever else where the traffic is notorious.
One way to do this, you talk about simulation. One of the ways to simulate a lot of these different situations is, instead of sending cars everywhere, you can do that in simulation. You've done some interesting work in simulation in the past. Can you talk about that briefly?
Josh: We've done three simulation studies in the deep tech lab. Two of them look at the human factors of self-driving, and the third looks at how we develop better algorithms. We developed simulations that try to figure out, what's the optimal level of scared that a pedestrian should be if they try and cross in front of a moving vehicle? We conducted a survey with that. We had a game where people knew that vehicles either were or were not automated, or sometimes we surprised them by not indicating. We had them cross the road as quickly as possible to get the high score, and saw whether, if they had knowledge of whether or not a vehicle was automated, they would take different risks around it.
We saw that as people became familiar with autonomy, they would take more risks around that vehicle, and might actually become less safe as a pedestrian and impact the transit time of that vehicle. We're continuing to do work in that to figure out, how do we optimally strategize indicating that a vehicle is automated, that it sees you, and then how it responds to you as you're crossing?
Another study that we are launching now looks at the trolley problem and looks at ethnographic elements of it. How do you respond to different moral and ethical dilemmas based on your own background? I would say the most interesting from a marketing perspective or an outside view perspective, is that we've done work on adversarial self-driving. When you grew up, if you learned how to drive, someone might've told you, "Drive like everybody's out to get you." Certainly, in Michigan, that's something that you hear in driver's education. Drive like everybody's out to get you.
We actually built this in simulation. We created a coupled network of reinforcement learning agents, where we had a protagonist who tried to get to their destination, and an antagonist whose job was to keep you from getting to your destination. We ran this for a few thousand cycles. What happened was, over time, the protagonist got better at avoiding the antagonist. The antagonist got better at hitting the protagonist. When we took the protagonist and put them in a non-adversarial environment, they were able to avoid all sorts of incidental collisions. Someone who's not keeping their lane well, someone who is drowsy behind the wheel, who might drift in front of the vehicle. You can dodge that, you can anticipate that a lot better.
We haven't done billions of miles. I would say we've done thousands or tens of thousands of miles, but it demonstrates the potential to test things in a virtual environment that we would never be able to in real life.
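[Editor's note: a toy sketch of the coupled adversarial setup Josh describes: a protagonist agent tries to cross a short road segment while an antagonist agent tries to collide with it, both learning with tabular Q-learning. This is a hypothetical illustration of the approach, not the lab's actual code; the grid size, rewards, and hyperparameters are all invented for the example.]

```python
import random

random.seed(0)
LANES, LENGTH, ACTIONS = 3, 5, (-1, 0, 1)  # lane change: down/stay/up

def clamp(lane):
    return max(0, min(LANES - 1, lane))

def q_choose(Q, state, eps):
    """Epsilon-greedy action selection over a tabular Q-function."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def run_episode(Qp, Qa, eps, alpha=0.1, gamma=0.9):
    px, pl = 0, 1           # protagonist position and lane
    ax, al = LENGTH - 1, 1  # antagonist position and lane, head-on
    while True:
        state = (pl, al, ax - px)
        ap = q_choose(Qp, state, eps)   # protagonist picks a lane move
        aa = q_choose(Qa, state, eps)   # antagonist picks simultaneously
        pl, al = clamp(pl + ap), clamp(al + aa)
        px, ax = px + 1, ax - 1         # both always advance
        crash = (px == ax and pl == al)
        done = crash or px >= LENGTH - 1
        rp = -1.0 if crash else (1.0 if done else 0.0)
        nstate = (pl, al, ax - px)
        # Opposite rewards: the antagonist is paid for collisions.
        for Q, a, r in ((Qp, ap, rp), (Qa, aa, -rp)):
            best_next = 0.0 if done else max(
                Q.get((nstate, b), 0.0) for b in ACTIONS)
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + alpha * (r + gamma * best_next - old)
        if done:
            return not crash            # True if protagonist survived

Qp, Qa = {}, {}
wins = [run_episode(Qp, Qa, eps=0.2) for _ in range(3000)]
success_rate = sum(wins[-500:]) / 500   # protagonist's recent survival rate
```

Because both agents keep adapting to each other, the policies co-evolve rather than converging to a fixed winner; the practical payoff, as in the study, is a protagonist policy that has been stress-tested against a worst-case opponent before it ever meets merely careless ones.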
Prasanna: That is very interesting. When I was learning to drive in Michigan, nobody told me that. Nobody told me to drive like everybody's out to get you. When I was learning to drive in India, that is definitely what I was told: just assume that they're trying to run into you, and figure out how to keep going. With that, we can leave our audience with this thought: drive like everybody's out to get you. If you're listening to this podcast while driving, this is the absolutely perfect time to ask you to do that. Thank you very much, Josh.
Josh: Thank you for having me.
[END OF AUDIO]