The world of autonomous vehicles is full of hype and promise, but it's easy to overlook the work that’s happening in the space.
In this, the very first episode of the Architecture of Future Tech podcast, Prasanna Pendse talks to Joerg Seitter, Head of Advanced Engineering at embedded systems design and manufacturing company ETAS. They discuss the state of the autonomous vehicle industry today and delve into technology and production challenges, establishing a broad view on the field from which the rest of the series can dive even deeper.
Prasanna Pendse: Hello and welcome to the Architecture of Future Technology podcast. My name is Prasanna, and I will be your host. I love science fiction and I'm fascinated by what the future may hold. I'm especially curious about future technology and how it's all going to work.
Now, there's a story about a guy who saw how all of this technology works in the future. It's in the Bobiverse series by Dennis E. Taylor. If you haven't read the Bobiverse, go read it. Now, this guy is a software developer. He gets run over by a car. He wakes up 100 years later, and apparently his brain was preserved and copied onto some super-advanced computer. That guy saw the future. I don't want to see it that way. I'm taking a different approach. I'm going to talk to people who are working on future technology. We will try to dissect it and reverse engineer it and explore how we can go from here to there, and we will use our imagination. (Thank you, Josh Groban!)
We will also talk science and math and technology and psychology and other subjects whose classes I may or may not have skipped in college. The first piece of future technology we will look at is autonomous vehicles. In this episode, we are going to go breadth-first and get a sense of everything that needs to happen for autonomous vehicles to roam our roads. In later episodes, we will explore more specific areas in some depth. Since we're talking about the breadth of the autonomous vehicle landscape in this episode, let's start with someone who actually sees the breadth of the industry from his position as the head of advanced engineering at ETAS, a Bosch company. Please welcome my first guest, Joerg Seitter. Joerg, would you like to introduce yourself?
Joerg Seitter: Yes. Hello, my name is Joerg Seitter. I'm working at a company named ETAS. ETAS is a subsidiary of the well-known company, Bosch. I have the role of head of advanced engineering, where we look three to five years into the future at radical technology.
Prasanna: Yes, great. This podcast is exactly about that — it's the architecture of future technology, and we are talking about autonomous vehicles. So, let's say we are in the future and we are looking back at today, at 2022, and looking at what are all the things that had to happen between 2022 and this time in the future to make autonomous vehicles.
Let's start by looking around in the future and we see a lot of autonomous vehicles. What autonomous vehicles are there? Is there just “the” autonomous vehicle, or are there a lot of different types of autonomous vehicles?
Joerg: Oh, I'm pretty sure there will be lots of different types. Basically, starting with aerial vehicles: there will probably be flying delivery drones, things like that. Then classical vehicles, taxis, robot taxis maybe, delivery vehicles, autonomous ships, autonomous trucks. I think there are a lot of ways autonomous vehicles can look.
Prasanna: Is there a way that the industry classifies them?
Joerg: I'm looking at that mainly from the automotive market. In automotive, we have different systems of classification for vehicle types, like passenger cars, trucks, and so on. The grade of autonomy is also defined. I'm not exactly sure how the definitions are done in other domains.
Prasanna: In this future that we got to, with a lot of different autonomous vehicles all over the place: was there one big problem that needed to be solved, or were there a lot of different problems for each type of vehicle?
Joerg: Yes, I think there were, and probably there still are in the future. Hopefully, we will have solved them all by then; even that, I'm not sure about. There are a lot of different problems. At the absolute core is always the question of safety. When we build such a vehicle, how can we ensure that it does not harm anyone or anything, structures or people?
Then, to reach this goal, there are a lot of technological problems to be solved along the way: what technologies to use, how to make them safe. There is also this question of fail-safe systems versus fail-operational systems, systems that have to continue to work even when things go wrong. There are plenty, plenty of problems there.
Prasanna: What are the different limitations that each type of vehicle presents? Is there anything that is uniquely different in some of these?
Joerg: Yes. When we take it from a product view, for each type of vehicle we probably have use cases describing what this vehicle shall do, and coming from these use cases, different constraints apply. As you already mentioned, there are technical constraints: the size of the vehicle gives me a constraint on how big the technology can be that I want to install there. A big truck gives us much more space to install things, while a passenger car has much more limitations.
Also from the use case point of view: if I want to build an automated delivery truck which drives a certain route, let's say along the US east coast or west coast, I can say the constraint is that it will always use the same streets. That's also a constraint you can use when you build the system, and say, "Okay, I don't have to train it for other use cases than that one." These kinds of possibilities are there.
Prasanna: Before we get to autonomy, I presume that from a technology perspective, there are a lot of other things that need to come first as building blocks before autonomy really exists. Can you talk about what things are happening now that are going to enable autonomous vehicles in the future?
Joerg: Yes. Basically, when we go back from the future along a bit of the historic timeline, looking at the '80s and early '90s, or even the late '70s, that's when we moved from pure mechanical systems to electronic-based systems. Fuel injection was one of the first things that came up, and here the question always came up: how can we ensure that this is reliable, that this is safe, and so on?
One remarkable point, I think, was in the '90s when we switched from the mechanical accelerator pedal to the electronic accelerator pedal, because at that point in time, there was no way that you could mechanically limit the power of the vehicle. That means in the worst case, the vehicle could apply full power to the wheels and, depending on how much power this is, the driver had a more or less good chance to cope with that situation. Before, you had a mechanical safety system where you could say: if you take your foot off the pedal, then the spring load will pull the throttle closed, and then the power will be gone.
This had then been replaced by purely electronic means, and these electronic means enabled a lot of innovation. From that point on, we could build a new generation of combustion engines because we now had more control over the parameters and so on. I think this is a very interesting element to understand, coming from the history: we build step by step, on puzzle pieces. Building on this safe method to control the power enabled us, for instance, to develop the cruise control in a new way. From the cruise control, adding radar, we could come to adaptive cruise control.
So everything builds on previous steps, and I think this is what will continue for autonomous vehicles: we have building blocks that we pile upon each other to finally come to the solution of the autonomous vehicle. This already works quite well for certain constrained use cases, and it's a continuous hunt in the future to improve and enhance these use cases and get more and more into the fully autonomous picture.
Prasanna: That's interesting, that we are essentially standing on the shoulders of giants: as we develop newer technology, there are a lot of things from the past that we are building on top of, and a lot of incremental innovation is happening. You talked about electronic control over the power that a vehicle generates, and how getting that control through electronic means is one of the building blocks for autonomous vehicles. But today there are drive-by-wire cars; actually not just today, probably for the last decade or more, these cars have been completely controlled by wire. Do you see that as a prerequisite? In the sense that the drive-by-wire cars today are not the cheapest models out there, right? If autonomous vehicles are to become universal, then the cost and all of those things need to be driven down to make some of these technologies available to every car. Are there advances being made in that as well?
Joerg: Yes, I think that's what we see, as you mentioned, with the steering. Today, most cars still have a classical steering column, but we have an actuator sitting on top of it which can then control the steering angle. You could imagine that you could even get rid of the steering column and have a system with no steering wheel at all, just actuators directly on the wheels. That is probably one of the visions when we look at these future taxis, which have basically only passenger seats and no driver seat at all anymore. The way forward is basically that, at the point we are at, especially in the automotive world, we still have to keep the driver in the game.
That also means a certain cost is involved to have this more complex steering. Then usually it trickles its way down. Most of the OEMs [original equipment manufacturers] introduce new technology on their high-priced cars, and over time it trickles down to the smaller models. That's also because of volume effects: by producing more and more volume, you get it cheaper. That's probably the principle we see here, and it will also happen for the assistance systems. Basically, we've seen this already. When we look back at the '80s, ABS, the anti-lock braking system, was first introduced only for higher-segment cars.
Today, it's a completely obvious system that every car has, and we currently see the same with driver assistance systems. The other thing is that when we switch to, let's say, electric vehicles, things will probably get cheaper on the powertrain side, because we get a much simpler powertrain and save money there. But then we still have to invest that money, probably, in the development of these advanced assistance systems.
Prasanna: All of the stuff we talked about is inside the car. Are there things that are happening outside the vehicle which will help with getting us to an autonomous future?
Joerg: That's basically what my team is currently looking into. That's the area of these reliable distributed systems: how we can build reliable, safe systems that are composed of, probably, a vehicle and an environment. A very popular example here is called infrastructure-assisted driving. Bosch just released a system called automated valet parking, where you can just put the car at the location and then tell the parking garage, please park my car.
This is such a system. When we look into research papers, we see that there are lots of things ongoing with so-called roadside devices, basically sensors that can provide extended information to the vehicles. It's likely that for certain use cases we will also see this combination of in-vehicle technology plus external technology as an accelerating measure for autonomy.
Prasanna: Excellent. We see a lot of changes happening inside the vehicle as well as now outside the vehicle. Where's the brain? We talked about a lot of these things requiring intelligence — in your mind, where's the brain of this thing? Where does the brain sit? What does it do? What's the kind of compute that we're going to need to have these autonomous vehicles running around on the streets?
Joerg: The brain. When we say the word brain, we quickly come to the term KI [Künstliche Intelligenz, the German term for AI] or AI. It's then usually neural networks or things like that. I think this is one central element: AI technology, machine learning, is one element to get this realized. But it has a lot of different aspects, in fact.
The biggest thing, I guess, is that one question is about the functionality. What do I need as a brain to achieve a certain functionality, like driving along the lane? We are quite good at that already. The second big question that comes up is: how does the system find out that it's still doing the right thing? Is the system aware of itself, of whether it sees the right data or whether the data is corrupted, and can it then act on that?
That's the scope of what this brain has to do: it's not only being aware of doing the right thing, it's also being aware of what's going wrong, and whether it can still do the right thing even when those things are going wrong, in a very abstract way. On the technology side, we see a lot of acceleration of these deep learning topics. We run neural networks, certain types of neural networks, on general compute platforms, and we see that it's power-intensive, so we try to build special accelerators.
With these special accelerators, we can then get the power consumption down. So this is currently one of the things for this, let's say, brain development: you need to task the right technology with the functional stuff, and the right technology with the, let's say, non-functional and safety elements, and then also stay within the constraints of power and size for the whole system. Because on a vehicle that runs on batteries, you don't want to use most of the battery power for the computation; you still want to use it for the powertrain.
Designing this is a multi-dimensional topic, and very challenging. That's exactly what the industry currently faces as the main thing as we move forward to get these systems really defined. It's a continuously evolving discussion.
Prasanna: Right. If you look at the brain of these things, we talked about a lot of decisions being made inside the car, but I presume some of these decisions can be made outside the car. You'll have some parts of, for example, the planning or the mapping happening outside, and some parts locally optimizing for short-term planning, and so on.
You have computers, and one of the phrases we used earlier is distributed computing. In a lot of this decision-making, architecturally, it looks like a lot of different compute units collaborating and trying to get us to a safe outcome, rather than one big brain deciding everything. Would that be a fair assumption?
Joerg: Yes. Speaking of that brain, what we talked about so far had a pure in-car focus; in reality, you're absolutely right. It's not only what we have in the car, it's also the surroundings, and then again, the scenario and use case in which we are doing this. There are scenarios you can imagine where a vehicle can be autonomous within our constraints and you don't need much outside interaction. But the closer you get to a fully autonomous vehicle that can act completely on its own, the more there is the question: can you achieve this with only the data that the vehicle gathers itself?
It's basically a question of what information you are able to acquire. When we imagine a vehicle that has radar, LiDAR, and vision, then with these inputs you can get a certain amount of data about where things are located. But when you add the infrastructure, you can give the vehicle much more data, because the infrastructure knows much more about the environment. I think this makes it quite plausible that, when we want to reach full autonomy in all scenarios, especially in crowded urban driving, we are very likely looking at a distributed scenario where the vehicle is one element of a more complex overall system.
Prasanna: There are not only distributed computing problems; you're also talking about a deluge of data coming at these systems, not only to make decisions in real time but also through the training process. Can you talk about what that loop looks like? Let's start with the in-vehicle loops, in terms of how the vehicle reacts to all of this data coming in, and then we can zoom out to the training.
Joerg: Basically, I think we have to differentiate between those things that we can solve without the need for machine learning. There are several technologies; for example, LiDAR. If we want to know the distance to an object, LiDAR works quite well under certain conditions, and here we could even work without machine learning for simple approaches.
But when we come to vision processing, that's where it really gets interesting. In the end, the vehicle gets the data through the sensor of the camera, and then you have to run through a lot of different steps of post-processing. One thing is to identify objects in that picture, but you also have to first find out whether you can trust that picture at all.
Let's assume it's raining and you have this camera mounted behind the windshield. Now you have raindrops in front of the camera, so you get nice effects in your pictures. It's important that the camera now recognizes, "Oh, I cannot trust these pictures anymore. I'm not allowed to do the calculation on them." That's usually why it's important that you have a wiper that cleans the camera and so on.
Or take other things, like dirt coming up. The first step is really finding out whether I can trust the data that I have and whether it is really processable data. Then you go through the post-processing, where you basically analyze what's in the picture, and so on and so forth. Then you get all these different sensor data aligned with each other through sensor fusion.
Which is also quite tricky because, as all of those from computer science know, with distributed acquisition of information you must time-synchronize all these systems to be able to align all the data with each other, and then you can finally come out with a result. The minimum loop to, let's say, follow a lane would then be taking the data fusion, coming to a planning decision, and then steering the car in the right way to keep the lane.
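The time-synchronization step Joerg describes can be sketched in a few lines. This is a minimal illustration, not production fusion code: the sensor names, timestamps, and the `max_skew` tolerance are all made up for the example. It simply pairs each camera frame with the nearest-in-time radar sweep and drops pairs whose clocks disagree too much.

```python
from bisect import bisect_left

def nearest(samples, t):
    """Return the (timestamp, value) sample closest in time to t.

    samples must be sorted by timestamp.
    """
    times = [s[0] for s in samples]
    i = bisect_left(times, t)
    candidates = samples[max(0, i - 1):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))

def fuse(camera, radar, max_skew=0.05):
    """Pair each camera frame with the nearest radar sweep,
    dropping pairs whose timestamps differ by more than max_skew seconds."""
    fused = []
    for t, frame in camera:
        rt, sweep = nearest(radar, t)
        if abs(rt - t) <= max_skew:       # clocks agree closely enough
            fused.append((t, frame, sweep))
    return fused

# Hypothetical 10 Hz camera frames and slightly offset radar sweeps:
camera = [(0.00, "f0"), (0.10, "f1"), (0.20, "f2")]
radar = [(0.01, "r0"), (0.12, "r1"), (0.31, "r2")]
print(fuse(camera, radar))
```

In a real vehicle the hard part is getting those timestamps trustworthy in the first place, which is why Joerg stresses time synchronization across the distributed sensor systems.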
Especially when we talk about vision processing, you have a really long tail in the chart of possible scenarios. There are so many scenarios that could be out there. That makes it important to have development tools that enable you to continuously improve the vehicle. Then we leave the vehicle space: we have to submit data on certain situations to the cloud, for instance, to be able to analyze them further and probably retrain the networks so that they can cope with those situations. Then it comes back: we have to update the vehicle at some point with this updated information to be able to cope with the situation.
Then it's, let's say, already quite advanced. We call it the data loop development environment. That's exactly what ETAS as a company is working on: development tools that support developers in building these systems, with data acquisition, data analytics, and then continuous development and deployment into the vehicle. Let's say that's the starting point of this huge topic.
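The data loop Joerg outlines (collect edge cases in the field, upload, retrain, redeploy over the air) can be caricatured as follows. Everything here, the `Vehicle` class, `record_edge_case`, the version-bumping `retrain`, is a hypothetical stand-in; real pipelines such as the ETAS tooling he mentions are vastly more involved.

```python
class Vehicle:
    """Toy stand-in for a connected vehicle; all names here are hypothetical."""
    def __init__(self):
        self.model_version = 0
        self.edge_cases = []      # situations the current model handled poorly

    def record_edge_case(self, scenario):
        self.edge_cases.append(scenario)

    def upload(self):
        """Steps 1-2: ship flagged recordings to the cloud, clear the buffer."""
        data, self.edge_cases = self.edge_cases, []
        return data

def retrain(model_version, new_data):
    """Step 3: stand-in for retraining on the enlarged data set.
    Here 'retraining' just bumps a version number when new data arrived."""
    return model_version + 1 if new_data else model_version

def data_loop(fleet, model_version):
    """One iteration of the loop: collect, retrain, deploy over the air."""
    collected = [case for v in fleet for case in v.upload()]
    model_version = retrain(model_version, collected)
    for v in fleet:               # Step 4: OTA deployment back to the fleet
        v.model_version = model_version
    return model_version

fleet = [Vehicle(), Vehicle()]
fleet[0].record_edge_case("raindrops on the lens at dusk")
print(data_loop(fleet, model_version=0))
```

The point of the sketch is only the shape of the cycle: field data drives retraining, and the retrained model has to make it back into the vehicles for the loop to close.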
Prasanna: Absolutely. It sounds like, just thinking about all the things that need to happen, a lot of them happen in loops. You have the sensor-actuator loop; we talked about the powertrain loop between positive momentum and negative momentum in the adaptive cruise control you mentioned, which is how you would control whether to speed up or slow down. Plus the data loop and the development loop and the feedback loop and so on.
There are a lot of loops. I find that quite apt for a vehicle, where you have the tires turning all the time, to have all of these loops moving us forward in this context. One of the things you touched upon early was safety. We know that safety in this whole automotive world is absolutely critical, not just because we all want to ride in these cars and don't want to do something that risks our own lives, but also from a compliance perspective: you cannot actually sell a car unless it complies with certain regulations, ISO 26262 or others. Can you talk a little bit about what is being done now to make these autonomous vehicles safe and compliant?
Joerg: It's probably the biggest topic overall when coming to this world of automation and electronics. At the foundation, there is always the idea that you have to argue why it is safe. When you develop such a system, the idea from a regulation point of view is that to get acceptance, you have to provide an evidence chain of how you made the safety guarantees. Let's start with a very simple example from the '90s.
When you have a car with an automatic gearbox, which is also electronically controlled, then there could be a situation where the electronics malfunction and trigger the different valves in a way that locks the wheels. Then the question comes up: okay, if this can happen, what are the possible outcomes? With locked wheels, you have to differentiate whether the front wheels or the rear wheels are locking.
In the end, you come to a reaction time. You have to resolve the critical situation within a certain time span, and then you have to build the system and the evidence chain to show that, given that analysis, this is the scenario that can happen, that you have done these steps to ensure it cannot happen, and that if it happens, it's resolved within the reaction time so that you are still safe. I think for locked rear wheels, it's less than 500 milliseconds or so that you have to resolve the situation, which basically means you have to bring the gearbox into the neutral gear.
Then you see it's not only a software problem, it's a hardware-software problem, because the hardware designers also have to take it into account: the neutral state must always be reachable, even when the electronics fail, and so on. So you come to a system view. That's a very important understanding. It's not only software that makes things safe; it's always the system that must be safe.
It's hardware and software altogether that must work here, and when we go on to the autonomous vehicle, we can just continue: what about the steering angle? How can you protect against malfunctioning inputs, so that you don't suddenly get a 90-degree angle? Here too you have certain analyses that say what rate of steering is allowed and acceptable, and you can implement monitoring to ensure that the software doesn't output levels above it, or reacts when that happens and then goes into a fail-safe mode, and so on.
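The steering-rate monitoring described above might look, in a deliberately simplified form, like the sketch below. The rate limit, cycle time, and clamping strategy are illustrative assumptions, not values from any real system; a production monitor would also have to flag the fault and trigger a degraded mode rather than silently clamping.

```python
def monitor_steering(commands, max_rate_deg_s=30.0, dt=0.01):
    """Plausibility monitor for steering commands (illustrative only).

    commands: requested steering angles in degrees, one per dt-second cycle.
    Returns the angles actually forwarded to the actuator; any request that
    would exceed the allowed steering rate is rate-limited, which is one
    simple way a monitor can stop a malfunctioning planner from commanding
    a sudden 90-degree angle.
    """
    max_step = max_rate_deg_s * dt          # largest allowed change per cycle
    safe, current = [], 0.0
    for requested in commands:
        step = requested - current
        if abs(step) > max_step:            # implausible jump: clamp it
            step = max_step if step > 0 else -max_step
        current += step
        safe.append(round(current, 3))
    return safe

# A faulty 90-degree request is limited to 0.3 degrees per cycle:
print(monitor_steering([0.1, 0.2, 90.0, 90.0]))
```

This illustrates the general pattern Joerg describes: the safety argument defines what outputs are acceptable, and an independent monitor enforces that envelope regardless of what the main software computes.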
It gets interesting when you come to this fail-operational question, where you have to ensure that even when such things happen, you are still able to continue. Then we are talking about redundancy and these topics: systems have to be available multiple times. Complexity really explodes when you go into that, because from this redundancy question it follows that you need two times the computation, two times the power supply, two times the buses. You need to at least double the effort, and so on.
Those are some aspects of how this works to make it safe. I know the guys on my team who focus on safety would now have thousands more arguments and details here; I can only abstract it. But I think the main point is a very structured and systematic way to develop these systems. I'm coming myself from IT, from the software world, not being born native into the embedded world, and I know that in the IT world we are very pragmatic. We code first; we just write something, and then we see how it works. In the safety world, you always have to think, "Yes, I'm writing something now, but how do I argue why it's written that way?" If you do that at the end of the development chain, it will be very, very painful. Developing autonomous vehicles is really also about discipline and a systematic approach to writing software and developing hardware, all in the same way.
Prasanna: Yes, absolutely. One of the things I personally find very fascinating about these safety features, one I think would be very entertaining to watch but not very fun to experience, is the limp-home mode. I find it fascinating because it would be very interesting to be in a car that is just limping home. It's like when you are out at a pub and want to come back: as a driver, you're essentially just trying to limp home. But there are situations where the car itself is trying to limp home.
I find that very interesting: even if you have to limp home, you still have to get home. There are a lot of things the vehicle still needs to do, but it's obviously in a degraded state for some reason. When you talk about fault tolerance and redundancy and all of these things, at some point the vehicle needs to continue to operate at some level of not only safety but functionality, even when all kinds of things have gone wrong. Maybe somebody has hit your car, maybe the road conditions are unsafe, whatever other reasons there may be. Would you say that is a harder thing to do than when everything is working fine, or is it, as a fallback mechanism, actually simpler?
Joerg: Basically, the idea of limp home is always an idea of degradation: you don't have the full system functionality. We even have that today on combustion engines. If things go wrong, they go into a power-degraded mode where you can barely drive 50 kilometers per hour, and then you come home. For autonomous systems, I think it's also quite complex because, on one hand, you have to decide at what point it's time for the limp-home mode, and there are multiple cases.
You mentioned there could be a crash. It could be a failure in the system itself, it could be a loss of redundancy, and so on, and then you have to analyze the situation that you are in. Let's say you're on a motorway; then the limp home could also be an emergency stop on the emergency lane, so that the vehicle is able to go to the emergency lane and stop there with the warning lights on, and things like this.
The other question is: what happens when you are in a really complex situation? Let's say you are at a crossing and your car has been hit and probably rotated. Now the system has to find itself in a completely new situation and find a safe way out. Here it's about the question: okay, what is the safe scenario that you are heading for? Probably after a crash you won't drive that much anymore; you probably only go to a location where you are safer.
I think there are a lot of design questions still open here on how this can be solved. Let's say we are also learning from vehicles in the field that provide data, that have crashes, where we can see how a crash behaves and what happens. Then it's also a lot about probability: what are probable scenarios that may happen very often, and what are improbable scenarios, very unlikely ones that happen only one in a thousand, one in a hundred thousand, one in a million times? Then you put the right effort into the right topics; for what happens most often, you have to have the solution.
Prasanna: We talked about several different types of things within autonomy: staying in the same lane, adaptive cruise control, driving across the US from east to west, parking. We talked about the limp-home mode and everything that happens there, and about driving autonomously in a complex situation you've never encountered before. Can you help put all that into a framework? Because all of these are not equal, are there ways to classify them? Does the industry have a standard way to classify these things and say, this is what we mean by autonomy at this level?
Joerg: Basically, the SAE has defined these levels of autonomy, which have become the de facto standard, I think, around the world for the classification model. We have levels zero to five to classify the different types of systems. Level zero means basically you have no automation of the driving at all; you may have warning systems like emergency braking or lane departure warning, things like that. Starting with level one, you have real driver assistance, where the driver gets assistance from a system, but it's clearly defined: the driver is driving the car, not the computer. Then it goes up to level two, where you already have partial driving automation. That means you could have an adaptive cruise control plus a lane-keep assistant.
If you run those at the same time on a motorway, you can basically take your feet off the pedals and your hands off the steering wheel, and the car will seem to drive autonomously. Basically, it's just keeping the lane and adjusting the power so that you keep the distance to the car in front. It will not be able to cope with certain situations, and at that point you as the driver are still fully responsible.
That's usually why the [unclear] ensure that if you take your hands off the steering wheel for too long, they will give you signals, and then they will also leave this partial driving automation mode, because you are still the driver. Up to level two, the definition says there's a driver and the driver is driving the car. Then, starting with level three, we enter the world of autonomous driving, or let's say automated driving. It's not fully autonomous.
It has driving automation, where at level three the vehicle can still give control back to you within a certain time span; I think it's around 10 seconds or so. The vehicle has to tell you, "I'm not able to continue, please take over again as the driver," and then it hands control back to you. That means the system at that point must be able to find out when it can no longer fulfill its purpose and then actively hand control back to the driver.
Starting with SAE level four, the vehicle must basically cope with situations within certain constraints. You could say, "It only works on motorways and nothing else," and then you could fully automate its runs on motorways. When we go to level five, it must be able to cope with all situations. That means up to level three, the driver must always be ready to take over; starting with level four, that changes.
From level four on, we really talk about systems where you could do other things and don't have to take control. With level five, we are at the robot-taxi level, where you just sit in the car and the car brings you from A to B and you don't have to interfere with the vehicle at all. This is the standard in the industry that we use to classify the level of autonomy a vehicle has, and into which most of the vehicles on the market today are classified.
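As a compact summary, the SAE levels Joerg walks through can be written down as a lookup table. The one-line descriptions are paraphrases of his explanation, not the official SAE J3016 wording.

```python
# Simplified paraphrase of the SAE driving-automation levels as described above.
SAE_LEVELS = {
    0: "No driving automation; warning systems only (e.g. lane departure warning)",
    1: "Driver assistance: one function assisted, the driver drives",
    2: "Partial automation: e.g. ACC plus lane keeping, driver fully responsible",
    3: "Conditional automation: system drives, driver must take over on request",
    4: "High automation: no driver needed within constraints (e.g. motorways only)",
    5: "Full automation: robot-taxi level, no human intervention in any scenario",
}

def driver_must_supervise(level: int) -> bool:
    """Up to level 2 the human is driving; from level 3 on, the system drives
    (though at level 3 the driver must still be ready to take back control)."""
    return level <= 2

for level, description in SAE_LEVELS.items():
    print(level, description)
```

The split that `driver_must_supervise` encodes, levels 0 to 2 versus 3 to 5, is exactly the boundary Joerg emphasizes: below it there is always a driver driving the car.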
Prasanna: Where are we today? What level do you see most of the cars on the road today at, or at least the most advanced cars on the road today?
Joerg: I think the most advanced cars are between level two and level three. As general-purpose passenger cars. There may be higher levels with, with bigger constraints. There's automatic robot taxi services but they are geo constrained. They are only working in certain areas. That's then higher level. It's with the constraint and it's only for a certain purpose, it's then a robot taxi. When you ask me for passenger cars that you can buy today as a driver, then we are between level two and level three. I think there's not so many certified level 3 systems available that are really certified. I see a lot of level two systems that try to have all the features that a level three system has, but they are not yet fully certified. My car, for instance, has a lot of the level three features, but it can help me back the control within less than a second. It does not tell me, "Oh, could you please in the next 10 seconds, take over the control." It just says, "Now you need to take back control." That makes it a level two car.
Even if I can drive for certain distances without intervening at all. When I go home from work, it drives a lot on its own, but in certain conditions you really see that the system is not capable of handling all situations, and then it really tells you, "Now I don't know how to handle this situation," and it hands back control to the driver.
Prasanna: Well, yes. Being inside the industry, you're in the middle of making all of these things. Which level do you think is going to be the hardest to get to? Which jump do you think will be the hardest: two to three, three to four, or four to five?
Joerg: I think it's currently the two to three jump, which is ongoing, and in the last five years I think we saw a lot of progress. I think that's really the big jump because, on the one hand, there's the understanding of how to build these systems. It's, let's say, possible to build such a system when you get enough compute power. You take racks of computers, and then you say, now I have a level three system.
Then you have to integrate it into your car. Then you find out, oh, all these racks are not fitting into my car. That's a problem. That means that technologically we have probably already reached some things, but then the integration process into the vehicle also has to be fulfilled. Then you have to certify the vehicle at the end. You cannot tell the certification authority, "Look, that's the vehicle, and look, that's the rack. Please certify the rack for me, and trust me, I'll integrate it into the vehicle later and it will look exactly the same." That does not work that way. I think that's where we currently are with this jump. That's probably the most interesting time currently, to see this evolution ongoing.
Prasanna: I love that picture. Where we are today, there is a car, and then there is a big rack of compute sitting outside that can make the car go autonomous. Now the problem we have is how do we take this rack and fit it inside this car, [laughs] and still get it to do its things and get certified. As we move into the future, we described the two to three jump as being where we are. What needs to happen between where we are now and the future? Are there some meta themes that, in your mind, are the things that we as a community need to do going forward?
Joerg: Yes, the big question that I always ask myself is which use cases are the most important ones currently? Being able to buy and own an autonomous vehicle, that's one aspect. I see a lot of potential, for instance, in the logistics area. You probably read the news: we have a shortage of truck drivers. It's one of the things where I say, okay, one option is we get more truck drivers; the other could be we get autonomous trucks faster, so that we could automate that part of the logistics chain.
The question is really which topics are the interesting ones that we need to address first? Very interesting, I think, is also the agricultural area, when you have large fields that you have to process. Here we already saw a lot of progress over the last years, because the safety is not as complicated when you have quite slow-driving systems. Agricultural automation, I think, is also a very important use case from a community point of view.
I think that's the question when we call for self-driving passenger cars. When we look here in Germany, for instance, we have this demographic change, so people are getting older and older and we have fewer young people. Then it's probably an interesting question: would it help older people to have, let's say, a more autonomous life, because they are still able to get from A to B without outside help, because they can just use an automated system, a robot taxi or so. I have to be careful, because all the taxi drivers in the world would probably not like the idea that they are not needed anymore.
You see, that's from my point of view the interesting question for the community: what are the real use cases in terms of benefit, and, when we look from a commercial point of view, which are the use cases where people are willing to invest because they see the return on investment at the end? Yes, I think that's, from my point of view, the big question around autonomous technology at all. What does it help? Where does it help? And who wants to implement it?
Prasanna: Great. Thank you so much for taking the time. With that, we come to the end of episode one of the Architecture of Future Technology podcast. I'll leave the audience with a wonderful image that you presented to us: if you're driving an autonomous car today, there's probably a big rack sitting outside. Now the challenge is how do you fit that rack inside your vehicle? Your answer suggests the initial stages of this: maybe you should be driving a big truck or agricultural equipment, in which you do have the space to fit the compute. I really like that image, and the immediate next steps seem feasible. Thank you, Joerg.
Joerg: Thanks for having me.
[END OF AUDIO]