Editor's Note: This transcript was automatically transcribed, so mistakes are inevitable. You can contribute by proofreading the transcript or highlighting the mistakes. Sign up to be amongst the first contributors.
The following is a conversation with Chris Urmson. He was the CTO of the Google self-driving car team, a key engineer and leader behind the Carnegie Mellon University autonomous vehicle entries in the DARPA Grand Challenges, and the winner of the DARPA Urban Challenge. Today, he's the CEO of Aurora Innovation, an autonomous vehicle software company he started with Sterling Anderson, who was the former director of Tesla Autopilot, and Drew Bagnell, Uber's former autonomy and perception lead. Chris is one of the top roboticists and autonomous vehicle experts in the world and a longtime voice of reason in a space that is shrouded in both mystery and hype.
He both acknowledges the incredible challenges involved in solving the problem of autonomous driving and is working hard to solve it. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter, @lexfridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Chris Urmson. You were part of both the Grand Challenge and the Urban Challenge teams at CMU with Red Whittaker.
What technical or philosophical things have you learned from these races?
I think that the high order bit was that it could be done. I think that was the thing that was incredible about the first of the grand challenges. I remember, you know, I was a grad student at Carnegie Mellon, and there was kind of this dichotomy:
it seemed really hard, so it would be cool and interesting. But, you know, at the time, we were the only robotics institute around, and so, you know, if we went into it and fell on our faces, that would be embarrassing. So I think, you know, just having the will to go do it, to try to do this thing that at the time was marked as, you know, darn near impossible, and then, after a couple of tries, being able to actually make it happen, I think that was really exciting.
But at which point did you believe it was possible? Did you believe from the very beginning? Did you personally? Because you were one of the lead engineers, you actually had to do a lot of the work.
Yeah, I was the technical director there and did a lot of the work, along with a bunch of other really good people. Did I believe it could be done? Yes, of course. Why would you go do something you thought was completely impossible? We thought it was going to be hard. We didn't know how we were going to be able to do it. We didn't know if we'd be able to do it the first time.
Turns out we couldn't. Yeah, I guess there's a certain benefit to naivete, right? If you don't know how hard something really is, you try different things, and, you know, that gives you an opportunity that others who are wiser maybe don't have.
What were the biggest pain points? Mechanical? Sensors? Hardware, software, algorithms for mapping, localization, general perception, control? Was it the hardware or the software?
First of all, I think that's the joy of this field: it's all hard, and you have to be good at each part of it.
So for the first of the grand challenges, if I look back at it from today, it should be easy today. It was a static world; there weren't other actors moving through it, is what that means. It was out in the desert, so you get really good GPS. So that, you know, we could map it roughly. And so, in retrospect, it's within the realm of things we could do back then.
Just actually getting the vehicle, you know, there's a bunch of engineering work to get the vehicle so that we could control and drive it. That's, you know, still a pain today, but it was even more so back then. And then the uncertainty of exactly what they wanted us to do was part of the challenge as well.
You didn't actually know the track heading in? You knew approximately, but you didn't actually know the route that was going to be taken?
That's right. We didn't know the route. We didn't even really know, the way the rules had been described, you had to kind of guess.
So if you think back to that challenge, the idea was that the government would give us a set of waypoints, and kind of the width that you had to stay within of the line that went between each of those waypoints. And so the most obvious thing they could have done is set a kilometer-wide corridor across a field of scrub brush and rocks and said, you know, go figure it out. Fortunately, it turned into basically driving along a set of trails, which is much more relevant to the application they were looking for.
But no, it was a hell of a thing back in the day.
So the legend, Red, was kind of leading that effort, just broadly speaking. And you're a leader now. What have you learned from Red about leadership? I think there's a couple of things.
One is, you go and try those really hard things; that's where there is incredible opportunity.
I think the other big one, though, is to see people for who they can be, not who they are.
One of the deepest lessons I learned from Red was that he would look at undergraduates or graduate students and empower them to be leaders, to, you know, have responsibility, to do great things, where I think another person might look at them and think, oh, that's just a graduate student, what could they know? And so that kind of trust, but verify, that confidence in what people can become, I think is a really powerful thing.
So let's fast forward through the history. Can you maybe talk through the technical evolution of autonomous vehicle systems, from the first two grand challenges to the urban challenge to today? Are there major shifts in your mind, or is it the same kind of technology, just made more robust? I think there have been some big steps. So for the grand challenge, the real technology that unlocked that was
HD mapping. Prior to that, a lot of the off-road robotics work had been done without any real prior model of what the vehicle was going to encounter. And so that innovation, the fact that we could get, you know, decimeter-resolution models, was really a big deal. That allowed us to kind of bound the complexity of the driving problem the vehicle had, and allowed it to operate at speed, because we could assume things about the environment that it was going to encounter.
So that was the big step there.
For the urban challenge, you know, one of the big technological innovations there was the multi-beam lidar, and being able to generate high-resolution, mid- to long-range 3D models of the world, and use that for, you know, understanding the world around the vehicle. That was really kind of a game-changing technology. In parallel with that, we saw a bunch of other technologies that had been kind of converging have their day in the sun. So Bayesian estimation had been, you know, a big field
in robotics. You would go to a conference a couple of years before that, and every paper would effectively have SLAM somewhere in it. And so seeing those Bayesian estimation techniques, you know, play out on a very visible stage, I thought that was pretty exciting to see. And most of it was done based on lidar at that time? Well, yeah, and in fact, we weren't really doing SLAM per se, you know, in real time, because we had a model ahead of time.
We had a road map, but we were doing localization, and we were using the lidar or the cameras, depending on who exactly was doing it, to localize to a model of the world. I thought that was a big step from kind of naively trusting GPS and INS before that.
And again, lots of work had been going on in this field. Certainly this was not doing anything particularly innovative in SLAM or in localization, but seeing that technology necessary in a real application, on a big stage, I thought was very cool.
So for the urban challenge, those were maps constructed offline? Yes, in general. OK. And did the individual teams do that individually, so they had their own different approaches there, or did everybody kind of share that information, at least intuitively? So DARPA gave all the teams a model of the world, a, you know, a map. And then, you know, one of the things that we had to figure out back then, and it's still one of these things that trips people up today, is actually the coordinate system.
So you get a latitude and longitude, and, you know, to so many decimal places, you don't really care about kind of the ellipsoid of the Earth that's being used.
But when you want to get to 10-centimeter resolution, you care whether the coordinate system is NAD83 or WGS84, or, you know, these are different ways to describe both the kind of non-sphericalness of the Earth, but also, in one of them, I can't remember which, the tectonic shifts that are happening, and how to transform the global data as a function of that. So you're getting a map and then actually matching it to reality, to centimeter resolution.
That was kind of interesting and fun back then.
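To make the scale of the datum problem concrete, here's a rough back-of-the-envelope sketch (the constant below is a standard approximation, not a figure from the conversation): even a microdegree of disagreement between datums is on the order of the mapping resolution being discussed.

```python
# Why the geodetic datum (NAD83 vs WGS84) matters at 10 cm resolution.
# One degree of latitude spans roughly 111.32 km on the Earth's surface.

METERS_PER_DEGREE_LAT = 111_320.0  # approximate; varies slightly with latitude

def degrees_to_meters(delta_deg: float) -> float:
    """Convert a small latitude difference in degrees to meters."""
    return delta_deg * METERS_PER_DEGREE_LAT

# A one-microdegree discrepancy between two interpretations of "the same"
# coordinate is already ~11 cm, larger than the 10 cm resolution mentioned.
print(round(degrees_to_meters(1e-6), 3))  # ~0.111 m
```

So coordinates that agree to five decimal places of a degree can still disagree by a meter, which is why the datum choice stops being academic at this resolution.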
So how much work was the perception doing there? How much were you relying on localization based on maps, without using perception to register to the maps? And, I guess, how advanced was perception at that point?
Yeah, it's certainly behind where we are today. We're more than a decade since the urban challenge, but the core of it was there: we were tracking vehicles. We had to do that at 100-plus-meter range, because we had to merge with other traffic. We were using, again, Bayesian estimates for the state of these vehicles. We had to deal with a bunch of the problems that you think of today, like predicting where that vehicle is going to be a few seconds into the future.
We had to deal with the fact that there were multiple hypotheses for that, because a vehicle at an intersection might be going right, or it might be going straight, or it might be making a left turn. And we had to deal with the challenge of the fact that our behavior was going to impact the behavior of that other operator. And, you know, we did a lot of that in relatively naive ways, but it still had to have some kind of solution.
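The multiple-hypothesis reasoning described here can be sketched as a simple discrete Bayesian update over maneuver hypotheses. This is a toy illustration under assumed observation likelihoods, not the CMU team's actual implementation:

```python
# Maintaining multiple hypotheses about another vehicle's maneuver at an
# intersection: a discrete Bayesian update over {left, straight, right}.

def bayes_update(prior: dict, likelihood: dict) -> dict:
    """One Bayesian update step: posterior is proportional to prior * likelihood."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Start with no information about the other driver's intent.
belief = {"left": 1 / 3, "straight": 1 / 3, "right": 1 / 3}

# Hypothetical observation model: the vehicle drifts toward the right lane
# edge, which is most likely if it intends to turn right.
obs_likelihood = {"left": 0.1, "straight": 0.3, "right": 0.6}

belief = bayes_update(belief, obs_likelihood)
print(max(belief, key=belief.get))  # "right" becomes the leading hypothesis
```

A real tracker would also propagate each hypothesis forward with a motion model, but the core bookkeeping, a normalized belief over discrete intents, looks like this.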
And so where does that take us today, ten years later? From that artificial city construction to real cities, to the urban environment? Yeah, I think the biggest thing is that, you know, the actors are truly unpredictable.
Most of the time, you know, the drivers on the road, the other road users, are out there behaving well, but every once in a while they're not. And there's the variety of other vehicles: you have to deal with all of them, in terms of behavior, in terms of perception, or both. Back then, we didn't have to deal with cyclists, we didn't have to deal with pedestrians, didn't have to deal with traffic lights.
You know, the scale over which you have to operate is now much larger than the area that we were thinking about back then.
So, the question: what do you think is the hardest part about driving? An easy question.
Yeah, no, I'm joking. I'm sure nothing really jumps out at you as one thing. But in the jump from the urban challenge to the real world, is there something in particular that you foresee as a very serious, difficult challenge?
I think the most fundamental difference is that we're doing it for real. In that environment, it was a limited-complexity environment, because certain actors weren't there, because the roads were maintained, there were barriers keeping people separate from robots at the time, and it only had to work for 60 miles. Which, looking at it from 2006, it had to work for 60 miles! Looking at it from now, you know, we want things that will go and drive for half a million miles.
And, you know, it's just a different game.
So, how important: you said lidar came into the game early on, and it's really the primary driver of autonomous vehicles today as a sensor. So how important is the role of lidar, in the sense of safety, in the near term?
So I think it's essential. You know, I believe that, but I also believe that cameras are essential, and I believe that radar is essential. I think that you really need to use the composition of data from these different sensors if you want the thing to really be robust.
The question I want to ask, let's see if we can untangle it: what are your thoughts on the Elon Musk provocative statement that lidar is a crutch, that it's a kind of, I guess, growing pain, and that much of the perception task can be done with cameras?
So I think it is undeniable that people walk around without lasers in their foreheads, and they can get into vehicles and drive them. And so there's an existence proof that you can drive using, you know, passive vision. No doubt, can't argue with that. In terms of sensors, yeah. So there are different sensors, right? It's an example that we all go do, many of us, every day.
In terms of lidar being a crutch: sure. But, you know, in the same way that the combustion engine was a crutch on the path to an electric vehicle, in the same way that, you know, any technology ultimately gets replaced by some superior technology in the future. And really, the way that I look at this is that the way we get around on the ground, the way that we use transportation, is broken, and that we have this, you know, what was the number I saw this morning, 37,000 Americans killed last year on our roads.
And that's just not acceptable. And so any technology that we can bring to bear that accelerates this self-driving technology coming to market and saving lives is technology we should be using. It feels just arbitrary to say, well, you know, I'm not OK with using lasers because that's whatever, but I am OK with using an eight-megapixel camera or a 16-megapixel camera. These are just bits of technology, and we should be taking the best technology from the toolbox that allows us to go
and solve a problem. The question, I often talk to, and obviously you do as well, sort of automotive companies, and if there's one word that comes up more often than anything, it's cost, and driving costs down. So while it's true that it's a tragic number, the 37,000, the question is, and I'm not the one asking this question, I hate this question, but we want to find the cheapest sensor suite that creates a safe vehicle, in that uncomfortable trade-off.
Do you foresee lidar coming down in cost in the future, or do you see a day where level four autonomy is possible without lidar?
I see both of those, but it's really a matter of time. And I think, really, maybe I would talk to the question you asked about the cheapest sensor.
I don't think that's actually what you want. What you want is a sensor suite that is economically viable. And then after that, everything is about margin and driving cost out of the system. What you also want is a suite that works.
And so it's great to tell a story about how it'd be better to have a self-driving system with a fifty-dollar sensor instead of a five-hundred-dollar sensor.
But if the five-hundred-dollar sensor makes it work and the fifty-dollar sensor doesn't work, you know, who cares?
So long as you can actually, you know, have an economic opportunity there. And the economic opportunity is important, because that's how you actually have a sustainable business, and that's how you can actually see this come to scale and be out in the world. And so when I look at lidar, I see a technology that has no underlying fundamental expense to it. It's going to be more expensive than an imager, because CMOS processes and fab processes are dramatically more scalable than mechanical processes. But we should still be able to drive costs down substantially on that side, and then I also do think that, with the right business model, you can certainly absorb more cost on the materials.
Yeah, if the sensor works, extra value is provided; thereby, you don't need to drive costs down to zero. It's the basic economics. You've talked about your intuition that level two autonomy is problematic because of the human factor of vigilance decrement, complacency, overtrust and so on, just us being human: we trust the system, and we start partaking even more in secondary activities, like smartphone use and so on.
Have your views evolved on this point, in either direction? Can you speak to it?
So, and I want to be really careful, because sometimes this gets twisted in a way that I certainly didn't intend. Active safety systems are a really important technology that we should be pursuing and integrating into vehicles, and there's an opportunity in the near term to reduce accidents, reduce fatalities, and we should be pushing on that. Level two systems are systems where the vehicle is controlling two axes, so braking and throttle, and steering.
And I think there are variants of level two systems that are supporting the driver that absolutely we should encourage to be out there. Where I think there's a real challenge is in the human factors part around this, and the misconception from the public around the capability set that that enables and the trust that they should have in it.
And that is where, you know, I am actually incrementally more concerned around level three systems, and, you know, how exactly a level two system is marketed and delivered, and how much effort people have put into those human factors. So I still believe several things around this. One is people will over-trust the technology. We've seen, over the last few weeks, you know, a spate of people sleeping in their Teslas.
You know, I watched an episode last night of Trevor Noah talking about this, and here's a smart guy who has a lot of resources at his disposal, describing a Tesla as a self-driving car, and asking why should people be sleeping in their Teslas? Well, because it's not a self-driving car, and it is not intended to be.
And, you know, these people will almost certainly, you know, die at some point, or hurt other people. And so we need to really be thoughtful about how that technology is described and brought to market. I also think that, because of the economic challenges we were just talking about, that technology path, these level two driver assistance systems, will diverge from the technology path that we need to be on to actually deliver truly self-driving vehicles: ones where you can get in it and sleep, and have the equivalent or better safety than, you know, a human driver behind the wheel. Because, again, the economics are very different in those two worlds.
And so that leads to divergent technology.
So you don't see the economics of gradually increasing from level two, and doing so quickly enough to where it doesn't cause critical safety concerns. You believe that it needs to diverge at this point into basically two different routes.
And really that comes back to what those L2 and L1 systems are doing. They are driver assistance functions, where the people that are marketing them responsibly are being very clear and putting human factors in place, such that the driver is actually responsible for the vehicle, and the technology is there to support the driver. And the safety cases that are built around those are dependent on that driver attention and attentiveness.
And at that point, you can kind of give up, to some degree, for economic reasons, on, say, false negatives. And the way to think about this is, for a forward collision mitigation braking system: if, half the times the driver missed the vehicle in front of it, it hit the brakes and brought the vehicle to a stop, that would be an incredible advance in safety on our roads.
Right. That would be equivalent to seatbelts. But it would mean that if that vehicle wasn't being monitored, it would hit one out of two cars.
And so, economically, that's a perfectly good solution for a driver assistance system. What you should do at that point, if you can get it to work 50 percent of the time, is drive the cost out of it so you can get it on as many vehicles as possible. But driving the cost out of it doesn't drive up performance on the false negative case, and so you'll continue to not have a technology that could really be available for a self-driven vehicle.
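The asymmetry Urmson describes can be put in rough numbers. This is an illustrative toy calculation with made-up event counts, not real crash data:

```python
# A forward-collision braking system that fires on only half the events the
# driver misses still halves those crashes: a huge win as a driver-assistance
# feature, but unacceptable as the sole driver (it would hit one in two cars).

driver_missed_events = 1000  # hypothetical events where the driver fails to brake
detection_rate = 0.5         # the system catches half of them

crashes_prevented = driver_missed_events * detection_rate
crashes_remaining = driver_missed_events * (1 - detection_rate)

print(crashes_prevented, crashes_remaining)  # 500.0 500.0
```

The same 50 percent number is a triumph in one product category and a disaster in the other, which is the economic divergence being argued here.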
So clearly the communication, and this probably applies to level four vehicles as well, the marketing and communication of what the technology is actually capable of, how hard it is, how easy it is, all that kind of stuff, is highly problematic. So say everybody in the world was perfectly communicated with, and made to be completely aware of every single technology out there and what it's able to do.
What's your intuition? And now we're maybe getting into philosophical ground: is it possible to have a level two vehicle where we don't over-trust it?
I don't think so. If people truly understood the risks and internalized them, then, sure, you could do that safely. But that's a world that doesn't exist. People are going to, you know, if the facts are put in front of them, they're going to combine that with their own experience. Let's say they're using an L2 system, and they go up and down the 101 every day,
and they do that for a month, and it just worked every day for a month. That's pretty compelling at that point. You know, even if you know the statistics, you're like, well, I don't know, maybe there's something funny about those. Maybe they're driving in difficult places. I've seen it with my own eyes; it works. And the problem is the sample size that they have: it's 30 miles up and down, so 60 miles times 30 days, one thousand eight hundred miles. That's a drop in the bucket compared to the, what, 85 million miles between fatalities. And so they don't really have a true estimate, based on their personal experience, of the real risks. But they're going to trust it anyway, because it's worked for a month; what's going to change?
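The arithmetic behind that sample-size point, made explicit (using the 85-million-mile figure quoted in the conversation):

```python
# A month of commuting versus the US fatality rate per vehicle mile.

daily_miles = 30 * 2                 # 30 miles each way
month_miles = daily_miles * 30       # 1,800 miles in a month
miles_per_fatality = 85_000_000

# Expected number of fatal events in that personal sample.
expected_fatal_events = month_miles / miles_per_fatality

print(month_miles, f"{expected_fatal_events:.2e}")
# A flawless month is ~2e-05 expected events: it tells you essentially
# nothing about whether the system meets human-level safety.
```

In other words, the driver's personal evidence is four or five orders of magnitude too small to estimate the risk they actually care about, yet it dominates their trust.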
So even if you start with a perfect understanding of the system, your own experience will make it drift. I mean, that's a big concern. Over a year, over two years even; it doesn't have to be months.
And I think that as this technology moves from.
what I would say is kind of the more technology-savvy ownership group to, you know, the mass market, you may be able to have some of those folks who are really familiar with technology internalize it better, and, you know, their kind of immunization against this false risk assessment might last longer. But for folks who aren't as savvy about that, who, you know, read the material and compare that to their personal experience, I think there it's going to move more quickly.
So your work, the program that you created at Google, and now at Aurora, is focused more on the second path, of creating full autonomy.
It's such a fascinating, I think one of the most interesting AI problems of the century. I just talked to a lot of people, just regular people, I don't know, my mom, about autonomous vehicles, and you begin to grapple with the idea of giving control over your life to a machine. It's philosophically interesting; it's practically interesting. So let's talk about safety.
How do you think we demonstrate, you've spoken about metrics in the past, how do you think we demonstrate to the world that an autonomous vehicle, an Aurora system, is safe? This is one where it's difficult, because there isn't a soundbite answer. We have to show a combination of work that was done diligently and thoughtfully.
And this is where something like a functional safety process is part of that: here's the way we did the work; that means that we were very thorough. So, you know, if you believe what we said about the way we did it, then you can have some confidence that we were thorough in the engineering work we put into the system. And then, on top of that, to kind of demonstrate that we weren't just thorough, but that we were actually good at what we did,
there will be a kind of collection of evidence, in terms of demonstrating that the capabilities worked the way we thought they did, you know, statistically, and to whatever degree we can demonstrate that, in some combination of simulation, some combination of unit testing and decomposition testing, and then some part of it will be on-road data. And I think the way we will ultimately convey this to the public is, there will clearly be some conversation with the public about it, but we'll invoke kind of the trusted nodes, in that we'll spend more time being able to go into more depth with folks like NHTSA and other federal and state regulatory bodies.
And, given that they are operating in the public interest and they're trusted, if we can show enough of our work to them that they're convinced, then I think we're in a pretty good place.
That means you work with people that are essentially experts at safety, to try to discuss and show the work. And so, do you think, the answer is probably no, but just in case, do you think there exists a metric? So currently, people have been using the number of disengagements. Yeah. And it quickly turns into a marketing scheme: you sort of alter the experiments you run. I think you've spoken that you don't like it.
No. In fact, I was on the record telling the DMV that I thought this was not a great metric.
Do you think it's possible to create a metric, a number, that could demonstrate safety outside of fatalities? So I do,
and I think that it won't be just one number. So as we are internally grappling with this, and at some point we'll be able to talk more publicly about it, it's: how do we think about human performance in different tasks, say, detecting traffic lights, or safely making a left turn across traffic? What do we think the failure rates are for those different capabilities for people? And then demonstrating to ourselves, and then ultimately folks in a regulatory role, and then ultimately the public, that we have confidence that our system will work better than that.
And so these individual metrics will kind of tell a compelling story, ultimately.
I do think, at the end of the day, what we care about in terms of safety is lives saved and injuries reduced, and then ultimately, you know, kind of casualty dollars, the money people aren't having to pay to get their car fixed. And I do think that, you know, in aviation, they look at kind of an event pyramid, where, you know, a crash is at the top of that, and that's the worst event, obviously,
and then there are injuries, and, you know, near-miss events and whatnot, and, you know, violations of operating procedures.
And you kind of build a statistical model of the relevance of the low-severity events to the high-severity events. I think that's something we'd be able to look at as well, because, you know, an event per eighty-five million miles is, you know, statistically a difficult thing, even at the scale of the US, to kind of compare directly. And an event, a fatality, that's connected to an autonomous vehicle is significantly, at least currently, magnified in the amount of attention it gets.
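Why one event per 85 million miles is "statistically a difficult thing" can be sketched with a simple expected-count calculation (the fleet mileages below are assumptions for illustration, not Aurora figures):

```python
# Fatal events are so rare that even a large fleet accumulates very few of
# them, which is why the event pyramid's more frequent, lower-severity
# events are needed to build a statistical safety model.

miles_per_fatality = 85_000_000

def expected_events(fleet_miles: float) -> float:
    """Expected number of fatal-severity events over a given fleet mileage."""
    return fleet_miles / miles_per_fatality

# A fleet driving 10 million miles a year, large by AV standards, expects
# only about 0.12 fatal-severity events per year.
print(round(expected_events(10_000_000), 3))

# To expect even ~10 events (enough for a crude rate estimate), you need:
print(expected_events(850_000_000))  # 10.0 events over 850 million miles
```

That gap between the miles a fleet can drive and the miles needed for direct measurement is exactly what the pyramid of lower-severity precursors is meant to bridge.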
So that speaks to public perception. I think the most popular topic about autonomous vehicles in the public is the trolley problem formulation, which, let's not get into that too much, but it is misguided in many ways.
But it speaks to the fact that people are grappling with this idea of giving control over to a machine. So how do you win the hearts and minds of the people, that autonomy is something that could be a part of their lives? How do you let them experience it? Right.
I think it's right. I think people should be skeptical. I think people should ask questions. I think they should doubt,
because this is something new and different. They haven't touched it yet, and I think it's perfectly reasonable. But at the same time, it's clear there's an opportunity to make the world safer. It's clear that we can improve access to mobility. It's clear that we can reduce the cost of mobility. And once people try it and, you know, understand that it's safe, and are able to use it in their daily lives, I think it's one of these things that will just be obvious.
And I've seen this practically in demonstrations that I've given, where I've had people come in and, you know, they're very skeptical. They get in the vehicle. My favorite one is taking somebody out on the freeway, and we're on the 101 driving at 65 miles an hour, and after ten minutes, they kind of turn and ask, is that all it does? And you're like, yeah, it's a self-driving car; it does exactly what you thought it would do.
Right. But, you know, it becomes mundane, which is exactly what you want a technology like this to be, right? I turn the light switch on in here, and I don't think about the complexity of those electrons, you know, being pushed down a wire from wherever they were generated. I just get annoyed if it doesn't work, right? And what I value is the fact that I can do other things in this space.
I can see my colleagues, I can read stuff on paper.
I can, you know, not be afraid of the dark. And I think that's what we want this technology to be like: it's in the background, and people get to have those life experiences, and do so safely.
So putting this technology in the hands of people speaks to scale of deployment. And so, the dreaded question about the future, because nobody can predict the future, but just maybe speak poetically about it: when do you think we'll see a large-scale deployment of autonomous vehicles, 10,000, those kinds of numbers?
We'll see that within ten years. I'm pretty confident.
What's an impressive scale? At what moment, so you've done the challenge with one vehicle, at which moment does it become, wow, this is serious scale?
So I think the moment it gets serious is when we really do have a driverless vehicle operating on public roads, and we can do that kind of continuously, without a safety driver in the vehicle.
I think at that moment we've kind of crossed the zero-to-one threshold. And then it is about how we continue to scale that: how do we build the right business models, how do we build the right customer experience around it, so that it is actually a useful product out in the world.
And I think, really, at that point it moves from a, you know, mixed science and engineering project into engineering and commercialization, and really starting to deliver on the value that we all see here, and actually making that real in the world.
What do you think that deployment looks like? Where do we first see the inkling of no safety driver, one or two cars here and there? Is it on the highway? Is it in specific routes in the urban environment?
I think it's going to be urban and suburban type environments. With Aurora, when we thought about how to tackle this, it's kind of in vogue to think about trucking as opposed to urban driving. And again, the human intuition around this is that freeways are easier to drive on, because everybody's kind of going in the same direction and, you know, the lanes are wider, et cetera. And I think that intuition is pretty good, except we don't really care about most of the time; we care about all of the time.
And when you're driving on a freeway with a truck, say, at 70 miles an hour, and you've got a seventy-thousand-pound load with you, that's just an incredible amount of kinetic energy. And so when that goes wrong, it goes really wrong. And those challenges that you see occur more rarely, so you don't get to learn as quickly, and they're, you know, incrementally more difficult than urban driving, but they're not easier than urban driving.
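The kinetic energy point can be made concrete with a back-of-the-envelope calculation. The 70,000 lb load at 70 mph comes from the conversation; the 3,500 lb car at 25 mph is an illustrative assumption for the urban case, not a figure from the discussion:

```python
# Back-of-the-envelope comparison of kinetic energy (KE = 1/2 * m * v^2).
# The 70,000 lb load at 70 mph comes from the conversation; the 3,500 lb
# car at 25 mph is an illustrative assumption for the urban case.

LB_TO_KG = 0.45359237   # pounds to kilograms
MPH_TO_MS = 0.44704     # miles per hour to meters per second

def kinetic_energy_joules(mass_lb: float, speed_mph: float) -> float:
    """Kinetic energy in joules, converting from pounds and mph to SI units."""
    mass_kg = mass_lb * LB_TO_KG
    speed_ms = speed_mph * MPH_TO_MS
    return 0.5 * mass_kg * speed_ms ** 2

truck = kinetic_energy_joules(70_000, 70)  # loaded truck on the freeway
car = kinetic_energy_joules(3_500, 25)     # typical car at urban speed

print(f"truck: {truck / 1e6:.1f} MJ")      # roughly 15.5 MJ
print(f"car:   {car / 1e3:.1f} kJ")        # roughly 99 kJ
print(f"ratio: {truck / car:.0f}x")        # about 157x
```

Under these assumptions the loaded truck carries on the order of 150 times the kinetic energy of the car, which is the sense in which "when that goes wrong, it goes really wrong."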
And so I think this happens in moderate-speed urban environments, because there, you know, if two vehicles crash at 25 miles per hour, it's not good, but probably everybody walks away.
And those events where there's the possibility for that occurring happen frequently, so we get to learn more rapidly. We get to do that with lower risk for everyone, and then we can deliver value to people that need to get from one place to another. And then once we've got that solved, the freeway-driving part of this just falls out, but we were able to learn more safely, more quickly in the urban environment.
So ten years, and then scale, 20, 30 years. I mean, who knows? If a sufficiently compelling experience is created, it can be faster.
And so do you think there could be breakthroughs, and what kind of breakthroughs might there be that completely change that timeline? Again, not only am I asking you to predict the future, I'm asking you to predict breakthroughs that haven't happened yet.
So I think another way to ask that would be: if I could wave a magic wand, what part of the system would I make work today, to accelerate it as quickly as possible?
Don't say infrastructure, please don't say infrastructure. No, it's definitely not infrastructure. It's really that perception, forecasting capability. So if tomorrow you could give me a perfect model of what's happened, what is happening, and what will happen for the next five seconds around the vehicle, on the roadway, that would accelerate things pretty dramatically. When you're staying up at night, are you mostly bothered by cars, pedestrians, or cyclists?
So I worry most about the vulnerable road users, the cyclists and pedestrians, because they're not in armor.
You know, with the cars, they're bigger, they've got protection for the people, and so the ultimate risk is lower there. Whereas a pedestrian or cyclist, they're out in the road, they don't have any protection, and so we need to pay extra attention to that.
Do you think about the very difficult technical challenge of the fact that pedestrians, if you try to protect pedestrians by being careful and slow, they'll take advantage of that? So the game-theoretic dance. Yeah.
Does that worry you, how from a technical perspective we solve that? Because as humans, the way we solve that is to kind of nudge our way through the pedestrians, which doesn't feel, from a technical perspective, like an appropriate algorithm. What do you think about how we solve that problem? Yeah, I think there are actually two different concepts there.
So one is, am I worried that because these vehicles are self-driving, people will kind of step out in the road and take advantage of them?
And I've heard this, and I don't really believe it, because if I'm driving down the road and somebody steps in front of me, I'm going to stop.
Right? Like, even if I'm annoyed, I'm not going to just drive through a person still in the road. And so I think today people can take advantage of this, and you do see some people do it. I guess there's an incremental risk, because maybe they have lower confidence that I'm going to see them than they might have for an automated vehicle, and so maybe that shifts it a little bit. But I think people don't want to get hit by cars.
And so I think that I'm not that worried about people walking out into the road and creating chaos more than they would today.
Regarding kind of nudging through a big stream of pedestrians leaving a concert or something, I think that is further down the technology pipeline. And I think you're right, that's tricky.
I don't think it's necessarily... I think the algorithm people use for this is pretty simple, right? It's kind of just move forward slowly, and if somebody gets really close, stop. And I think that can probably be replicated pretty easily. And particularly given that you don't do this at 30 miles an hour, you do it at one, even in those situations the risk is relatively minimal. But, you know, it's not something we're thinking about in any serious way.
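The simple crowd-nudging rule described here, creep forward slowly and stop when somebody gets really close, can be sketched as a tiny control function. The speed and distance threshold below are made-up illustrative values, not anything from a real driving stack:

```python
# A toy sketch of the "nudge through a crowd" rule described above:
# creep forward at walking pace, and stop whenever the nearest
# pedestrian is really close. All numbers are illustrative assumptions.

CREEP_SPEED_MPH = 1.0  # "you don't do this at 30 miles an hour, you do it at one"
STOP_DISTANCE_M = 1.5  # hypothetical stopping threshold

def nudge_speed(nearest_pedestrian_m: float) -> float:
    """Commanded speed in mph: creep slowly, stop if somebody is really close."""
    if nearest_pedestrian_m < STOP_DISTANCE_M:
        return 0.0
    return CREEP_SPEED_MPH

print(nudge_speed(0.8))  # pedestrian close -> 0.0 (stop)
print(nudge_speed(4.0))  # path clear -> 1.0 (creep forward)
```

At one mile an hour the stopping distance is tiny, which is why the risk in this regime is relatively minimal even with such a crude rule.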
And probably that's less an algorithm problem and more about creating a human experience, the HCI people that create a visual display so that as a pedestrian you're pleasantly nudged out of the way. Yes, that's... yeah, that's an experience problem, not an algorithm problem.
Who's the main competitor to Aurora today, and how do you outcompete them in the long run?
So we really focus a lot on what we're doing here. I think that, you know, I've said this a few times: this is a huge, difficult problem, and it's great that a bunch of companies are tackling it, because I think it's so important for society that somebody gets there. So, you know, we don't spend a whole lot of time thinking tactically about who's out there and how do we beat that person individually. What are we trying to do to go faster, ultimately?
Well, part of it is that the leadership team we have has got pretty tremendous experience, and so we kind of understand the landscape and understand where the cul-de-sacs are to some degree, and we try and avoid those. I think part of it is just this great team we've built. This is a technology and a company that people believe in the mission of, and so it allows us to attract just awesome people to come work on it. We've got a culture, I think, that people appreciate, that allows them to focus, allows them to really spend time solving problems.
And I think that keeps them energized. And then we've invested heavily in the infrastructure and architectures that we think will ultimately accelerate us. So because of the folks we were able to bring in early on, because of the great investors we have, you know, we don't spend all of our time doing demos and kind of leaping from one demo to the next. We've been given the freedom to invest in infrastructure to do machine learning, infrastructure to pull data from our on-road testing, infrastructure to use that to accelerate engineering. And I think that early investment and continuing investment in those kinds of tools will ultimately allow us to accelerate and do something pretty incredible across the U.S. Well, it's a good place to end.
Thank you so much for talking today. Thank you very much, really enjoyed it.