[00:00:00]

The following is a conversation with George Hotz. He's the founder of Comma.ai, a machine-learning-based vehicle automation company. He is most certainly an outspoken personality in the field of AI and technology in general. He first gained recognition for being the first person to carrier-unlock an iPhone. Since then, he's done quite a few interesting things at the intersection of hardware and software. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter,

[00:00:34]

@lexfridman, spelled F-R-I-D-M-A-N. And I'd like to give a special thank you to Jennifer from Canada for her support of the podcast on Patreon. Merci beaucoup, Jennifer. She's been a friend and an engineering colleague for many years, since I was in grad school. Your support means a lot and inspires me to keep this series going. And now, here's my conversation with George Hotz.

[00:01:19]

Do you think we're living in a simulation? Yes, but it may be unfalsifiable. What do you mean by unfalsifiable? So if the simulation is designed in such a way that they did, like, a formal proof to show that no information can get in and out, and if their hardware is designed such that nothing inside the simulation can ever push the hardware out of spec, it may be impossible to prove whether we're in a simulation or not.

[00:01:49]

So they've designed it such that it's a closed system. You can't get outside the system.

[00:01:53]

Well, maybe it's one of three worlds: we're in a simulation which can be exploited, we're in a simulation which not only can't be exploited but can't even be detected. The same thing is true about VMs: with a really well-designed VM, you can't even detect if you're in a VM or not. That's brilliant.

[00:02:09]

So we're... yeah. So the simulation is running on a virtual machine. But in reality, all VMs have ways to detect that you're in one.

[00:02:15]

That's the point. I mean, you've done quite a bit of hacking yourself, and so you should know that really any complicated system will have ways in and out. So this isn't necessarily true going forward. In my time away from Comma, I learned Coq. It's a dependently typed language for writing math proofs. And if you write code that compiles in a language like that, it is correct by definition: the type checker checks its correctness.

[00:02:50]

So it's possible that the simulation is written in a language like this, in which case... Yeah. But can a language like that be sufficiently expressive? It can be. Yeah, OK. Well, so, all right.

[00:03:05]

So the simulation doesn't have to be Turing complete if it has a scheduled end date. Looks like it does, actually, with entropy.

[00:03:11]

I mean, I don't think that a simulation that results in something as complicated as the universe would have a formal proof of correctness.

[00:03:23]

Right. It is possible.

[00:03:25]

Of course, we have no idea how good their tooling is, and we have no idea how complicated the universe computer really is. It may be quite simple. It's just very large, right?

[00:03:36]

It's definitely very large, but the fundamental rules might be super simple.

[00:03:41]

Yeah, Conway's Game of Life kind of stuff. Right.

[00:03:44]

So if you could hack... imagine a simulation that is hackable. If you could hack it, what would you change? How would you approach hacking a simulation? The reason I gave that talk... By the way, I'm not familiar with the talk; I just read that you talked about escaping the simulation or something like that. So maybe you can tell me a little bit about the theme and the message there, too.

[00:04:11]

It wasn't a very practical talk about how to actually escape a simulation. It was more about a way of restructuring an us-versus-them narrative.

[00:04:24]

If we continue on the path we're going with technology,

[00:04:28]

I think we're in big trouble, as a species, and not just as a species, but even as me, an individual member of the species.

[00:04:36]

So if we could change the rhetoric to be more like, think upwards, like, think about how we're in a simulation and how we could get out, we'd already be on the right path. What you'd actually do once you do that? Well, I assume I would have acquired way more intelligence in the process of doing it, so I'll just ask that.

[00:04:56]

So the thinking upwards, what kind of ideas, what kind of breakthrough ideas do you think thinking in that way could inspire?

[00:05:04]

And what did you say, upwards? Upwards into space? Are you thinking sort of exploration in all forms? The space narrative that held for the modernist generation doesn't hold as well for the postmodern generation. What's the space in there? Are we talking about the same space, dimensional space? Like literally going up, like Elon Musk: we're going to build rockets, we're going to go to Mars, we're going to colonize the universe.

[00:05:29]

And the narrative you're referring to... I was born in the Soviet Union; you're referring to the race to space. The race to space, space exploration. That was a great modernist narrative. Yeah. It doesn't seem to hold the same weight in today's culture. I'm hoping for good postmodern narratives that replace it. So let's think. You work a lot with AI, so there is one formulation of that narrative that could be... I don't know how much you do with VR and AR.

[00:05:58]

Yeah, that's another one I know less about, but every time I play with it, the research is fascinating, the virtual world. Are you interested in the virtual world? I would like to move to virtual reality. In terms of your work? No, I would like to physically move there. The apartment I can rent in the cloud is way better than the apartment I can rent in the real world.

[00:06:19]

Well, it's all relative, isn't it? Because others will have very nice apartments, too. So you'll be inferior in the virtual world.

[00:06:25]

That's not how I view the world, right? I don't view the world... I mean, that's a very, like, almost zero-sum-ish way to view the world.

[00:06:32]

Say, like, my great apartment isn't great because my neighbor's is worse. My great apartment is great because, like, look at this dishwasher, man. Yeah. You just touch the dish and it's washed.

[00:06:42]

Right. And that is great in and of itself. If I had the only apartment or if everybody had the apartment, I don't care.

[00:06:48]

So you have fundamental gratitude. The world first learned of geohot, George Hotz, in August 2007, maybe before then, but certainly in August 2007, when you were the first person to carrier-unlock an iPhone. How did you get into hacking? What was the first system you discovered vulnerabilities for and broke into? So that was really kind of the first thing. I had a book in 2006 called Gray Hat Hacking, and I guess I realized that if you acquired these sorts of powers, you could control the world.

[00:07:32]

But I didn't really know that much about computers back then.

[00:07:37]

I started with electronics. The first iPhone hack was a hardware hack, where you had to open it up and pull an address line high. And it was because I didn't really know about software exploitation. I learned that all in the next few years and I got very good at it. But back then I knew about, like, how memory chips are connected to processors and stuff.

[00:07:55]

So software and programming, you didn't really know. So your view of the world

[00:08:02]

and computers was physical, was hardware.

[00:08:05]

Actually, if you read the code that I released with that in August 2007, it's atrocious. What language was it in? C. And in a broken sort of state-machine C. I didn't know how to program.

[00:08:20]

So how did you learn to program? What was your journey?

[00:08:24]

Because, I mean, we'll talk about it. You've livestreamed some of your programming, this chaotic, beautiful mess.

[00:08:30]

How did you arrive at that? Years and years of practice. I interned at Google the summer after the iPhone unlock, and I did a contract for them where I built hardware for Street View, and I wrote a software library to interact with it. And it was terrible code. And for the first time, I got feedback from people who I respected saying, no, like, don't write code like this. Now, of course, just getting that feedback is not enough.

[00:09:02]

The way that I really got good was, I wanted to write this thing that could emulate and then visualize ARM binaries, because I wanted to hack the iPhone better. And I didn't like that I couldn't see what was going on, that I couldn't single-step through the processor, because there was no debugger on there, especially for the low-level things like the bootloader. So I tried to build this tool to do it.

[00:09:27]

And I built the tool once, and it was terrible. I built the tool a second time; it was terrible. I built it a third time. By the time I was at Facebook, it was kind of OK. And then I built the tool a fourth time, when I was a Google intern again in 2014, and that was the first time I was like, this is finally usable. How do you pronounce it, QIRA? QIRA, yeah. It's essentially the most efficient way to visualize the change of state of the computer as the program is running.

[00:09:53]

That's what you mean by debugger? Yeah, it's a timeless debugger.

[00:09:58]

So you can rewind just as easily as going forward. Think about it: if you're using GDB, you have to put a watch on a variable if you want to see if that variable changes. Here you can just click on that variable, and then it shows every single time that variable was changed or accessed. Think about it like git for your computer: there's a deep log of the state of the computer as the program runs, and you can rewind.
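To make the "git for your computer" idea concrete, here is a minimal sketch in Python of the data structure a timeless debugger revolves around: an append-only log of every state change, indexed so that "show me every write to this variable" and "what was its value at step N" are both cheap queries. The names are hypothetical; this illustrates the idea, not QIRA's actual implementation.

```python
# A toy "timeless debugger" core: instead of only running forward, record
# every state change, then answer questions about any point in time.
# All names are hypothetical; this sketches the idea, not QIRA's code.

from collections import defaultdict

class ChangeLog:
    def __init__(self):
        self.log = []                      # (step, addr, old, new), append-only
        self.by_addr = defaultdict(list)   # addr -> indices into self.log

    def record(self, step, addr, old, new):
        self.by_addr[addr].append(len(self.log))
        self.log.append((step, addr, old, new))

    def accesses(self, addr):
        """Every write to addr: the 'click on a variable' query."""
        return [self.log[i] for i in self.by_addr[addr]]

    def value_at(self, addr, step):
        """Rewind: reconstruct the value of addr at any past step."""
        value = None
        for i in self.by_addr[addr]:
            s, _, _, new = self.log[i]
            if s > step:
                break
            value = new
        return value

# An emulator would call record() on every store while replaying a
# program; afterwards, forward and backward queries are equally cheap.
trace = ChangeLog()
trace.record(step=1, addr=0x1000, old=0, new=5)
trace.record(step=7, addr=0x1000, old=5, new=9)
print(trace.accesses(0x1000))     # both writes to 0x1000
print(trace.value_at(0x1000, 4))  # 5, the value "back in time"
```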

[00:10:27]

Maybe you can educate me: why isn't that kind of debugging used more often? Is it because the tooling is bad?

[00:10:32]

Well, two things. One, if you're trying to debug Chrome, Chrome is a 200-megabyte binary that runs slowly on desktops, so that's going to be really hard to use it for. But it's really good to use for, like, CTFs and for boot ROMs and for small parts of code. So it's hard if you're trying to debug massive systems.

[00:10:52]

What's a CTF, and what's a boot ROM? A boot ROM is the first code that executes the minute you give power to your iPhone. And CTFs were these competitions that I played, capture the flag. Capture the flag, I was going to ask you about that. What do those look like? I watched a couple of videos on YouTube. Those are fascinating.

[00:11:09]

What have you learned about, maybe at the high level, the vulnerability of systems from these competitions? I feel like in the heyday of CTFs, you had all of the best security people in the world challenging each other and coming up with new toy exploitable things: over here, OK, who can break it?

[00:11:31]

And when you break it, you get... like, there's a file on the server called flag, and there's a program running, listening on a socket, that's vulnerable. You write an exploit, you get a shell, and then you can cat the flag, and then you type the flag into, like, a web-based scoreboard and you get points.
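For reference, the loop he's describing looks roughly like the sketch below, written with plain sockets. The host, port, and payload are invented placeholders; a real exploit depends entirely on the target binary.

```python
# A sketch of the capture-the-flag loop just described, using plain
# sockets. The host, port, and payload are made up; a real exploit
# payload depends entirely on the vulnerable binary being attacked.

import socket

HOST, PORT = "challenge.example.com", 31337   # hypothetical CTF service

s = socket.create_connection((HOST, PORT))

# Step 1: send input that triggers the vulnerability (say, a buffer
# overflow that redirects execution into shellcode). Placeholder bytes.
payload = b"A" * 64 + b"<return address and shellcode would go here>\n"
s.sendall(payload)

# Step 2: if the exploit worked, we are now talking to a shell on the
# server, so read the flag file...
s.sendall(b"cat flag\n")
flag = s.recv(4096).decode(errors="replace")
print(flag)  # step 3: paste this into the web scoreboard for points
```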

[00:11:45]

So the goal is essentially to find an exploit in the system that allows you to get a shell, to run arbitrary code on that system.

[00:11:54]

That's one of the categories. That's, like, the pwnable category. Um, yeah.

[00:12:01]

Pwnable. It's like, you know, you pwn the program. It's a program. Yeah.

[00:12:05]

Uh, yeah. You know, I apologize, I'm going to say this because I'm Russian, but maybe you can help educate me: some video game misspelled "own" way back in the day.

[00:12:19]

Yeah.

[00:12:19]

And there's just... I wonder if there's a definition. I'd have to go to Urban Dictionary for it.

[00:12:24]

Um, OK. So what was the heyday of CTFs, by the way? What decade are we talking about?

[00:12:31]

I think, I mean, maybe I'm biased because it's the era that I played in, but like 2011 to 2015. Because the modern scene is similar to the modern competitive programming scene: you have people who drill, you have people who practice.

[00:12:52]

And once you've done that, you've turned it less into a game of generic computer skill and more into a game of, OK, you memorize, you drill on these five categories. And before that, it didn't have as much attention as it has now. I don't know, I won thirty thousand dollars once in Korea for one of these competitions. Oh, crap. They were, they were.

[00:13:14]

So that means, I mean, money is money, but that means there were probably good people there. Exactly. Yeah. Are the challenges human-constructed, or are they grounded in some real flaws in real systems?

[00:13:27]

Usually they're human-constructed, but they're usually inspired by real flaws. What kind of systems, as you've imagined... I mean, you've been really focused on mobile. Like, what has vulnerabilities these days? Is it primarily mobile systems like Android? Oh, everything does.

[00:13:43]

So, yeah. Of course, the prices have kind of gone up, because fewer and fewer people can find them. And what's happened in security is, now, if you want to jailbreak an iPhone, you don't need one exploit anymore, you need nine. Nine chained together? What? Yeah. Wow. OK, so, speaking at a higher level, philosophically, about hacking:

[00:14:04]

I mean, it sounds from everything I've seen about you that you just love the challenge, and you don't want to do anything malicious with it.

[00:14:11]

You don't want to bring that exploit out into the world and let it actually run wild. You just want to solve it, and then you go on to the next thing.

[00:14:21]

Oh, yeah. I mean, doing criminal stuff is not really worth it. And I'll actually use the same argument for why I don't do defense as for why I don't do crime. If you want to defend a system, say the system has ten holes, right? If you find nine of those holes as a defender, you still lose, because the attacker gets in through the last one. If you're an attacker, you only have to find one out of the ten.

[00:14:45]

But if you're a criminal, if you log on with a VPN nine out of the ten times, but one time you forget, you're done. You're caught, because you only have to mess up once to be caught as a criminal. Yeah, that's why I'm not a criminal.

[00:15:02]

But OK, let me... because I was having a discussion with somebody just at a high level about nuclear weapons, actually, about why we haven't blown ourselves up yet.

[00:15:12]

And my feeling is all the smart people in the world, if you look at the distribution of smart people, smart people are generally good.

[00:15:23]

And then the other person, I was talking to Sean Carroll, the physicist, and he was saying, no, good and bad people are evenly distributed amongst everybody.

[00:15:30]

My sense was good hackers are in general good people and they don't want to mess with the world. What's your sense?

[00:15:38]

I'm not even sure about that.

[00:15:42]

Like, I have a nice life. Crime wouldn't get me anything. But if you're good and you have these skills, you probably have a nice life too, right? You can use them for other things.

[00:15:56]

But is there an ethical... is there a little voice in your head that says, well, yeah, if you could hack something where you could hurt people

[00:16:09]

and you could earn a lot of money doing it, though not hurt physically, perhaps, but disrupt their life in some kind of way, is there a little voice that says...?

[00:16:20]

Two things. One, I don't really care about money, like, the money wouldn't be an incentive. The thrill might be.

[00:16:27]

But when I was 19, I read Crime and Punishment, and that was another great one that talked me out of ever really doing crime. Because it's like, that's going to be me.

[00:16:38]

I'd get away with it, but it would just run through my head.

[00:16:41]

And even if I got away with it... if you do crime for long enough, you'll never get away with it. That's right. And that's a good reason to be good.

[00:16:49]

I wouldn't say I'm good. I just I'm not bad.

[00:16:51]

You're a talented programmer and a hacker in a good, positive sense of the word. You've played around, found vulnerabilities in various systems. What have you learned broadly about the design of systems and so on from that whole process?

[00:17:09]

You learn to not take things for what people say they are, but to look at things for what they actually are. Hmm, yeah. I understand that's what you tell me it is, but what does it do?

[00:17:29]

And you have nice visualization tools to really know what it's actually doing.

[00:17:33]

Oh, I wish. I'm a better programmer now than I was in 2014. I said QIRA, that was the first tool that I wrote that was usable. I wouldn't say the code was great. I still wouldn't say my code is great.

[00:17:45]

So how was your evolution as a programmer, besides practice? When did you pick up Python? Because you're pretty big in Python now. Yeah, in college. I went to Carnegie Mellon when I was 22. I went back, I'm like, I'm going to take all your hardest courses and we'll see how I do, right? Like, did I miss anything by not having a real undergraduate education?

[00:18:07]

I took operating systems, compilers, AI, and, like, their freshman weeder math course. Some of those classes you mentioned sound

[00:18:19]

pretty tough, actually. They were great. At least circa 2012, operating systems and compilers were two of the best classes I've ever taken in my life, because you write an operating system and you write a compiler.

[00:18:34]

I wrote my operating system in C and I wrote my compiler in Haskell, but somehow I picked up Python that semester as well. I started using it for the CTFs, actually; that's when I really started to get into CTFs. And CTFs are a race against the clock, so I can't write things in C. Oh, there's a clock component. Yeah.

[00:18:52]

So you really want to use the programming language you can be fastest in. Forty-eight hours, pwn as many of these challenges as you can. You get a hundred points a challenge; whatever team gets the most wins.

[00:19:02]

You worked at both Facebook and Google. Yeah.

[00:19:07]

With Project Zero, actually, at Google for five months, where you developed QIRA. What is Project Zero about, in general? I'm just curious about the security efforts in these companies.

[00:19:21]

Well, Project Zero started the same time I went there.

[00:19:25]

What year was that? 2015. 2015, that was right at the beginning of Project Zero. It's small.

[00:19:32]

It's Google's offensive security team. I'll try to give the best public-facing explanation that I can. So the idea is, basically, these vulnerabilities exist in the world. Nation-states have them. Some high-powered bad actors have them. Sometimes people will find these vulnerabilities and submit them in bug bounties to the companies, but a lot of the companies don't really care; they don't fix the bug. It doesn't hurt them for there to be a vulnerability.

[00:20:10]

So Project Zero is like, we're going to do it differently: we're going to announce a vulnerability and we're going to give them 90 days to fix it, and then whether they fix it or not, we're going to drop the zero-day.

[00:20:19]

Oh, wow. We're going to drop the weapon. That's so cool.

[00:20:22]

That is so cool. I love that: it gives them real deadlines. Yeah.

[00:20:28]

And I think it's done a lot for moving the industry forward.

[00:20:32]

I watched your coding sessions that you streamed online. You code things up, the basic projects, usually from scratch.

[00:20:41]

I would say, sort of as a programmer myself, just watching you, that you type really fast and your brain works in both brilliant and chaotic ways. I don't know if that's always true, but certainly for the livestreams. It's interesting to me, because I'm much slower, systematic, and careful, and you just move, I mean, probably an order of magnitude faster. So I'm curious, is there a method to your madness, or is it just who you are?

[00:21:09]

There are pros and cons to my programming style, and I'm aware of them.

[00:21:16]

Like, if you ask me to get something up and working quickly with an API that's kind of undocumented, I will do it super fast, because I will throw things at it until it works.

[00:21:26]

If you ask me to take a vector and rotate it 90 degrees and then flip it over the XY plane, I'll spam-program for two hours and won't get it. Because it's something that you could do with a sheet of paper, think through, design. And instead you really just throw stuff at the wall, and you get so good at it that it usually works.

[00:21:51]

I should become better at the other kind as well.

[00:21:53]

Sometimes I'll do things methodically.

[00:21:56]

It's nowhere near as entertaining on the Twitch streams. I do exaggerate it a bit on the Twitch streams as well. The Twitch streams, I mean, what do you want to see, a gamer? You want to see APM, actions per minute, for programming.

[00:22:04]

Yes. I recommend people watch it. I think I've watched probably several hours of you. I've actually left you programming in the background while I was programming, because it was like watching a really good gamer. It energizes you, because you're moving so fast. It's awesome, it's inspiring. And it only made me jealous, because my own programming is inadequate in terms of speed. Oh, I'm, like, twice as frantic on the livestreams as I am when I code without them.

[00:22:38]

Oh, it's super entertaining.

[00:22:40]

So I wasn't even paying attention to what you were coding, which is great.

[00:22:43]

It's just watching you switch windows and Vim, I guess, on the screen. I developed that workflow at Facebook. How do you learn new programming tools, ideas, techniques these days? What's your methodology for learning new things? So I wrote, for Comma, a distributed file system.

[00:23:03]

The distributed file systems out in the world are extremely complex. Like, if you want to install something like Ceph, I think that's the, like, open-infrastructure distributed file system, or there are newer ones like SeaweedFS.

[00:23:20]

But these are all like ten thousand plus line projects.

[00:23:23]

I think some of them are even a hundred thousand lines, and just configuring them is a nightmare.

[00:23:27]

So I wrote one. It's two hundred lines, and it uses, like, nginx volume servers and has a little master server that I wrote in Go.

[00:23:38]

And if there's any code I've written that I'm proud of per line, maybe there are some exploits that I think are beautiful, and then this. It's two hundred lines, and just the way that I thought about it, I think, was very good.
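The design he's describing, a master that stores no data and redirects every request to a hashed volume server, can be sketched in a few lines. The real project is written in Go; this Python version, with placeholder volume URLs, just illustrates the shape of it.

```python
# A minimal sketch of the design described above: a tiny master server
# that stores no data itself. It hashes each key to a volume server
# (nginx boxes in the real system) and redirects the client there.
# The real project is ~200 lines of Go; volume URLs here are placeholders.

import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

VOLUMES = ["http://volume1:3001", "http://volume2:3001"]

def pick_volume(key: str) -> str:
    # Hash the key so the same key always maps to the same volume server.
    digest = hashlib.md5(key.encode()).digest()
    return VOLUMES[digest[0] % len(VOLUMES)]

class Master(BaseHTTPRequestHandler):
    def _redirect(self):
        # The master only answers "which volume holds this key" and
        # redirects; the volume servers do all the byte shuffling.
        self.send_response(307)
        self.send_header("Location", pick_volume(self.path) + self.path)
        self.end_headers()

    do_GET = do_PUT = do_DELETE = _redirect

if __name__ == "__main__":
    HTTPServer(("", 3000), Master).serve_forever()
```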

[00:23:51]

And the reason it's very good is because that was the fourth version of it that I wrote, and I had three versions that I threw away. You mentioned Go. You wrote it in Go? Yeah, in Go. Is that a functional language? I forget what Go is. Go is Google's language, right?

[00:24:04]

It's not functional. It's, in a way, C++, but easier.

[00:24:12]

It's strongly typed. It has a nice ecosystem around it. When I first looked at it, I was like, this is like Python,

[00:24:19]

but it takes twice as long to do anything. Now that openpilot is migrating to C, but it still has large Python components, I now understand why Python doesn't work for large code bases and why you want something like Go.

[00:24:31]

Oh, interesting. So why doesn't Python work for large code bases? Speaking for myself, at least, we do a lot of stuff, basically demo-level work, with autonomous vehicles, and most of the work is Python. Why doesn't Python work for large code bases? Because, well, lack of type checking is a big one. Errors creep in.

[00:24:55]

Yeah, and, like, the compiler can tell you, like, nothing, right? So everything is... you know, syntax errors, fine, but if you misspell a variable in Python, the compiler won't catch that. There are linters that can catch it some of the time. There are no types; this is really the biggest downside. And then, well, Python is slow, but that's not related to it. Well, maybe it's kind of related to its lack of types.
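A tiny example of the failure mode being described, plus the static check that catches it. The file name and exact mypy output below are illustrative.

```python
# The failure mode just described: misspelling a variable is perfectly
# legal Python, so nothing complains until this exact line executes.

def update_speed(current_speed: float, delta: float) -> float:
    new_speed = current_speed + delta
    return new_sped  # typo: NameError, but only at runtime

# A static type checker such as mypy catches it without running anything:
#   $ mypy speed.py
#   speed.py:6: error: Name "new_sped" is not defined
# With the annotations above, it also catches callers passing wrong types:
#   update_speed("55", 5)   # flagged statically; TypeError only at runtime
```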

[00:25:21]

So what's in your toolbox these days? Is it Python? What else? I need to move to something else. My adventure into dependently typed languages... I love these languages. They just have, like, syntax from the 80s.

[00:25:34]

What do you think about JavaScript? ES6,

[00:25:38]

like the modern TypeScript? JavaScript is... the whole ecosystem is unbelievably confusing. npm updates a package from 0.2.2 to 0.2.5 and that breaks your Babel linter, which translates your ES6 into ES5, which doesn't run on...

[00:25:55]

So why do I have to compile my JavaScript again, though?

[00:25:58]

It may be the future, though. If you think about it, I mean, I've embraced JavaScript recently, just like I've continually embraced PHP. It seems that these worst possible languages live on the longest; like cockroaches, they never die.

[00:26:13]

Yeah, well, it's in the browser and it's fast. It's fast. Yeah, it's in the browser, and compute might become the browser. It's unclear what the role of the browser is in terms of distributed computation in the future. So JavaScript is definitely here to stay. Yeah. It's interesting whether autonomous vehicles will run on JavaScript one day. I mean, you have to consider these possibilities. All our debug tools are JavaScript, which we actually just open-sourced. We have a tool, Explorer, with which you can annotate

[00:26:44]

your disengagements, and a tool, Cabana, which lets you analyze the CAN traffic from the car.

[00:26:49]

So basically, any time you're visualizing something about the logs, you're using the web. It's the best UI toolkit by far.

[00:26:57]

And then, what, you're coding in JavaScript? We have a guy.

[00:27:00]

He does the React. Nice. Let's get into it. So let's talk autonomous vehicles. You founded Comma.ai. At a high level, how did you get into the world of vehicle automation? Can you also, for people who don't know, tell the story of Comma?

[00:27:19]

So I was working at this startup, and a friend approached me and he's like, dude, I don't know where this is going, but the coolest applied AI problem today is self-driving cars. Well, absolutely. Do you want to meet with Elon Musk? He's looking for somebody to build a vision system for Autopilot. This was when they were still on AP1; they were still using Mobileye. Elon back then was looking for a replacement, and he brought me in and we talked about a contract where I would deliver something that meets Mobileye-level performance.

[00:27:58]

I would get paid 12 million dollars if I could deliver it tomorrow, and I would lose one million dollars for every month I didn't deliver. Yeah. So I was like, OK, this is a great deal. This is a super exciting challenge. You know what, even if it takes me 10 months, I get two million dollars. It's good. Maybe I can finish it in five. Maybe I don't finish it at all, and I get paid nothing, and I'll work for 12 months for free.

[00:28:17]

So maybe just take a pause on that. I'm also curious about this, because I've been working in robotics for a long time, and I'm curious to see a person like you just step in, somewhat naive but brilliant, right? So that's the best place to be, because you basically full-steam take on a problem. How confident were you at that time? Because you know a lot more now. At that time, how hard did you think it was to solve all of autonomous driving?

[00:28:42]

I remember I suggested to Elon in the meeting putting a GPU behind each camera to keep the compute local. This is an incredibly stupid idea. I left the meeting, and ten minutes later I'm like, if I'd spent a little bit of time thinking about this problem... you just send all your cameras to one big GPU.

[00:29:02]

You're much better off doing that. Oh, sorry, you said behind every camera you'd have a small chip?

[00:29:07]

I was like, oh, I'll put the first few layers of my conv net there.

[00:29:10]

Would you say that's possible? It's possible, but it's a bad idea.

[00:29:15]

It's not obviously a bad idea. Not obviously, but whether it's actually a bad idea or not, I left that meeting with Elon, like, beating myself up.

[00:29:21]

I'm like, why did I say something stupid? Yeah, you hadn't thought through every aspect. Yeah. He's very sharp, too. Like, usually in life I get away with saying stupid things and then correcting them; right away, he called me out about it. A lot of times people don't even notice, and I'll correct it and bring the conversation back.

[00:29:44]

But with Elon, it was like, nope. OK, well, that's not at all why the contract fell through, but I was much more prepared the second time I met him.

[00:29:51]

Yeah. But in general, how hard did you think it was? Like, 12 months is a tough timeline.

[00:30:00]

Oh, I just thought I'd clone the Mobileye EyeQ3. I didn't think I'd solve level-five self-driving or anything.

[00:30:04]

So the goal there was to do lane keeping, good lane keeping.

[00:30:09]

My friend showed me the outputs from Mobileye, and the output from Mobileye was just basically two lanes and the position of a lead car. I'm like, I can gather a data set and train this net in weeks. And I did.

[00:30:19]

Well, the first time I tried the implementation of Mobileye in a Tesla, I was really surprised how good it is. It's quite incredible, because I thought, just because I've done a lot of computer vision, I thought it would be a lot harder to create a system that's that stable. So I was personally surprised.

[00:30:39]

You know, I have to admit it, because I was kind of skeptical before trying it. I thought it would go in and out a lot more, would get disengaged a lot more, and it's pretty robust.

[00:30:52]

So how hard was the problem when you tackled it?

[00:30:58]

So I think AP1 was great. Like, Elon talked about disengagements on the 405 down in L.A., where the lane markings are kind of faded, and the Mobileye system would drop out. I had something up and working that I would say was, like, the same quality in three months. Same quality? But how do you know? You say stuff like that, and I love it, but you can't know. You're kind of going by feel.

[00:31:31]

Absolutely, absolutely. Like, I borrowed my friend's Tesla. I would take AP1 out for a drive and then I would take my system out for a drive, and it seemed reasonably, like, the same.

[00:31:42]

So, beyond the 405, how hard is it to create something that could actually be a product that's deployed? I mean, I've read an article where Elon responded, said something about you, saying that to build Autopilot is more complicated than a single George Hotz-level job. How hard is that job, to create something that would work across the globe? The globe is not the challenge. But Elon followed that up by saying it's going to take two years and a company of 10 people.

[00:32:21]

Yeah. And here I am, four years later, with a company of 12 people, and I think we still have another two to go. Two more years? Yeah. So what do you think about how Tesla is progressing with Autopilot, V2, V3? I think we've kept pace with them pretty well.

[00:32:40]

I think Navigate on Autopilot is terrible. We had some demo features internally of the same stuff, and we would test it, and I'm like, I'm not shipping this, even as open-source software, to people.

[00:32:51]

What about it is terrible?

[00:32:53]

Consumer Reports does a great job of describing it. Like when it makes a lane change, it does it worse than a human.

[00:33:00]

You shouldn't ship things like that. Autopilot, openpilot, they lane-keep better than a human, right? If you turn it on for a stretch of highway, like, an hour long, it's never going to touch a lane line. A human will probably touch a lane line twice. You just inspired me. I don't know if you're grounded in data on that. I read your paper. OK, well, no, but that's interesting. I wonder actually how often we touch lane lines in general, like, a little bit, because... I could answer that question pretty easily with the comma data set.

[00:33:32]

Yeah, I'm curious. I've never answered it. I don't know. Two is just, like, my personal guess.

[00:33:36]

It feels right. Well, that's interesting, because every time you touch a lane line, that's a source of a little bit of stress, and lane keeping is removing that stress. That's ultimately the biggest value add, honestly: just removing the stress of having to stay in the lane.

[00:33:52]

And honestly, I don't think people fully realize, first of all, that that's a big value add, but also that that's all it is.

[00:34:01]

And not only do I find it a huge value add; when we moved to San Diego, I drove down in an Enterprise rental car, and I missed it. I missed having the system so much. It's so much more tiring to drive without it. It is that lane centering.

[00:34:19]

That's the key feature.

[00:34:21]

Yeah. And in a way, it's the only feature that actually adds value to people's lives in autonomous vehicles today. Waymo does not add value to people's lives. It's a more expensive, slower Uber. Maybe someday it'll hit this big cliff where it adds value, but I don't think that comes fast.

[00:34:36]

This is good, because I've felt it intuitively, but I think we're making it explicit now. I actually believe that really good lane keeping is a reason to buy a car, will be a reason to buy a car; it's a huge value add. I never, until we just started talking about it, quite realized why I've felt that Elon's chase of level four is not the correct chase. Because you should just say, from a Tesla perspective, Tesla has the best lane keeping. Comma, I should say, Comma has the best lane keeping.

[00:35:20]

And that is it. Yeah. Yeah, that's true. Do you think you have to do the longitudinal as well?

[00:35:26]

You can't just lane-keep; you have to do longitudinal as well. But longitudinal is much more forgiving than lateral, especially on the highway.

[00:35:33]

By the way, is Comma camera-only? Correct? Or do you use the radar?

[00:35:39]

We use the radar from the car. You're able to get that from the car? Yes. We can do camera-only now; it's gotten to that point. But we leave the radar there; it's fusion now.

[00:35:49]

OK, so maybe talk through some of the system specs on the hardware. What's the hardware side of what you're providing? What are the capabilities, and what's the software side with openpilot and so on?

[00:36:03]

So openpilot... the box that we sell that it runs on is a phone in a plastic case. It's nothing special. We sell it without the software. So you buy the phone; it's just easy, it'll be an easy setup, but it's sold with no software. openpilot right now is about to be 0.6. When it gets to 1.0, I think we'll be ready for a consumer product. We're not going to add any new features.

[00:36:28]

We're just going to make the lane keeping really, really good. So what do we have right now?

[00:36:33]

It's a Snapdragon 820, a Sony IMX298 forward-facing camera, a driver monitoring camera, just the selfie cam on the phone, and a CAN transceiver, a little thing called panda. They talk over USB to the phone, and then they have three CAN buses that they talk to the car on. One of those CAN buses is the radar CAN bus, one of them is the main car CAN bus, and the other one is the proxy camera CAN bus. We leave the existing camera in place so we don't turn AEB off.

[00:37:07]

Right now, we still turn it off if you're using our longitudinal. But we're going to fix that before 1.0.

[00:37:12]

Got it, that's cool. And CAN goes both ways, so how are you able to control vehicles?

[00:37:19]

So we proxy. The vehicles that we work with already have a lane keeping assist system. "Lane keeping assist" can mean a huge variety of things.

[00:37:30]

It can mean it will apply a small torque to the wheel after you've already crossed a lane line by a foot, which is the system in the older Toyotas, versus, I think Tesla still calls it lane keeping assist, where it'll keep you perfectly in the center of the lane on the highway, and you can control the car, like, with a joystick. So these cars already have the capability of drive-by-wire.

[00:37:54]

So is it trivial to convert a car that it operates with, so that it's able to control the steering?

[00:38:01]

A new car, or a car that we... So we have support now for four or five different makes of cars. What are the cars? Mostly Hondas and Toyotas. We support almost every Honda and Toyota made this year, and then a bunch of GMs, a bunch of Subarus. Does it have to be, like, a Prius?

[00:38:22]

It can be a Corolla as well. The 2020 Corolla is the best car with openpilot. It just came out; the actuator has less lag than on the older Corolla. I think I started watching a video with you... I mean, the way you make videos is awesome.

[00:38:38]

Literally at the dealership, streaming. Yeah, my friend video-streamed it for an hour.

[00:38:44]

And basically, like, if stuff goes a little wrong, you just go with it. Yeah, I love it. It's real. That's real. That's so beautiful, and it's so in contrast to the way other companies would put together a video like that. Kind of, yeah. I like to do it like that. Good. I mean, if you become super rich one day and successful, I hope you keep it that way, because I think that's actually what people love, that kind of genuine.

[00:39:11]

Oh, that's all that has value to me. Yeah. Money has none. If I sell out to make money, I sold out; it doesn't matter. What do I get?

[00:39:18]

And I think Tesla actually has a small inkling of that as well, with Autonomy Day. They did reveal more than, I mean, of course there's marketing communications, you can tell, but it's more than most companies would reveal, which is... I hope they go further in that direction.

[00:39:37]

More than other companies, GM, Ford? Tesla is going to win level five. They really are.

[00:39:43]

So let's talk about it.

[00:39:44]

You think so? But you're focused on level two currently? Currently. We're going to be one to two years behind Tesla getting to level five.

[00:39:53]

OK, right. And I'm just saying, once Tesla gets it, we're one to two years behind. I'm not making any timeline on when Tesla gets it. That's right, you did say that. That's brilliant. I'm sorry, Tesla investors, if you think you're going to have an autonomous robotaxi fleet by the end of the year...

[00:40:08]

Yes, I'll bet against that.

[00:40:11]

So what do you think about this? Most level-four companies are kind of doing the safety driver thing full time, testing, and then Tesla is basically trying to go from lane keeping to full autonomy. What do you think about that approach? How successful will it be?

[00:40:34]

It's a ton better approach, because Tesla is gathering data on a scale that none of them are. They're putting real users behind the wheel of the cars. It's, I think, the only strategy that works: the incremental one. Well, so there are a few components to Tesla; there's more than just the incremental approach we spoke of. One is the software, the over-the-air software updates. Necessity. I mean, we have to have those too. Those aren't...

[00:41:04]

But those differentiate them from the automakers. No lane keeping assist systems, no cars with lane keeping systems, have that, except Tesla's. Yeah. And the other one is the data, the other direction: the ability to query the data. I don't think they're actually collecting as much as people think, but the ability to turn on collection and turn it off.

[00:41:25]

So I'm both in the robotics world and in the psychology, human factors world. Many people believe that level-two autonomy is problematic because of the human factor: the more the task is automated, the more there's a vigilance decrement. You start to fall asleep, you start to become complacent, start texting more, and so on. Do you worry about that? Because if we're talking about the transition from lane keeping to full autonomy, if you're spending 80 percent of the time not supervising the machine, do you worry about what that means for the safety of the drivers?

[00:42:03]

One, we don't consider openpilot to be 1.0 until we have 100 percent driver monitoring. You can cheat right now; our driver monitoring system has a few ways to cheat it. They're pretty obvious. We're working on making that better. Before we ship a consumer product that can drive cars, I want to make sure that I have driver monitoring that you can't cheat. What does a successful driver monitoring system look like? Is it

[00:42:25]

all about keeping your eyes on the road?

[00:42:27]

Well, a few things. So that's what we went with at first for driver monitoring. I'm checking, I'm actually looking at where your head is looking, because the cameras are not that high-resolution. Eyes are a little bit hard to get.

[00:42:38]

The head is this big, so that is good.

[00:42:41]

And actually, a lot of it, psychology-wise, is just to have that monitoring constantly there. It reminds you that you have to be paying attention. But we want to go further. We just hired someone full-time to come on and do the driver monitoring. I want to detect a phone in frame, and I want to make sure you're not sleeping. How much does the camera see of the body? This one, not enough. Not enough. The next one...

[00:43:07]

Well, it's interesting, fisheye, because we're doing just data collection right now, but fisheye is a beautiful lens.

[00:43:14]

Being able to capture the body and the smartphone is really like the biggest problem.
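As an aside, the standard computer-vision recipe for the "where is your head pointing" check mentioned a moment ago is to fit a generic 3D head model to a few detected 2D facial landmarks. The sketch below shows that textbook technique using OpenCV's solvePnP; it is not necessarily what openpilot's monitoring model does, and the landmark coordinates are assumed to come from some off-the-shelf face-landmark detector.

```python
# A sketch of the textbook head-pose recipe: fit a generic 3D head model
# to a few detected 2D landmarks with OpenCV's solvePnP. This is the
# standard technique, not necessarily openpilot's actual approach.

import numpy as np
import cv2

# Rough 3D positions (mm) of six landmarks on a generic head model.
MODEL_POINTS = np.array([
    (0.0,      0.0,    0.0),   # nose tip
    (0.0,   -330.0,  -65.0),   # chin
    (-225.0,  170.0, -135.0),  # left eye outer corner
    (225.0,   170.0, -135.0),  # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0,  -150.0, -125.0),  # right mouth corner
])

def head_pose(landmarks_2d, frame_w, frame_h):
    """landmarks_2d: 6x2 pixel coordinates matching MODEL_POINTS order."""
    focal = frame_w  # crude focal-length guess, fine for a pose estimate
    camera_matrix = np.array([[focal, 0, frame_w / 2],
                              [0, focal, frame_h / 2],
                              [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(
        MODEL_POINTS, np.asarray(landmarks_2d, dtype=np.float64),
        camera_matrix, None)
    return rvec, tvec  # rotation says whether the eyes left the road
```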

[00:43:19]

Also, I can show you one of the pictures from our new system. Awesome. So you're basically saying the driver monitoring will be the answer to that?

[00:43:29]

I think the other point that you raised in your papers is good as well: you're not asking a human to supervise a machine without giving them the ability to take over at any time. Right now, our safety model is, you can take over. We disengage on both the gas and the brake. We don't disengage on steering; I don't feel you have to. But we disengage on gas or brake, so it's very easy for you to take over, and it's very easy for you to re-engage.

[00:43:52]

That switching should be super cheap. Some cars require... even Autopilot requires a double press, and that's almost too much.

[00:44:00]

And then the cancel... to cancel in Autopilot,

[00:44:04]

you either have to press cancel, which no one knows where that is, so they press the brake. But a lot of times you don't want to press the brake, you want to press the gas, so you should cancel on gas. Or you wiggle the steering wheel, which is bad as well. Wow, that's brilliant.

[00:44:15]

I haven't heard anyone articulate that point. This is all I think about, um,

[00:44:21]

because I think actually Tesla has done a better job than most automakers at making that frictionless. But you just described how it could be even better.

[00:44:33]

I love Super Cruise as an experience once it's engaged. Yeah. I don't know if you've used it, but getting the thing to actually engage...

[00:44:41]

Yeah, I've driven Super Cruise a lot. So what are your thoughts on the Super Cruise system?

[00:44:45]

You disengage Super Cruise and it falls back to ACC, so your car's still accelerating. It feels weird.

[00:44:52]

Otherwise, when you actually have Super Cruise engaged on the highway, it is phenomenal. We bought that Cadillac. We just sold it, but we bought it just to, like, experience this, and I wanted everyone in the office to be like, this is what we're striving to build. GM pioneered it with the driver monitoring. You like their driver monitoring system?

[00:45:11]

It has some bugs.

[00:45:12]

If the sun is shining from back here, it'll be blinded.

[00:45:18]

But overall, mostly, yeah. That's so cool that you know all of this stuff. I don't often talk to people about it, because it's such a rare car, unfortunately. We bought one. I think we lost, like, 25k on the depreciation, but I feel it was worth it.

[00:45:31]

I was very pleasantly surprised that that system was so innovative, and it really wasn't advertised much, wasn't talked about much. Yeah. And I was nervous that it would die, that it would disappear. Well, they put it on the wrong car. They should have put it on the Bolt and not some weird Cadillac that nobody bought.

[00:45:53]

I think, they're saying at least, it's going to be in their entire fleet. So, as long as we're on the topic of driver monitoring, what do you think about Elon Musk's claim that driver monitoring is not needed?

[00:46:08]

Normally, I love his claims. That one is stupid. That one is stupid, and, you know, he's not going to have his level-five fleet by the end of the year. Hopefully he's like, OK, I was wrong, I'm going to add driver monitoring. Because when these systems get to the point that they're only messing up once every thousand miles, you absolutely need driver monitoring.

[00:46:30]

So let me play, because I agree with you, but let me play devil's advocate. One possibility is that without driver monitoring, people are able to self-regulate, monitor themselves. You know, you're upsetting all the people sleeping in Teslas.

[00:46:50]

Yeah, well, I'm a little skeptical of all the people sleeping in Teslas, because

[00:46:58]

I've stopped paying attention to that kind of stuff, because it's too much glorified; it doesn't feel scientific to me. So I want to know, you know, how many people are really sleeping in Teslas, versus sleeping...

[00:47:11]

As I was driving here, sleep deprived in a car with no automation, I was falling asleep. I agree that it's hype.

[00:47:18]

It's just like, you know what?

[00:47:21]

If you want driver monitoring... I think my last Autopilot experience was, I rented a Model 3 in March and drove it around. The wheel thing is annoying, and the reason the wheel thing is annoying, we use the wheel thing as well, but we don't disengage on wheel. For Tesla, you have to touch the wheel just enough to trigger the torque sensor, to tell it that you're there, but not enough so as to disengage it. Which, don't use it for two things.

[00:47:47]

Don't disengage on wheel; you don't have to. That whole experience... wow, beautiful. All those elements: even if you don't have driver monitoring, that whole experience needs to be better. Driver

[00:47:57]

monitoring, I think, would make... I mean, I think Super Cruise is a better experience once it's engaged than Autopilot, but Super Cruise's transitions to engagement and disengagement are significantly worse.

[00:48:11]

There's a tricky thing, because if I were to criticize Super Cruise: it's a little too crude. It takes, I think, like six seconds or something: if you look off-road, it'll start warning you after some ridiculously long period of time. And the way it works, I think, is basically binary. Yeah. It just needs to learn more about you. It needs to communicate what it sees about you more. Tesla shows what it sees about the external world;

[00:48:43]

it would be nice if Super Cruise would tell us what it sees about the internal world.

[00:48:47]

It's even worse than that.

[00:48:48]

You press the button to engage, and it just says, Super Cruise unavailable. Yeah. Why? Yeah. That transparency

[00:48:56]

is good.

[00:48:57]

We've renamed the driver monitoring packet to driver state. We have a carState packet, which has the state of the car, and a driverState packet, which has the state of the driver. So what does it mean?

[00:49:08]

There must be a blood alcohol content field. Do you think that's possible with computer vision? Absolutely.

[00:49:19]

To me, it's an open question. I haven't looked into it too much. Actually, I quite seriously looked at the literature. It's not obvious to me that from the eyes and so on you can tell. You might need stuff from the car as well.

[00:49:29]

Yeah, you might need how they're controlling the car, right? And that's fundamentally, at the end of the day, what you care about. But I think, especially when people are really drunk, they're not controlling the car nearly as smoothly as they would otherwise. Look at them walking, right? The car is like an extension of the body. So I think you could totally detect it. And if you could fix people being drunk, distracted, asleep, if you fixed those three...

[00:49:49]

Yeah, that's huge. So what are the current limitations of openpilot? What are the main problems that still need to be solved?

[00:49:57]

Um, so we're hopefully fixing a few of them in 0.6.

[00:50:02]

We're not as good as Autopilot at stopped cars. So if you're coming up to a red light at, like, 55... it's the radar stopped-car problem, which is responsible for two Autopilot accidents. It's hard to differentiate a stopped car from a, like, signpost, a static object. So you have to fuse; you have to do this visually.

[00:50:24]

There's no way from the radar data to tell the difference. Maybe you can make a map, but I don't really believe in mapping at all anymore.

[00:50:30]

So what, you don't believe in mapping now?

[00:50:33]

So you basically... the openpilot solution is, react to the environment just like human beings do.

[00:50:41]

And then eventually, when you want to do Navigate on Autopilot, I'll train the net to look at Waze running in the background. Are you using GPS?

[00:50:49]

We use it to very carefully ground-truth the paths.

[00:50:54]

We have a stack which can recover relative position to 10 centimeters over one minute, and then we use that to ground-truth exactly where the car went in that local part of the environment. But it's all local.

[00:51:05]

How are you testing, in general, just for yourself, like, experiments and stuff? Where are you located? San Diego. San Diego, yeah. OK. So you basically drive around there, collect some data, and run it through a simulator? We have a simulator now, and our simulator is really cool. It's not, like, a Unity-based simulator. Our simulator lets us load in real state.

[00:51:29]

We can load in a drive and simulate what the system would have done on the historical data. Oh, nice. Interesting. So right now we're only using it for testing, but as soon as we start using it for training, that's it.
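The replay idea he's describing reduces to something like the sketch below: feed recorded frames to the current model and measure how far its decisions drift from what the human driver actually did. All names are hypothetical, not openpilot's real API.

```python
# A minimal sketch of replay-style testing as described: run the current
# model over a recorded drive and compare its output to what the human
# actually did. The names are hypothetical, not openpilot's real API.

def replay_drive(log, model):
    """log: iterable of (camera_frame, human_steer_angle) from a real drive."""
    errors = []
    for frame, human_angle in log:
        predicted_angle = model(frame)        # what the system WOULD have done
        errors.append(abs(predicted_angle - human_angle))
    return sum(errors) / len(errors)          # mean deviation from the human

# Because the data is historical, this is open-loop: the simulated car
# never actually deviates from the recorded path. Using the same logs for
# training would mean learning to imitate the recorded drivers.
```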

[00:51:52]

That's it for testing? What's your feeling about the real world versus simulation, like, simulation for training, if this moves to training? So we have to distinguish two types of simulators, right? There's a simulator that, like, is completely fake. I could get my car to drive around in it, and I feel that this kind of simulator is useless. You're never going to get there.

[00:52:09]

There are so many issues. My analogy here is like, OK, fine, you're not solving the computer vision problem, but you're solving the computer graphics problem. Right.

[00:52:19]

And you don't think you can get very far by creating ultra-realistic graphics? No, because you can create ultra-realistic graphics of the road; now create ultra-realistic behavioral models of the other cars.

[00:52:31]

"Oh, well, I'll just use my self-driving..." No, you won't. You need real, you need actual human behavior, because that's what you're trying to learn. Driving does not have a spec. The definition of driving is what humans do when they drive. Whatever Waymo does,

[00:52:47]

I don't think it's driving right.

[00:52:49]

Well, I think actually Waymo and others... if there's any use for reinforcement learning, I've seen it used quite well. I study pedestrians a lot, too. It's to try to train models from real data of how pedestrians move, and try to use reinforcement learning models to make pedestrians move in human-like ways.

[00:53:06]

By that point, you've already gone through so many layers. You detected a pedestrian; did you hand-code the feature vector of their state?

[00:53:16]

Did you guys learn anything from computer vision before deep learning? Well, OK.

[00:53:22]

I feel like perception, to you, is the sticking point. I mean, what's the hardest part of the stack here?

[00:53:30]

There is no real human-understandable feature vector separating perception and planning. That's the best way I can

[00:53:40]

put it. There is no... it's all together, and it's a joint problem.

[00:53:46]

So you can take localization. Between localization and planning, there is a human-understandable feature vector between those two things. I mean, OK, so I have, like, three degrees of position, three degrees of orientation, and their derivatives, maybe their second derivatives, right? That's human-understandable. That's physical.
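That interface is small enough to write down. As a sketch, not any particular codebase's actual message format, the whole localization-to-planning state vector is just:

```python
# The human-understandable interface being described: the entire state
# vector between localization and planning fits in a few named fields.
# (A sketch, not any particular codebase's actual message definition.)

from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class LocalizationState:
    position: Vec3        # x, y, z
    orientation: Vec3     # roll, pitch, yaw
    velocity: Vec3        # first derivatives
    angular_rate: Vec3
    acceleration: Vec3    # maybe the second derivatives too
```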

[00:53:59]

But between perception and planning? So, like, Waymo has a perception stack and then a planner, and one of the things Waymo does right is they have a simulator that can separate those two.

[00:54:16]

They can, like, replay their perception data and test their system, which is what I'm talking about with the two different kinds of simulators: there's the kind that can work on real data and the kind that can't work on real data.

[00:54:27]

Now, the problem is, I don't think you can hand-code a feature vector, right? Like, you have some list: here's my list of cars in the scene, here's my list of pedestrians in the scene.

[00:54:37]

This isn't what humans are doing. What are humans doing? Something global. You're saying that's just too difficult to hand-engineer? I'm saying that there is no state vector. Given a perfect... I could give you the best team of engineers in the world to build a perception system and the best team to build a planner. All you have to do is define the state vector that separates those two.

[00:55:00]

Hmm, the state vector that separates those two... what do you mean?

[00:55:05]

So what is the output of your perception system? The output of the perception system? OK. Well, there are several ways to do it. One is localization, another is drivable area, drivable space. And then there are the different objects in the scene, and the different objects in the scene over time, maybe, to give you input to then try to start modeling the trajectories of those objects.

[00:55:38]

Sure. That's it? I can give you a concrete example of something you missed. What's that? So, say there's a bush in the scene.

[00:55:45]

Humans understand, when they see this bush, that there may or may not be a car behind it. Drivable area and a list of objects does not include that. Humans are doing this constantly at the simplest intersections. So now you have to talk about occluded area, right? Right. But even that... what do you mean by occluded? OK, so I can't see it. Well, if it's on the other side of a house, I don't care. What's the likelihood that there's a car in that occluded area?

[00:56:11]

Right.

[00:56:11]

And if you say, OK, we'll add that, I can come up with 10 more examples that you can't add.

[00:56:18]

Certainly occluded area would be something a simulator would have, because it's simulating the entire, you know... occlusion is part of it, part of a vision stack. What I'm saying is, if you have a hand-engineered perception system, if your perception system's output can be written in a spec document, it is incomplete.

[00:56:39]

Yeah.

[00:56:39]

I mean, certainly it's hard to argue with that, because in the end that's going to be true.

[00:56:46]

Yes. And I'll tell you what the output of our perception system is: it's a 1024-dimensional vector.

[00:56:54]

Oh — and you don't know what those dimensions are?

[00:56:55]

It's 1024 dimensions of who knows what, because it's operating on real data.

[00:57:03]

And that's the perception stack. Right. Think about an autoencoder for faces. Right?

[00:57:10]

If you have an autoencoder for faces, and you say it has 256 dimensions in the middle, and I'm taking a face over here and projecting it to a face over here — yeah — can you hand-label all 256 of those dimensions? Well, no, but they're generated automatically.

[00:57:25]

But even if you tried to do it by hand, could you come up with a spec between your encoder and your decoder?

[00:57:33]

No, no. Because it wasn't designed that way. But if you could design it — if you could design a face reconstruction system, could you come up with a spec? No. But I think we're missing something here a little bit.
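For readers who want the autoencoder analogy spelled out, here's a minimal sketch, assuming 64x64 grayscale face crops and a 256-dimensional bottleneck; it's illustrative, not comma.ai's code:

```python
# A minimal autoencoder with a 256-dim bottleneck whose individual
# dimensions carry no human-readable spec.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),                  # 64*64 = 4096 pixels
            nn.Linear(4096, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),   # the "who knows what" vector
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 4096), nn.Sigmoid(),
            nn.Unflatten(1, (1, 64, 64)),
        )

    def forward(self, x):
        z = self.encoder(x)    # no dimension here has a hand-written meaning
        return self.decoder(z), z

model = FaceAutoencoder()
faces = torch.rand(8, 1, 64, 64)             # stand-in batch of face crops
recon, latent = model(faces)
loss = nn.functional.mse_loss(recon, faces)  # trained end to end; a spec
                                             # for `latent` never exists
```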

[00:57:48]

I think you're just being very poetic about expressing a fundamental problem of simulators: that they're going to be missing so much that the feature vector will just look fundamentally different in the simulated world than in the real world.

[00:58:07]

I'm not making a claim about simulators. I'm making a claim about the spec division between perception and planning. Even in your system — just in general, if you're trying to build a car that drives — if you're trying to hand-code the output of your perception system, like saying, here's a list of all the cars in the scene, here's a list of all the people,

[00:58:28]

here's a list of occluded areas, here's a vector of drivable areas —

[00:58:31]

it's insufficient. And once you start to believe that, you realize that spec'ing what we monkeys are doing is impossible. Currently, what we're doing is: the perception problem converts the scene into a chessboard, and then you do some basic reasoning around that chessboard.

[00:58:49]

And you're saying that really there's a lot missing there.

[00:58:54]

First of all, why are we even talking about this? Full autonomy — is this something you think about?

[00:59:01]

Oh, I want to win self-driving cars. Seriously? And your definition of "win" includes level four, level five?

[00:59:10]

I don't think Level four is a real thing.

[00:59:12]

I want to build the AlphaGo of driving.

[00:59:17]

So AlphaGo is really end to end. Yeah. It is, yeah.

[00:59:25]

It's end to end. And is that also kind of what you're getting at with perception and planning — that for this whole problem, the right way to do it is really to learn the entire thing?

[00:59:38]

I'll argue that not only is it the right way, it's the only way that's going to exceed human performance.

[00:59:44]

Well, it's certainly true for Go. Everyone who tried to hand-code Go things built human-inferior things, and then someone came along and wrote some ten-thousand-line thing that doesn't know anything about Go and beat everybody. It's true in that sense.

[00:59:59]

The open question then, that maybe I can ask you, is: driving is much harder than Go. The open question is how much harder.

[01:00:12]

So, how —

[01:00:13]

Because I think the Musk approach here, with planning and perception, is similar to what you're describing: really turning it into, not some kind of modular thing, but really formulating it as a learning problem and solving the learning problem at scale.

[01:00:29]

So one question is: how many years would it take to solve this problem? Or just, how hard is this freaking problem?

[01:00:38]

Well, the cool thing is, I think there's a lot of value that we can deliver along the way. I think that you can build lane-keeping assist, actually, plus adaptive cruise control, plus, OK, looking at ways that extends to, like, all of driving. Yeah, most of driving. Right? Oh — your adaptive cruise control treats red lights like cars?

[01:01:07]

OK, so let's jump around. You mentioned that you didn't like Navigate on Autopilot. Yeah. How would you make it better? Do you think it's a feature that, if done really well, is a good feature?

[01:01:18]

I think that it's too reliant on, like, hand-coded hacks. Like, how does Navigate on Autopilot do a lane change? It actually does the same lane change every time, and it feels mechanical. Humans do different lane changes. Humans sometimes will do a slow one, sometimes a fast one. Navigate on Autopilot —

[01:01:36]

at least every time I used it, it'd be an identical lane change.

[01:01:39]

How do you learn? I mean, this is a fundamental thing, actually: the braking and accelerating — something that Tesla probably does better than most cars, but it still doesn't do a great job of — creating a comfortable, natural experience. And Navigate on Autopilot —

[01:01:58]

lane changes are just an extension of that. So how do you learn to do a natural lane change?

[01:02:05]

So we have it, and I can talk about how it works. I feel that we have the solution for lateral. We don't yet have the solution for longitudinal. There are a few reasons longitudinal is harder than lateral.

[01:02:19]

The lane change component — the way that we train on it, very simply, is our model has an input for whether it's doing a lane change or not. And then when we train the end-to-end model, we hand-label all the lane changes, because you have to. I struggled a long time about not wanting to do that, but I think you have to, because... for the training data, we actually have an automatic ground truth which automatically labels all the lane changes.

[01:02:47]

Wait, it's possible to automatically detect a lane change?

[01:02:50]

I see when it crosses the lane, right? And I don't have to get that to a hundred percent accuracy — it's, like, ninety-five. No problem.

[01:02:54]

OK, now I set the bit when it's doing the lane change in the end-to-end learning, and then I set it to zero when it's not doing a lane change. So now, at test time, if I want it to do a lane change, I just set the bit to one and it'll do it.
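A minimal sketch of the conditioning trick being described — an end-to-end model with one extra input bit for "doing a lane change" — might look like this; the architecture and sizes are invented for illustration:

```python
# Hypothetical sketch (not openpilot's code) of a lane-change bit
# appended to an end-to-end driving model's inputs.
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.vision = nn.Sequential(nn.Linear(3 * 64 * 128, feat_dim), nn.ReLU())
        # +1 input for the lane-change bit appended to the vision features
        self.head = nn.Linear(feat_dim + 1, 2)  # outputs: steer, accel

    def forward(self, frames, lane_change_bit):
        feats = self.vision(frames.flatten(1))
        x = torch.cat([feats, lane_change_bit.unsqueeze(1)], dim=1)
        return self.head(x)

policy = DrivingPolicy()
frames = torch.rand(4, 3, 64, 128)              # stand-in camera frames
# Training: the bit comes from (auto-)labeled lane changes in the data.
bit = torch.tensor([0.0, 1.0, 0.0, 1.0])
controls = policy(frames, bit)
# Inference: set the bit to 1 and the model performs a lane change
# in the style of the ones seen in training for the current scene.
controls_lc = policy(frames, torch.ones(4))
```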

[01:03:08]

Yeah, but if you look at the space of lane changes — some percentage, not one hundred percent, of the ones we make as humans are not a pleasant experience, because we mess some part of it up. It's nerve-racking to change lanes, if you look at the acceleration and deceleration. How do we label the ones that are natural and feel good? You know, that's the ultimate criticism:

[01:03:29]

the current Navigate on Autopilot just doesn't feel good.

[01:03:33]

Well, the current Navigate on Autopilot is a hand-coded policy written by an engineer in a room who probably went out and tested it a few times on the 280. Probably a better version of that —

[01:03:45]

but yes, that's how we would have written it at comma, too. Yeah, maybe. Tesla — they tested it, and there might have been two engineers.

[01:03:51]

Let's do it.

[01:03:52]

Yeah. No, but if you learn the lane change — if you learn how to do a lane change from data, just like you have a label that says lane change and then you put it in when you want to do the lane change — it'll automatically do the lane change that's appropriate for the situation. Now, to get at the problem of some humans doing bad lane changes:

[01:04:13]

We haven't worked too much on this problem yet. It's not that much of a problem in practice. My theory is that all good drivers are good in the same way and all bad drivers are bad in different ways.

[01:04:25]

And we've seen some data to back this up. Well, beautifully put. So basically, if that hypothesis is true, then the task is to discover the good drivers. The good drivers stand out because they're in one cluster, and the bad drivers are scattered all over the place.

[01:04:41]

And your net learns the cluster.

[01:04:43]

Yeah. So you just learn from the good drivers, and they're easy to cluster.

[01:04:50]

We learn from all of them, and then it automatically learns the policy that's like the majority. But we'll eventually probably have to filter them.

[01:04:54]

So if that is true — I hope it's true, because the counter-theory is that there are many clusters, maybe arbitrarily many clusters, of good drivers. Because if there's one cluster of good drivers, you can at least discover a set of policies, you can learn a set of policies, which would be good universally. Yeah, that would be nice if it's true.

[01:05:21]

And you're saying that there is some evidence that, uh, let's say, lane changes can be clustered into four clusters, or some finite number?

[01:05:28]

I would argue that all four of those are good clusters.

[01:05:31]

All the things that are random are noise and probably bad. And which one of the four you pick — or maybe it's ten, or maybe it's twenty — you can learn that; it's context-dependent.

[01:05:40]

It depends on the scene. And the hope is it's not too dependent on the driver. Yeah, the hope is that it all washes out. The hope is that the distribution isn't bimodal. The hope is that it's a nice Gaussian.
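A toy sketch of the clustering hypothesis — good drivers form one dense cluster, bad drivers scatter as noise — might look like this; the embeddings and DBSCAN parameters are invented for illustration:

```python
# Toy illustration (not comma's pipeline): embed lane-change maneuvers,
# cluster them, and treat the dense cluster as "good".
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Stand-in 8-dim embeddings of lane-change maneuvers.
good = rng.normal(loc=0.0, scale=0.3, size=(200, 8))   # one tight cluster
bad = rng.normal(loc=0.0, scale=3.0, size=(40, 8))     # scattered everywhere
maneuvers = np.vstack([good, bad])

# DBSCAN marks points in dense regions as clusters and scattered points
# as noise (label -1), matching the hypothesis directly.
labels = DBSCAN(eps=1.0, min_samples=10).fit_predict(maneuvers)
keep = labels != -1          # train the policy only on clustered maneuvers
print(f"kept {keep.sum()} of {len(maneuvers)} maneuvers for training")
```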

[01:05:55]

So what advice would you give to Tesla — how to fix, how to improve Navigate on Autopilot — from the lessons you've learned at comma.ai?

[01:06:04]

The only real advice I would give to Tesla is: please put driver monitoring in your cars. With respect to improving it —

[01:06:11]

They can't do that anymore — sorry to interrupt, but, you know, there's the practical matter of hundreds of thousands of cars having been produced that don't have a good driver-facing camera.

[01:06:22]

The Model 3 has a selfie cam. Is it not good enough? Did they not put IR in for night?

[01:06:27]

That's a good question. But I do know that it's fisheye and it's relatively low resolution. So it's really not... it wasn't designed for driver monitoring.

[01:06:35]

You can hope that you can kind of scrape something up from it.

[01:06:40]

Yeah, but why didn't they put it in? Put it in today.

[01:06:46]

Every time I've heard Karpathy talk about the problem — talking about, like, Software 2.0 and how machine learning is gobbling up everything — I think this is absolutely the right strategy. I think that he didn't write Navigate on Autopilot. I think somebody else did and kind of hacked it in on top of that stuff. I think when Karpathy says, wait a second, why did we hand-code this lane change policy with all these magic numbers?

[01:07:04]

We're going to learn it from data. They'll fix it.

[01:07:06]

They already know what to do there. Well, that's Andrej's job: to turn everything into a learning problem and collect a huge amount of data.

[01:07:14]

The reality is, though, not every problem can be turned into a learning problem in the short term. In the end, everything will be a learning problem.

[01:07:23]

The reality is, if you want to build L5 vehicles today, it will likely involve no learning.

[01:07:32]

And that's the reality. So at which point does learning start — is the hand-coded stuff a crutch? At which point will learning get up to par with human performance? It's already above human performance in image classification; for driving, that's the question.

[01:07:50]

Still, it is a question. I'll say this: I'm here to play for 10 years — I'm here to play for 10 years and make money along the way. I'm not here to try to promise people that I'm going to have my L5 taxi network up and working in two years.

[01:08:04]

Do you think that was a mistake? Yes.

[01:08:07]

What do you think was the motivation behind saying that? Other companies are also promising L5 vehicles, with very different approaches, in 2020, 2021, 2022.

[01:08:18]

If anybody would like to bet me that those things do not pan out, I will bet even money. Even money — I'll bet you as much as you want.

[01:08:27]

So are you worried about what's going to happen — because you and I are in full agreement on that — what's going to happen when 2021, 2022 come around and nobody has fleets of autonomous vehicles?

[01:08:39]

Well, you can look at the history. If you go back five years ago, they were all promised by 2018 and 2017.

[01:08:46]

But they weren't that strong of promises.

[01:08:48]

I mean, Ford really declared... I think not many have declared as definitively as they have now, these dates.

[01:08:59]

Well, OK, so let's separate L4 and L5. Do I think that it's possible for Waymo to continue to kind of, like, hack on their system until it gets to level four in Chandler, Arizona?

[01:09:10]

Yes. No safety driver. Chandler, Arizona. Yeah.

[01:09:16]

By which year are we talking about? Oh, I even think that's possible by, like, 2020, 2021.

[01:09:22]

But level four, Chandler, Arizona — not level five, New York City. Level four meaning some very defined streets.

[01:09:32]

It works on very defined streets. And then, practically, these streets are pretty empty. If most of the cars on the streets are Waymos, Waymo can kind of change the definition of what driving is.

[01:09:44]

Right. If your self driving network is the majority of cars in an area, they only need to be safe with respect to each other and all the humans will need to learn to adapt to them. Now, go drive in downtown New York.

[01:09:57]

Well, yeah, I mean, you can already talk about autonomy, like, on farms — it already works great, because you can really just follow the GPS line.

[01:10:07]

So what does success look like for comma.ai?

[01:10:11]

What are the milestones where you can sit back with some champagne and say, we did it, boys and girls?

[01:10:19]

Um, well, it's never over.

[01:10:22]

Yeah, but surely you'll be able to drink champagne at some point. What counts as good?

[01:10:28]

What are some wins? A big milestone that we're hoping for by mid next year is profitability of the company. And we're going to have to revisit the idea of selling a consumer product, but it's not going to be like the comma one. When we do it, it's going to be perfect. openpilot has gotten so much better in the last two years.

[01:10:56]

We're going to have a few features. We're going to have one hundred percent driver monitoring. We're going to disable no safety features in the car. Actually — a cool project we're doing this week is we're analyzing the data set and looking for all the triggers from the manufacturer systems. We have a better data set on that than the manufacturers. How many... does Toyota have 10 million miles of real-world driving to know how many times their AEB triggered?

[01:11:21]

So let me give you — because you asked, right — financial advice. Yeah. Because I work with a lot of automakers, and one possible source of money for you, which I'd be excited to see you take on, is basically selling the data. Not selling it in the way of, "here you go, automaker," but creating — we've done this actually, not for money purposes, but you could do it for significant money purposes and make the world a better place — creating a consortium where automakers pay in and then they get access to the data.

[01:12:03]

And I think a lot of people are really hungry for that, and would pay a significant amount of money for it. Here's the problem with that. I like this idea all in theory. It'd be very easy for me to give them access to my servers, and we already have all the open-source tools to access this data; it's in a great format.

[01:12:19]

We have a great pipeline, but they're going to put me in the room with some business development guy.

[01:12:25]

Mm hmm. And I'm going to have to talk to this guy, and he's not going to know most of the words I'm saying. I'm not going to tolerate that. OK.

[01:12:35]

Yeah, no, no. But I think I agree with you — I'm the same way. But you just tell them the terms, and there's no discussion needed. If I could just tell them the terms —

[01:12:44]

Yeah. And then, like: all right, who wants access to my data? I will sell it to you for... let's say, I want a subscription.

[01:12:54]

Let's say a hundred K a month — a hundred K a month, I'll give you access to the data, a subscription. Yeah, I kind of came up with that number off the top of my head. If somebody sends me, like, a three-line email where it's like, "we would like to pay a hundred K a month to get access to your data, and we would agree to reasonable privacy terms for the people who are in the data," I would be happy to do it.

[01:13:16]

But that's not going to be the email. The email is going to be: hey, do you have some time in the next month where we can sit down and meet? I don't have time for that. We're moving too fast.

[01:13:24]

Yeah, you could politely respond to that email — not saying, I don't have any time for your bullshit, but saying: oh, well, unfortunately, these are the terms, and we've brought the cost down for you in order to minimize the friction of the communication. Here's — whatever it is — one, two million dollars a year, and you have access. But it's not like I'm going to get that email. Like, OK, am I going to reach out? Am I going to hire a business development person who's going to reach out to the automakers?

[01:13:52]

No way. Yeah, OK, I got you.

[01:13:55]

If they reach out to me, I'm not going to ignore the email. I'll come back with something. Yeah, if you're willing to pay for access to the data, I'm happy to set that up. That's worth my engineering time.

[01:14:04]

That's actually quite an insightful view. You're right — probably because many of the automakers are quite a bit old school. Yeah, they will need to reach out, and they want it, but there will need to be some communication.

[01:14:16]

Mobileye, circa 2015, had the lowest R&D spend of any chipmaker, like, per person. And you look at all the people who work for them, and it's all business development people, because the car companies are impossible to work with.

[01:14:31]

Yeah, so you have no patience for that. And you're legit annoyed by it, huh? I have things to do.

[01:14:37]

Right. Like, I don't mean to be a dick and say I don't have patience for that, but that stuff doesn't help us with our goal of winning self-driving cars.

[01:14:47]

If I want money in the short term, I'd show off, like, the actual learning tech that we have. It's somewhat sad —

[01:14:56]

it's years and years ahead of everybody else's. Well, maybe not Tesla's — I think Tesla has similar stuff to us, actually. But when you compare it to, like, what the Toyota Research Institute has... they're not even close to what we have. No comment.

[01:15:10]

But I also have to take your comments — I intuitively believe you, but I have to take it with a grain of salt. Because, I mean, you are an inspiration, because you basically don't care about a lot of things that other companies care about. You don't try to bullshit, in a sense — make up stuff to drive a valuation. You're very, very real, and you're trying to solve the problem, and I admire that a lot.

[01:15:38]

What I can't necessarily fully trust you on, with respect, is how good it is. It is good. I can only... but I also know how bad the others are.

[01:15:48]

So — trust, but verify, right? I'll say two things about that.

[01:15:54]

One is: try getting a 2020 Corolla and try openpilot 0.6 when it comes out next month. I think already you'll look at this and you'll be like —

[01:16:06]

This is already really good.

[01:16:07]

And then — I could be doing that all with hand labelers, all with, like, the same approach that Mobileye uses.

[01:16:14]

When we release a model that no longer has the lanes in it, that only outputs a path.

[01:16:21]

then think about how we did that with machine learning. And that's going to be in openpilot — that's going to be in openpilot before 1.0. When you see that model, you'll know that everything I'm saying is true, because how else did I get that model? Good. I told you about the simulator, right?

[01:16:36]

That's super exciting. That's super exciting.

[01:16:38]

And, um — but, like, you know, I listened to your talk with Kyle. Kyle was originally building the aftermarket system.

[01:16:46]

And he gave up on it because of technical challenges. Yeah.

[01:16:51]

Because of the fact that he was going to have to support 20 to 50 cars — we support 45. Because what is he going to do when the manufacturer's AEB system triggers? We have alerts and warnings to deal with all of that, on all the cars. And how is he going to formally verify it? Well, I've got 10 million miles of data. It's probably better verified than the spec.

[01:17:09]

Yeah, I'm glad you're here talking to me — I'll remember this day. Yes, it's interesting. If you look at Kyle, from Cruise — I'm sure they have a large number of business development folks.

[01:17:23]

He's working with GM. You could look at Argo AI, working with Ford. It's interesting, because the chances that you fail business-wise — like, go bankrupt — are pretty high. Yeah. And yet, as the underdog, you're actually taking on the problem. That's really inspiring. I mean —

[01:17:44]

Well, I have a long-term way for me to make money, too. And one of the nice things, when you really take on the problem — which is my hope for Autopilot, for example — is that ways to make money or create value that you don't expect will pop up.

[01:18:00]

Oh, I've known how to do it since, kind of, 2017 — that's the first time I said it. Which part? How to do it?

[01:18:06]

Our long-term plan is to be a car insurance company.

[01:18:08]

Insurance? Yeah, I love it. Yeah. I make driving twice as safe.

[01:18:13]

Not only that, I have the best data set to know who, statistically, is the safest driver. And, oh — we see you, we see you driving unsafely.

[01:18:20]

We're not going to insure you. And that causes, like, a bifurcation in the market, because the only people who can't get my insurance are the bad drivers. Geico can insure them — their premiums are crazy

[01:18:30]

high; our premiums are crazy low. We win car insurance, we take over that whole market. OK — so if we win. If we win.

[01:18:37]

But that's — I'm saying, like, how do you turn comma into a ten-billion-dollar company? It's that. That's right. So you, Elon Musk... who else is thinking like this and working like this, in your view? Who are the competitors? Are there people seriously... I don't think anyone that I'm aware of is seriously taking on lane keeping, you know, as a huge business that eventually turns into full autonomy, that then creates —

[01:19:06]

yeah — that creates other businesses on top of it and so on. Think insurance, think all kinds of ideas like that. Do you know anyone else who thinks like this?

[01:19:16]

Not really. That's interesting. I mean, my sense is everybody will turn to that in, like, four or five years — like Ford — once the autonomy bet doesn't pan out.

[01:19:27]

But at this time, Elon's the only one. By the way, he paved the way for all of us. I would not be doing comma.ai today if it were not for those conversations with Elon, and if it were not for him saying — I think he said, like — well, obviously we're not going to use lidar.

[01:19:45]

We use cameras; humans use cameras. So what do you think about that?

[01:19:49]

How important is lidar? Everybody else is using lidar. What are your thoughts on his provocative statement that lidar is a crutch?

[01:19:57]

See, sometimes he'll say dumb things, like the driver monitoring thing, but sometimes he'll say absolutely, completely, 100 percent obviously true things.

[01:20:04]

Yeah, of course lidar is a crutch. It's not even a good crutch. They're not even using it... well, they're using it for localization. Yeah. Which isn't good in the first place, if you have to localize your car to centimeters in order to drive. They're currently not doing much machine learning on top of the lidar data, meaning, like, to help you in the general task of perception.

[01:20:29]

The main goal of those lidars on those cars, I think, is actually localization more than perception — or at least that's what they use them for. Yeah, that's true. If you want to localize to centimeters, you can't use GPS. The fanciest GPS in the world can't do it, especially if you're under tree cover and stuff. With lidar you can do it pretty easily.

[01:20:44]

I mean, in some sense they're using it for perception, but they're certainly not... which is to say, they're not fusing it well with vision.

[01:20:55]

They do use it for perception.

[01:20:57]

I'm not saying they don't use it for perception, but the thing is, they have vision-based and radar-based perception systems as well. You could remove the lidar and keep around a lot of the dynamic object perception. But if you want centimeter-accurate localization, good luck doing that with anything else. So what should Cruise, Waymo do?

[01:21:19]

What would be your advice to them now?

[01:21:22]

I mean, Waymo is actually... I mean, they're serious. Waymo, out of all of them, is —

[01:21:29]

quite serious about the long game.

[01:21:32]

If level five requires 50 years, I think Waymo will be the only one left standing at the end, given the financial backing that they have — the beaucoup Google bucks.

[01:21:45]

I'll say nice things about both Waymo and Cruise.

[01:21:49]

Let's do it. Nice is good. Waymo is by far the furthest along with technology. Waymo has a three-to-five-year lead on all the competitors. If the Waymo-looking stack works — hmm, maybe a three-year lead. If the Waymo-looking stack works, they have a three-year lead.

[01:22:09]

Now, I argue that Waymo has spent too much money to recapitalize, to gain back their losses, in those three years. Also, self-driving cars have no network effect like that.

[01:22:19]

Yeah, Uber has a network effect. You have a market: you have drivers and you have riders. Self-driving cars — you have capital and you have riders. There's no network effect. If I want to blanket a new city in self-driving cars, I buy the off-the-shelf Chinese-knockoff self-driving cars and I buy enough for the city. I can't do that with drivers. And that's why Uber has a first-mover advantage that no self-driving car company will.

[01:22:38]

Well, can you disentangle that a little bit? Uber — you're not talking about Uber the autonomous vehicle company. No. You're talking about Uber the ride-sharing company.

[01:22:47]

OK. Yeah, say I open for business in Austin, Texas. I need to attract both sides of the market. I need to both get drivers on my platform and riders on my platform, and I need to keep them both sufficiently happy. Right? Riders aren't going to use it if it takes more than five minutes for an Uber to show up. Drivers aren't going to use it if they have to sit around all day and there are no riders.

[01:23:08]

So you have to carefully balance the market. And whenever you have to carefully balance a market, there's a great first-mover advantage, because there's a switching cost for everybody, right? The drivers and the riders would have to switch at the same time.

[01:23:20]

Let's even say that — let's say "Luber" shows up, and Luber somehow is even better. You know: we've done it more efficiently.

[01:23:33]

Right — Luber only takes five percent of a cut instead of the ten percent that Uber takes. No one is going to switch, because the switching cost is higher than that five percent. So in markets like that, you have a first-mover advantage.

[01:23:45]

Yeah, autonomous vehicles of the level five variety have no first mover advantage.

[01:23:51]

If the technology becomes commoditized — say I want to go to a new city — look at the scooters. It's going to look a lot more like scooters.

[01:23:58]

Every person with a checkbook can blanket a city in scooters.

[01:24:02]

And that's why you have ten different scooter companies. Which one is going to win? It's a race to the bottom.

[01:24:06]

It's a terrible market to be in, because there's no moat. And scooters don't get a say in whether they want to be bought and deployed to a city or not.

[01:24:15]

Right. So it's not like, yeah, we're going to entice the scooters with subsidies and deals.

[01:24:19]

And so whenever you have to invest that capital, it doesn't come back.

[01:24:26]

Is that your main criticism of the Waymo approach?

[01:24:28]

Oh, I'm saying even if it does technically work — even if it does technically work — that's a problem. Yeah... I don't know if I would say that. I would say you're probably right; I haven't even thought about that.

[01:24:41]

But I would say the biggest challenge is the technical approach — Waymo's, Cruise's — and not just the technical approach, but the creating of value.

[01:24:51]

I still don't understand how you beat Uber — the human-driven cars — financially.

[01:25:01]

It doesn't make sense to me that people would rather get an autonomous vehicle. I don't understand how you make money. In the long term, yes — like, real long term — but it just feels like there's too much capital investment needed.

[01:25:16]

Oh, and they're going to be worse than Ubers, because they're going to stop for every little thing everywhere.

[01:25:22]

Let me say my nice thing about Cruise — wait, that was my nice thing about Waymo: that they're three years technically ahead of everybody, their tech stack is great.

[01:25:31]

My nice thing about Cruise is: GM buying them was a great move for GM. For one billion dollars,

[01:25:38]

GM bought an insurance policy against Waymo.

[01:25:43]

Cruise is three years behind Waymo.

[01:25:47]

That means Google will get a monopoly on the technology for at most three years — if the technology works.

[01:25:55]

And you might not even be right about the three years — it might be less. Maybe less. Cruise actually might not be that far behind. I don't know how much Waymo has waffled around, or how much of it actually is just that long tail. Yeah. OK.

[01:26:06]

That's the best you can say in terms of nice things? That's more of a nice thing for GM — that it was a smart insurance policy. It's a smart insurance policy.

[01:26:16]

I mean, I think that's... I can't see Cruise working out any other way. For Cruise to leapfrog

[01:26:23]

Waymo would really surprise me. Yeah. So let's talk about the underlying assumptions of everything. We're not going to leapfrog Tesla. Tesla would have to seriously mess up for us to... Because — OK, so the way you leapfrog, right, is you come up with an idea, or you take a direction, perhaps secretly, that the other people aren't taking.

[01:26:48]

And so Cruise, Waymo — even Aurora, Zoox — it's all the same stack.

[01:26:56]

They're all the same codebase, even. They're all the same DARPA Urban Challenge codebase.

[01:27:01]

So the question is: do you think there's room for brilliance and innovation that will change everything? Like — OK, I'll give you examples. It could be a revolution in mapping, for example, that allows you to map things, do HD maps of the whole world, all weather conditions, somehow really well. Or —

[01:27:28]

a revolution in simulation, to where everything you said before becomes incorrect.

[01:27:36]

That kind of thing — room for breakthrough innovation. Well, I'll say this: we divide driving into three problems, and I actually haven't solved the third yet, but I have an idea how to do it. So there's the static. The static driving problem is: assume you are the only car on the road. And this problem can be solved one hundred percent with mapping and localization.

[01:28:00]

This is why farms work the way they do. All you have to deal with is the static problem, and you can statically schedule your machines, right?

[01:28:06]

It's the same as, like, statically scheduling processes. You can statically schedule your tractors to never hit each other on their paths, because you know the speed they go at. So that's the static driving problem.
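A toy sketch of that static scheduling idea, assuming straight-line tractor paths at known constant speeds, could be an offline pairwise-distance check like this:

```python
# Toy illustration: with known paths and speeds, collision-freedom can
# be verified offline, before anything moves.
import numpy as np

def position(start, velocity, t):
    """Position of a tractor moving in a straight line at constant speed."""
    return start + velocity * t

def schedules_are_safe(tractors, horizon=100.0, dt=0.1, min_gap=3.0):
    """Offline check: no two tractors ever come within min_gap meters."""
    for t in np.arange(0.0, horizon, dt):
        pos = np.array([position(s, v, t) for s, v in tractors])
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                if np.linalg.norm(pos[i] - pos[j]) < min_gap:
                    return False
    return True

# Two tractors on parallel rows, 5 m apart: statically verified safe.
tractors = [
    (np.array([0.0, 0.0]), np.array([1.0, 0.0])),
    (np.array([0.0, 5.0]), np.array([1.0, 0.0])),
]
print(schedules_are_safe(tractors))  # True: the static problem solved offline
```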

[01:28:16]

Maps only help you with the static driving problem. Yeah.

[01:28:21]

A question about static driving: you've just made it sound like it's really easy.

[01:28:28]

How easy? Well — the whole drifting-out-of-lane thing: when Tesla drifts out of lane, it's failing on the fundamental static driving problem.

[01:28:38]

Teslas do drift out of lane. The static driving problem is not easy for the whole world.

[01:28:44]

The static driving problem is easy for one route — one route and one weather condition, with one state of lane markings and, like, no deterioration, no cracks in the road.

[01:28:57]

I'm assuming you have a perfect localizer. So that solves for the weather condition and the lane marking deterioration. But that's the problem: how do you build a perfect localizer? Perfect localizers are not that hard to build.

[01:29:07]

OK, come on now. — With lidar. — With lidar? Yeah, with lidar. OK.

[01:29:11]

Yeah, you use lidar. Like, use lidar to build a perfect localizer.

[01:29:15]

Building a perfect localizer without lidar is going to be hard. You can get 10 centimeters without lidar; you can get one centimeter with lidar.

[01:29:23]

I'm not as concerned about the one or 10 centimeters. I'm concerned that, every once in a while, you're just way off. Yeah.

[01:29:29]

So this is why you have to carefully make sure you're always tracking your position. You want to use lidar-camera fusion. But you can get the reliability of that system up to

[01:29:43]

a hundred thousand miles, and then you write some fallback condition where it's not that bad if you're way off, right? I think that you can get it to the point — it's like ASIL-rated — that you're never in a case where you're way off and you don't know it. Yeah.
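A minimal sketch of that fuse-and-fallback idea — a simple 1-D Kalman filter over lidar and GPS fixes that flags a fallback when its own uncertainty grows too large — might look like this; all numbers are illustrative:

```python
# Toy localizer: fuse fixes of different accuracy and declare a fallback
# when the filter can no longer vouch for its own estimate.
import numpy as np

class FusedLocalizer:
    def __init__(self, x0=0.0, var0=1.0):
        self.x, self.var = x0, var0          # position estimate and variance

    def predict(self, velocity, dt, process_var=0.05):
        self.x += velocity * dt
        self.var += process_var * dt         # uncertainty grows between fixes

    def update(self, measurement, meas_var):
        k = self.var / (self.var + meas_var)  # Kalman gain
        self.x += k * (measurement - self.x)
        self.var *= (1 - k)

    def needs_fallback(self, limit=0.5):
        # If 1-sigma uncertainty exceeds half a meter, stop trusting the
        # map-based plan and fall back to local-sensor-only behavior.
        return np.sqrt(self.var) > limit

loc = FusedLocalizer()
loc.predict(velocity=10.0, dt=0.1)
loc.update(measurement=1.02, meas_var=0.0001)  # lidar fix: ~1 cm accurate
loc.update(measurement=1.30, meas_var=1.0)     # GPS fix: meters of error
print(loc.x, loc.needs_fallback())
```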

[01:29:55]

OK, so this is brilliant. So that's the static. Static we can — especially with lidar and good HD maps — you can solve that problem.

[01:30:03]

Easily. It's so very typical for you to say something is easy. I got it — it's not as challenging as the other ones.

[01:30:13]

OK, well... OK, maybe it's obvious how to solve it. The third one is the hardest. And a lot of people don't even think about the third one, or even see it as different from the second one. So the second one is dynamic.

[01:30:22]

The second one is, like — the obvious example is, like, a car stopped at a red light. Right?

[01:30:26]

You can't have that car in your map because you don't know whether that car is going to be there or not.

[01:30:31]

So you have to detect that car in real time and then you have to do the appropriate action. Right.

[01:30:38]

Also, that car is not a fixed object. That car may move, and you have to predict what that car will do.

[01:30:44]

Right. So this is the dynamic problem. Yeah, so you have to deal with this. This involves, again, like — you're going to need models of other people's behavior. Are you including in that — I don't know what happens in the third one — but are you including in that your influence on people? Ah, that's the third one.

[01:31:04]

OK — that's the best. We call it the counterfactual. Yeah, I love that. I just talked to Judea Pearl, who's obsessed with counterfactuals, in fact. Oh, yeah. Yeah.

[01:31:14]

Um, so the static and the dynamic — our approach right now, for lateral, will scale completely to the static and dynamic. The counterfactual — the only idea I have for it yet, the thing that I want to do once we have all these cars, is I want to do reinforcement learning on the world.

[01:31:33]

I'm always going to turn the exploiter up to max — I'm not going to have them explore. But the only real way to get at the counterfactual is to do reinforcement learning, because the other agents are humans.

[01:31:44]

So that's fascinating that you break it down like that. I agree completely.

[01:31:48]

I've spent my whole life thinking about this. It's a beautiful way to describe it — and part of it is because you're slightly insane. Um — not my whole life.

[01:31:58]

Just the last four years. No, no — you have, uh, some nonzero percent of your brain as a madman, and that's a really good feature.

[01:32:08]

But there's a safety component to it that, I think, when you start talking about counterfactuals and so on, would just freak people out. How do you even start to think about it? In general, I mean — you've had some friction with NHTSA and so on. I am, frankly, exhausted by safety engineers: the prioritization of safety over innovation, to a degree that, in my view, kills safety in the long term.

[01:32:42]

So, the counterfactual thing — just actually exploring this world of how to interact with dynamic objects and so on — how do you think about safety?

[01:32:51]

You can do reinforcement learning without ever exploring. And I said that, like... so, in reinforcement learning there's usually, like, a temperature parameter, and your temperature parameter is how often you deviate from the argmax. I could always set that to zero and still learn, and I feel that you'd always want that set to zero on your actual system. Gotcha.
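For the temperature parameter being described, a minimal sketch might look like this — at temperature zero the policy is pure argmax, which is what you'd run on the actual system:

```python
# Small illustration (not openpilot code) of a temperature parameter:
# how often a policy deviates from the argmax.
import numpy as np

def sample_action(q_values, temperature, rng):
    """Sample an action from a softmax over Q-values.

    temperature > 0: occasionally deviates from the best action (explore).
    temperature == 0: always the argmax (pure exploitation) -- what you'd
    want on the actual car, while still learning from the outcomes.
    """
    q = np.asarray(q_values, dtype=float)
    if temperature == 0.0:
        return int(np.argmax(q))
    probs = np.exp(q / temperature)
    probs /= probs.sum()
    return int(rng.choice(len(q), p=probs))

rng = np.random.default_rng(0)
q = [1.0, 2.5, 0.3]
print(sample_action(q, temperature=0.0, rng=rng))  # always action 1
print(sample_action(q, temperature=1.0, rng=rng))  # sometimes deviates
```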

[01:33:11]

But the problem is, at first you don't know very much, and so you're going to make mistakes. So the learning — the exploration — happens through mistakes like that.

[01:33:18]

Yeah, but OK. So the consequences of a mistake. Yeah. Open pilot and autopilot are making mistakes left and right.

[01:33:25]

Yeah. We have seven hundred daily active users, a thousand weekly active users.

[01:33:30]

openpilot makes tens of thousands of mistakes a week. These mistakes have zero consequences. These mistakes are: oh — I wanted to take this exit and it went straight, so I'm just going to carefully touch the wheel. The humans catch them. And the human disengagement is labeling that reinforcement learning in a completely consequence-free way.

[01:33:53]

So driver monitoring is the way you ensure they keep... Yes — they keep paying attention. And say I gave you a billion dollars — would you be scaling it now?

[01:34:04]

Oh, I couldn't scale with any amount of money. I'd raise money if I had a way to scale. You're not sure? I don't know how to do it. Like, I guess I could sell it to more people, but I want to make the system better. Better — but I don't know.

[01:34:15]

But what's the messaging here? I got a chance to talk to Elon, and he basically said that the human factor doesn't matter — the human doesn't matter, because the system will perform; there will be, sorry to use the term, but, like, a singularity — a point where it gets just much better. And so the human won't really matter. But it seems like that human, catching the system when it gets into trouble, is the thing which will make something like reinforcement learning work.

[01:34:49]

So how do you think messaging — for Tesla, for you, for the industry in general — should change?

[01:34:55]

I think my messaging is pretty clear — at least, our messaging wasn't that clear in the beginning, and I do kind of fault myself for that. We are proud right now to be a level two system. We are proud to be level two. If we talk about level four, it's not with the current hardware. It's not going to be just a magical upgrade; it's going to be new hardware, it's going to be very carefully thought out. Right now, we are proud to be level two, and we have a rigorous safety model.

[01:35:20]

I mean — OK, "rigorous," who knows what that means — but we at least have a safety model, and we make it explicit. It's in safety.md in openpilot.

[01:35:28]

And it says... seriously though, it's the same damn deal.

[01:35:35]

Well, this is the safety model.

[01:35:38]

And I like to have conversations — like, you know, sometimes people will come to you and they're like, "your system's not safe." OK. Have you read my safety docs?

[01:35:47]

Would you like to have an intelligent conversation about this? And the answer is always no. They just, like, scream about how it runs Python.

[01:35:54]

OK — so, "it runs Python, Python's not real time." Python not being real time never causes disengagements. Disengagements are caused by the model. That code is QM. But the safety code says the following:

[01:36:06]

First and foremost, the driver must be paying attention at all times.

[01:36:10]

I don't. I do.

[01:36:11]

I still consider the software to be alpha software until we can actually enforce that statement, but I feel it's very well communicated to our users. Two more things.

[01:36:21]

One is, the user must be able to easily take control of the vehicle at all times. So if you step on the gas or brake with openpilot, it gives full manual control back to the user, or you press the cancel button. Two: the car will never react so quickly — we define "so quickly" to be about one second — that you can't react in time. And we do this by enforcing torque limits, braking limits, and acceleration limits.
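A minimal sketch of enforcing those limits, with hypothetical numbers rather than openpilot's actual values, might look like this:

```python
# Hypothetical limits -- illustrative only, not openpilot's constants.
MAX_STEER_TORQUE = 150       # torque units
MAX_TORQUE_RATE = 50         # max change per control step
MAX_ACCEL = 2.0              # m/s^2
MIN_ACCEL = -3.5             # m/s^2 (braking limit)

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def limit_commands(steer_request, accel_request, last_steer):
    """Clamp model outputs before they ever reach the CAN bus."""
    # Limit absolute steering torque...
    steer = clamp(steer_request, -MAX_STEER_TORQUE, MAX_STEER_TORQUE)
    # ...and how fast it can change, so the wheel can't be jerked.
    steer = clamp(steer, last_steer - MAX_TORQUE_RATE,
                  last_steer + MAX_TORQUE_RATE)
    accel = clamp(accel_request, MIN_ACCEL, MAX_ACCEL)
    return steer, accel

steer, accel = limit_commands(steer_request=900, accel_request=5.0, last_steer=0)
print(steer, accel)  # 50, 2.0 -- the raw requests are capped
```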

[01:36:48]

So we have, like, a torque limit that's way lower than Tesla's. This is another thing —

[01:36:55]

if I could tweak Autopilot, I would lower the torque limit,

[01:36:57]

and I would add driver monitoring. Because Autopilot can jerk the wheel hard. Yeah — ours can't.

[01:37:03]

We limit it. And all this code is open source, readable, and I believe now it's MISRA-compliant.

[01:37:11]

What does that mean?

[01:37:12]

MISRA is, like, the automotive coding standard.

[01:37:16]

At first, I... you know, I've been reading the rules lately, and I've come to respect them. They're actually written by very smart people. Yeah.

[01:37:24]

Brilliant people who actually have a lot of experience. They're sometimes a little too cautious, but in this case it pays off.

[01:37:33]

MISRA's written by, like, computer scientists, and you can tell by the language they use. They talk about, like, whether certain conditions in MISRA are decidable or undecidable.

[01:37:43]

You mean, like, the halting problem? And — yes. All right, you've earned my respect. I will read carefully what you have to say, and we want to make our code compliant with that.

[01:37:52]

So: proud level two. Beautiful. So you were the founder and CEO of comma.ai, then you were the head of research. What the heck are you now? What's your connection to comma.ai? The president?

[01:38:05]

But I'm one of those, like, unelected presidents of, like, a small dictatorship country — not one of those, like, elected presidents.

[01:38:11]

Oh, so like Putin when he was prime minister. Yeah, I got you. So what's the governance structure? What's the future of comma.ai? I mean, as a business — are you just focused on getting things right now, making some small amount of money in the meantime,

[01:38:31]

and then when it works, it works and you scale? Our burn rate is about 200K a month, and our revenue is about 100K a month.

[01:38:39]

So we need to 4x our revenue, but we haven't tried very hard at that yet.

[01:38:44]

And the revenue is basically selling stuff online. We sell stuff at shop.comma.ai. Is there other... well, so you'll have to figure that out. That's our only revenue.

[01:38:53]

But to me, that's, like, respectable revenue. We make it by selling products to consumers.

[01:38:59]

We're honest and transparent about what they are — unlike most, actually, level-four companies. Right — because you could easily start blowing smoke, like, overselling the hype and feeding into some fundraising: "oh, you're the guy, you're a genius because you hacked the iPhone." Oh, I hate that. I hate that. I could trade my social capital for more money. I did it once.

[01:39:23]

I almost regret doing it the first time. Well, on a small tangent: you seem to not like fame, and yet you're also drawn to fame. Where are you on that currently? Have you had some introspection, some soul-searching?

[01:39:43]

Yeah, I actually I've come to a pretty stable position on that. Like after the first time I realized that I don't want attention from the masses.

[01:39:53]

I want attention from people who I respect. Who do you respect, I can give a list of people. So are these like Elon Musk type characters?

[01:40:03]

Yeah. Or actually, you know what, I'll make it more broad than that, I won't make it about a person. I respect skill. I respect people who have skills.

[01:40:12]

Right.

[01:40:13]

And I would like to, like — not be so famous, but be, like, known among more people who have, like, real skills.

[01:40:23]

Who in the car world do you think has skill?

[01:40:29]

I do respect everyone who has skill. A lot of people at Waymo have skill, and I respect them.

[01:40:37]

I respect them as engineers.

[01:40:40]

Like, I think about all the times in my life where I've been, like, dead set on approaches and they turned out to be wrong. Yeah. So, I mean, I might be wrong here. I accept that. I accept that there's a decent chance that I'm wrong.

[01:40:53]

And actually, I mean, having talked to Chris Urmson, Sterling Anderson, those guys, I mean, I deeply respect Chris. I just admire the guy. Uh, he's legit.

[01:41:03]

When you drive a car through the desert when everybody thinks it's impossible — that's legit.

[01:41:08]

And then I also really respect the people who are, like, writing the infrastructure of the world — like Linus Torvalds and, like, Chris Lattner. They're doing the real work.

[01:41:15]

I know — doing the real work.

[01:41:17]

The docs that Chris wrote... and you realize, especially when they're humble, it's like — oh, you guys, we're just using your stuff.

[01:41:26]

Oh, yeah — all the hard work that's here, it's incredible. What do you think of Anthony Levandowski? He's another mad genius. Sharp guy.

[01:41:38]

Oh, yeah. Do you think he might, long term, become a competitor?

[01:41:44]

To comma? Well, so I think that he has the other right approach. I think that right now there are two right approaches: one is what we're doing, and one is what he's doing.

[01:41:54]

Can you describe it? I think it's called Pronto, the company he started. Do you know what the approach is? I actually don't know.

[01:41:59]

Embark is also doing the same sort of thing. The idea is almost that you want to.

[01:42:03]

So if you're I can't partner with Honda and Toyota.

[01:42:08]

Honda and Toyota are, like, four-hundred-thousand-person companies. It's not even a company at that point. I don't personify it; I think of it like an object.

[01:42:18]

But a trucker drives for a fleet, maybe, that has, like... some truckers are independent; some truckers drive for fleets with a hundred trucks. There are tons of independent trucking companies out there. Start a trucking company and drive your costs down — or figure out how to drive down the cost of trucking.

[01:42:40]

Another company that I really respect is Nauto. I respect their business model. Nauto sells a driver-monitoring camera, and they sell it to fleet owners.

[01:42:51]

If I owned a fleet of cars and I could pay forty bucks a month to monitor my employees — this is going to, like, reduce accidents

[01:43:01]

eighteen percent. Yeah. So, in this space, that is, like, the business model that I most respect, because they're creating value today.

[01:43:11]

Yeah.

[01:43:12]

Which is — that's a huge one: how do we create value today? With some of the lane-keeping stuff, it's huge. And it sounds like you're creeping in, or going full steam ahead, on the driver monitoring too, which I think is actually where the short-term value is, if you can get it right. I'm still... I'm not a huge fan of the blanket statement that everything needs driver monitoring — I agree with that statement, actually — but it usually misses the point that getting the experience of it right is not trivial.

[01:43:38]

Oh, no, not at all. In fact — so right now we have, I think, a timeout that depends on the speed of the car.

[01:43:46]

But we want it to depend on, like, the scene state. If you're on, like, an empty highway, it's very different if you don't pay attention than if, like, you're coming up to a traffic light. And long term, it should probably learn from the driver. Do you watch a lot of the video?
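A small sketch of a scene-dependent attention timeout, with invented numbers, might look like this:

```python
# Hypothetical driver-monitoring timeout that depends on scene state
# rather than being fixed: long leash on an empty highway, short leash
# approaching a traffic light. All numbers are illustrative.
def attention_timeout(speed_mps, scene):
    """Seconds of looking away tolerated before alerting the driver."""
    base = 6.0 if scene == "empty_highway" else 2.0
    if scene == "approaching_traffic_light":
        base = 0.5                     # almost no tolerance here
    # Faster travel shrinks the budget further.
    return base * min(1.0, 15.0 / max(speed_mps, 1.0))

print(attention_timeout(30.0, "empty_highway"))             # ~3.0 s
print(attention_timeout(10.0, "approaching_traffic_light")) # 0.5 s
```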

[01:44:04]

We've built a smartphone detector just to analyze how people are using smartphones, and people are using them very differently — there are, like, different texting styles.

[01:44:14]

I haven't watched nearly enough of the video. Like, I've got millions of miles of people driving cars at this moment.

[01:44:20]

I spend a large fraction of my time just watching videos, because it never fails... I've never failed, from a video-watching session, to learn something I didn't know before. Actually, I usually — like when I eat lunch, I'll sit, especially when the weather's good, and just watch pedestrians, with an eye to understand, like, from a computer vision perspective. Just to see: can this be modeled? Can you predict what decisions are made? And there are so many things that we don't understand.

[01:44:49]

This is what I mean by the state vector. Yeah.

[01:44:51]

I'm always trying to think... I'm understanding it in my human brain — how do we convert that into... how hard is the learning problem here, I guess, is the fundamental question. So, something that, from a hacking perspective, always comes up, especially with folks — well, first, the most popular question is the trolley problem, right? Now, that's not a serious problem, but there are some ethical questions, I think, that arise.

[01:45:22]

Maybe you want to... do you think there are any serious ethical questions there?

[01:45:27]

We have a solution to the trolley problem at comma.ai.

[01:45:30]

Well, so there is actually an alert in our code: "ethical dilemma detected." It's not triggered yet — we don't know how yet to detect the ethical dilemmas — but we're a level two system, so we're going to disengage and leave that decision to the human.

[01:45:41]

You're such a troll. No, but the trolley problem deserves to be trolled. Yeah. That's a beautiful answer, actually.

[01:45:48]

I know — I gave it to someone who was, like... sometimes people ask — like you asked — about the trolley problem, like you can have a serious discussion about it.

[01:45:54]

Like, when you get someone who's, like, really, like, earnest about it. Because it's the kind of thing where, if you ask a bunch of people in an office whether we should use a SQL stack or a NoSQL stack — if they're not that technical, they have no opinion. But if you ask them what color they want to paint the office, everyone has an opinion on that.

[01:46:10]

And that's why the trolley problem is... I mean — that's a beautiful answer, yeah: you're able to detect the problem, and you're able to pass it on to the human.

[01:46:18]

Well, I've never heard anyone say it. It's a nice escape route. OK — proud level two. Proud level two. I love it.

[01:46:27]

So the other thing that people, you know, have some concern about with A.I. in general is hacking.

[01:46:34]

So how hard is it, do you think, to hack an autonomous vehicle — either through physical access or through the more, sort of, popular ways: all these adversarial examples on the sensors? The adversarial examples —

[01:46:46]

One, you want to see some adversarial examples that affect humans, right?

[01:46:51]

Oh, well — there used to be a stop sign here, but I put a black bag over the stop sign, and then people ran it. Adversarial. Yeah, right.

[01:47:00]

Like, there are tons of adversarial examples for humans too. To the question in general about, like, security:

[01:47:08]

I saw something just came out today — and there are always such hyped headlines — about how Navigate on Autopilot was fooled by a spoof to take an exit. Right. At least, that's all they could do: take an exit.

[01:47:20]

If your car is relying on GPS in order to have a safe driving policy, you're doing something wrong. If you're relying on wireless communication — and this is why V2V is such a terrible idea — V2V now relies on both parties getting the communication right.

[01:47:36]

This is not even... so, I think of safety — security is like a special case of safety, right?

[01:47:45]

Safety is, like: we put a little piece of caution tape around the hole so that people won't walk into it by accident. Security is: I put a ten-foot fence around the hole, with barbed wire on top and stuff, so you actually physically cannot climb into it. Right?

[01:47:58]

So, like, if you're designing systems that are like unreliable, they're definitely not secure.

[01:48:04]

Your car should always do something safe using its local sensors and then the local sensors should be hard wired.

[01:48:11]

And then, could somebody hack into your CAN bus and turn your steering wheel and hit your brakes? Yes — but they could do that before openpilot was there, too.

[01:48:19]

Let's think out of the box on some things. Do you think teleoperation has a role in any of this — remotely stepping in and controlling the cars?

[01:48:30]

No. I think that if the safety operation, by design, requires a constant link to the cars, it doesn't work. And that's the same argument I use for V2I and V2V.

[01:48:47]

Well, there's a lot of non-safety-critical stuff you can do with V2I. I like V2I — I like V2I more than V2V, because V2I is already here: I already have internet in the car, right? There's a lot of great stuff you can do with V2I.

[01:48:59]

Um, like, for example — well, I already have it: Waze. Waze can route me around traffic jams. That's a great example of V2I. Mm hmm.

[01:49:09]

OK, so the car automatically talks to that same service. It's improving the experience, but it's not a fundamental fallback for safety. No.

[01:49:16]

If any of your things that require wireless communication are rated more than, um — like, have an ASIL rating — you shouldn't be doing it. You previously said that life is work.

[01:49:30]

Yeah. And you don't do anything to relax. So how do you think about hard work.

[01:49:37]

Well — what is it, what do you think it takes to accomplish great things? There are a lot of people saying that there needs to be some balance; you know, in order to accomplish great things, you need to take some time off, to reflect and so on. And then some people are just insanely working, burning the candle at both ends. How do you think about that?

[01:49:57]

I think I was trolling in the Siraj interview when I said that. Off camera, right before, we spoke a little bit.

[01:50:03]

We kind of hit it off. But it's a joke, right? Like, "I do nothing to relax" — look where I am. I'm at a party, right?

[01:50:09]

Yeah. Yeah, that's true.

[01:50:11]

So, of course. When I say that life is work, though, I mean that, like, I think that what gives my life meaning is work. I don't mean that every minute of the day you should be working. I actually think that's not the best way to maximize results. I think that if you're working twelve hours a day, you should be working smarter and not harder.

[01:50:31]

Well, so work gives you meaning. For some people, other sources of meaning are personal relationships — like family and so on. You've also, in that interview with Siraj — or maybe just the trolling — mentioned that one of the things you look forward to in the future is AI girlfriends. Yes. So... that's a topic that I'm very much fascinated by — not necessarily girlfriends, but just forming a deep connection with AI. What kind of system do you imagine when you say AI girlfriend, whether you were trolling or not?

[01:51:04]

No, that one I'm very serious about, and I'm serious about that on both a shallow level and a deep level. I think that VR brothels are coming soon and are going to be really cool.

[01:51:14]

It's not cheating if it's a robot.

[01:51:16]

I can see the slogan already. But there's a... I don't know if you've watched it, just watch the Black Mirror episode. I watched the last one. Yeah, yeah, yeah.

[01:51:27]

Oh, the, uh... which one? Oh, the one where there's two friends having sex with each other in... oh, in the VR, in the game. Yeah, it's just two guys, but yeah, one of them was, was a female. Yeah. Yeah.

[01:51:43]

It's just another mind-blowing concept, that in VR you don't have to be the form you're in; you can be two animals having sex. Sex is weird.

[01:51:54]

I mean honestly the software maps the nerve endings. Yeah.

[01:51:58]

Yeah. They, they sweep a lot of the fascinating, really difficult technical challenges under the rug. Like, assuming it's possible to do the mapping of the nerve endings, then... I wish I saw that. The way they did it, with the little, like, stem unit on the head? That would be amazing.

[01:52:13]

So on a shallow level, like, you could set up almost a brothel with, like, RealDolls and Oculus Quests. Right, right. Some good software. I think it'd be a cool novelty experience.

[01:52:25]

But on a deeper, like, emotional level, I mean.

[01:52:31]

Yeah, I would really like to fall in love with the machine.

[01:52:34]

Do you see yourself having a long-term relationship, of the kind of monogamous relationship that we have now, with a robot, with a system, even, not even just a robot?

[01:52:49]

So I think about maybe my ideal future. When I was 15, I read Kurzweil's early writings on the singularity, and, like, that AI is going to surpass human intelligence massively.

[01:53:10]

He made some Moore's Law based predictions that I mostly agree with. And then I really struggled for the next couple of years of my life, like, why should I even bother to learn anything?

[01:53:19]

It's all going to be meaningless when the machines show up, right?

[01:53:24]

Maybe. Maybe what?

[01:53:25]

I was that young, I was still a little bit more pure and really, like, clung to that. And then I'm like, well, the machines ain't here yet, and I seem to be pretty good at this stuff. Let's, let's try my best, you know; like, what's the worst that happens? But the best possible future I see is me sort of merging with the machine, and the way that I personify this is in a long-term monogamous relationship with the machine.

[01:53:49]

Oh, you don't think there's room for another human in your life if you really, truly merge with the machine?

[01:53:55]

I mean, I see merging... I see, like, the best interface to my brain as the same relationship interface, the place to merge with an AI.

[01:54:06]

Right. What does that merging feel like?

[01:54:09]

I've seen, I've seen couples who have been together for a long time, and I almost think of them as one person; couples who spend all their time together.

[01:54:17]

And that's fascinating.

[01:54:19]

You're actually putting... what does that merging actually look like? It's not just a neural link, like a lot of people imagine, just an efficient link, a search link to Wikipedia or something. I don't believe in that. But you're saying it's more that it's the same kind of relationship you have with other humans. That's what merging looks like. That's, that's pretty...

[01:54:40]

I don't believe that link is possible. I think that that link, it's like, oh, I'm going to download Wikipedia right to my brain. Yeah.

[01:54:46]

My reading speed is not limited by my eyes; my reading speed is limited by my inner processing loop. And to, like, bootstrap that... it sounds kind of unclear how to do it, and horrifying.

[01:54:58]

But if I'm with somebody, and I'll use the word somebody, who is making a super sophisticated model of me and then running simulations on that model... I'm not going to get into the question of whether the simulations are conscious or not.

[01:55:12]

I don't really want to know what it's doing.

[01:55:13]

Um, but it's using those simulations to play out hypothetical futures for me, deciding what things to say to me, to guide me along a path. And that's how I envision it.

[01:55:25]

So on that path to AI of superhuman-level intelligence, you've mentioned that you believe in the singularity, that the singularity is coming. Again, could be trolling, could be not; could be part of it, since all trolling has truth in it.

[01:55:39]

I don't know what that means anymore. What is the singularity?

[01:55:42]

Yeah. So that's, that's really the question. How many years do you think before the singularity? What form do you think it will take? Does that mean fundamental shifts in capabilities of AI? Does it mean some other kind of idea?

[01:55:54]

Um, uh, maybe it's just my roots, but... so, I can buy a human being's worth of compute for like a million bucks today. It's about one TPU pod, v3. I want... like, I think they claim a hundred petaflops. That's being generous; I think humans are actually more like twenty. So that's like five humans. That's pretty good. Google needs to sell their TPUs.

[01:56:11]

Um, but I could buy... I could buy a stack of, like, I'd buy 1080 Tis, build a data center full of them, and for a million bucks I can get a human's worth of compute.

[01:56:24]

But when you look at the total number of flops in the world... when you look at human flops, which go up very, very slowly with the population, and machine flops, which go up exponentially... it's still nowhere near.

[01:56:38]

I think that's the key thing to talk about for when the singularity happens: when most flops in the world are silicon and not biological, that's kind of the crossing point. Like, they are now the dominant species on the planet.
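As a back-of-the-envelope sketch of that crossing point, here is a toy model; every constant is an illustrative assumption (20 petaflops per human, roughly the 2019 world population, a guessed machine-compute total and doubling period), not a measured figure:

```python
# Toy model of the "silicon flops vs. biological flops" crossover.
# All constants are illustrative assumptions, not measurements.

HUMAN_FLOPS = 20e15        # assume ~20 petaflops per human brain
POPULATION = 7.7e9         # roughly the 2019 world population
POP_GROWTH = 0.01          # population grows slowly, ~1% per year

machine_flops = 1e21       # guessed total silicon flops today
DOUBLING_YEARS = 2.0       # assumed Moore's-Law-style doubling period

year = 2019
bio_flops = POPULATION * HUMAN_FLOPS
while machine_flops < bio_flops:
    year += 1
    bio_flops *= 1 + POP_GROWTH                  # biological flops: slow growth
    machine_flops *= 2 ** (1 / DOUBLING_YEARS)   # machine flops: exponential

print(f"Under these assumptions, the crossover lands around {year}.")
```

Different assumptions move the date by decades, which is the point: the crossover is driven almost entirely by the machine-side exponential.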

[01:56:51]

And just looking at how technology is progressing, when do you think that could possibly happen? Is it going to happen in your lifetime? Oh, yeah, definitely in my lifetime. I've done the math.

[01:57:00]

I like 2038 because it's the Unix timestamp rollover.
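The joke lands on a real date: a signed 32-bit time_t counts seconds since 1970-01-01 UTC and overflows at 2^31 - 1 seconds, which falls on January 19, 2038:

```python
# The Unix "Year 2038" rollover: a signed 32-bit time_t overflows here.
from datetime import datetime, timezone

max_time_t = 2**31 - 1  # 2147483647 seconds since the 1970 epoch
print(datetime.fromtimestamp(max_time_t, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```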

[01:57:06]

Yeah, beautifully put. So you've, um, you've said that the meaning of life is to win. If you look five years into the future, what does winning look like?

[01:57:19]

So... there's a lot of... I can go into, like, technical depth on what I mean by that, to win. It may not mean... I was criticized for that in the comments.

[01:57:34]

Like, doesn't this guy want to, like, save the penguins in Antarctica or like, you know, listen to what I'm saying.

[01:57:41]

I'm not talking about like I have a yacht or something.

[01:57:43]

Yeah, I am an agent. I am put into this world, and I don't really know what my purpose is. But if you're a reinforcement learning agent, if you're an intelligent agent and you're put into a world, what is the ideal thing to do?

[01:57:59]

Well, the ideal thing, mathematically... you can go back to, like, Schmidhuber's theories about this... is to build a compressive model of the world, to build a maximally compressive model; to explore the world such that your exploration function maximizes the derivative of compression of the past. He has a paper about this. And, like, I took that kind of as, like, a personal goal function.
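For reference, the idea George is gesturing at can be written down compactly; the notation below is chosen here for illustration rather than copied from a specific paper, but it follows Schmidhuber's "compression progress" formulation of curiosity:

```latex
% h(\le t): the agent's sensory history up to time t
% C(p, h): bits a compressor/world-model p needs to encode history h
% p_{t-1}, p_t: the compressor before and after learning at step t
% Intrinsic (curiosity) reward = compression progress, the discrete
% derivative of how compressible the past has become:
r_{\mathrm{int}}(t) = C\bigl(p_{t-1}, h(\le t)\bigr) - C\bigl(p_t, h(\le t)\bigr)
% The exploration policy picks actions to maximize expected future
% intrinsic reward, i.e. it seeks experiences that improve the model most.
```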

[01:58:21]

So what I mean by to win... I mean, like, maybe, maybe this is religious, but, like, I think that in the future I might be given a real purpose, or I may decide this purpose myself. And then at that point, now I know what the game is, and I know how to win. I think right now I'm still just trying to figure out what the game is.

[01:58:37]

But once I know... So you have, you have imperfect information, you have a lot of uncertainty about the reward function, and you're discovering it. Exactly, discovering what the purpose is. That's, that's a better way to put it. So the purpose is to maximize it while you have a lot of uncertainty around it, and you're both reducing the uncertainty and maximizing it at the same time. And so that's at the technical level.
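A toy illustration of "reducing the uncertainty and maximizing at the same time" is Thompson sampling on a bandit; this is a standard trick, offered here as a sketch of the idea rather than anything George describes concretely:

```python
# Thompson sampling on a 3-armed Bernoulli bandit: the agent maximizes
# reward while its posterior uncertainty about the reward function shrinks.
import random

true_rewards = [0.3, 0.5, 0.7]  # hidden reward probabilities per arm
wins = [1, 1, 1]                # Beta(1, 1) prior for each arm
losses = [1, 1, 1]

for _ in range(10_000):
    # Sample a plausible reward rate for each arm from its posterior,
    # then act greedily on the sample (exploration and exploitation in one).
    samples = [random.betavariate(wins[i], losses[i]) for i in range(3)]
    arm = samples.index(max(samples))
    if random.random() < true_rewards[arm]:
        wins[arm] += 1
    else:
        losses[arm] += 1

print("posterior means:", [round(wins[i] / (wins[i] + losses[i]), 3) for i in range(3)])
```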

[01:59:00]

What is the if you believe in the universal prior, what is the universal reward function?

[01:59:05]

That's the better way to put it. So, then... it's interesting. I think I speak for everyone in saying that I wonder what that reward function is for you, and I look forward to seeing that in five years and ten years.

[01:59:23]

I think a lot of people, including myself, are cheering you on, man. So I'm happy you exist, and I wish you the best of luck. Thanks for talking today. Thank you. This was a lot of fun.