[00:00:00]

Today, we have Elon Musk on. Thank you for joining us. Thanks for having me. So we want to spend the time today talking about your view of the future and what people should work on. To start off: you famously said, when you were younger, there were five problems that you thought were most important for you to work on. If you were 22 today, what would be the five problems that you would think about working on?

[00:00:23]

Well, I think if somebody is doing something that is useful to the rest of society, that's a good thing. It doesn't have to change the world. You know, if you do something that has high value to people. And frankly, even if it's something like just a little game, or some improvement in photo sharing or something, it does a small amount of good for a large number of people.

[00:00:52]

I mean, I think that's fine. Stuff doesn't need to change the world just to be good. But in terms of things that I think are most likely to affect the future of humanity: I think AI is probably the single biggest item in the near term that's likely to affect humanity.

[00:01:12]

So it's very important that we have the advent of AI in a good way. It is something that, if you could look into a crystal ball and see the future, you would like that outcome, because it is something that could go wrong, as we've talked about many times. So we really need to make sure it goes right. I think working on AI and making sure it's a great future,

[00:01:45]

that's the most important thing, I think, right now, the most pressing item. Then second, I think, is genetics. If you can actually solve genetic diseases, if you can prevent dementia or Alzheimer's or something like that with genetic reprogramming, that would be wonderful. So I think genetics might be the sort of second most important item. And then I think having a high-bandwidth interface to the brain. Like, we're currently bandwidth-limited.

[00:02:23]

We have a digital tertiary self in the form of our email capabilities, our computers, phones, applications. We're effectively superhuman, but we're extremely bandwidth-constrained in that interface between the cortex and that tertiary digital form of yourself. Helping solve that bandwidth constraint would be, I think, very important for the future as well.

[00:02:47]

So one of the most common questions I hear ambitious young people ask is: I want to be the next Elon Musk, how do I do that? Obviously, the next Elon Musk will work on very different things than you did. But what did you do when you were younger that you think set you up to have a big impact?

[00:03:08]

Well, I should say that I did not expect to be involved in all these things. The five things that I thought about at the time in college, quite a long time ago, 25 years ago:

[00:03:23]

you know, making life multi-planetary, accelerating the transition to sustainable energy, the Internet broadly speaking, and then genetics and AI. I didn't expect to be involved in all of those things. At the time in college, I actually thought helping with the electrification of cars was how I would start out. And that's actually what I worked on as an intern: advanced ultracapacitors, to see if there would be a breakthrough relative to batteries for energy storage in cars.

[00:04:01]

And then when I came out to go to Stanford, that's what I was going to do my grad studies on: working on advanced energy storage technologies for electric cars. And I put that on hold to start an Internet company in '95, because there does seem to be a time for particular technologies, when they're at a steep point in the inflection curve. I didn't want to be doing a PhD at Stanford and then watch it all happen.

[00:04:33]

And I wasn't entirely certain that the technology I'd be working on would actually succeed. You can get a doctorate on many things that ultimately do not have practical bearing on the world. And I really was just trying to be useful. That's the optimization: what can I do that would actually be useful?

[00:04:57]

Do you think people that want to be useful today should get PhDs? Um, mostly not.

[00:05:03]

So what is the best way? Some, yes, but mostly not. How should someone figure out how they can be most useful? Whatever this thing is that you're trying to create:

[00:05:13]

what would be the utility delta compared to the current state of the art, times how many people it would affect? That's why I think having something that makes a big difference but affects a small to moderate number of people is great, as is something that makes even a small difference but affects a vast number of people. Like the area under the curve. Yeah, exactly, the area under the curve would actually be roughly similar for those two things.

[00:05:42]

So it's really about just trying to be useful, and then, when you're trying to estimate the probability of success, multiplying by that.
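The heuristic sketched in this exchange, utility delta times number of people affected, weighted by the probability of success, can be written out as a rough expected-value calculation. This is just an illustration of the idea; the function name and the numbers below are made up, not from the conversation.

```python
def expected_usefulness(utility_delta, people_affected, p_success):
    """The 'area under the curve' heuristic: improvement over the current
    state of the art, times how many people it reaches, weighted by the
    estimated probability that the project succeeds."""
    return utility_delta * people_affected * p_success

# A big improvement for a small group vs. a small improvement for a vast
# group: the two areas under the curve can come out roughly the same.
niche = expected_usefulness(utility_delta=100.0, people_affected=1e4, p_success=0.5)
mass = expected_usefulness(utility_delta=0.1, people_affected=1e7, p_success=0.5)
print(niche, mass)  # 500000.0 500000.0
```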

[00:05:52]

Something really useful times the probability of success, an area under the curve, I guess. To use the example of SpaceX: when you made the go decision, that you were actually going to do that, this was kind of a very crazy thing at the time.

[00:06:02]

Very crazy, for sure. Yeah, I'm not shy about saying that. And I kind of agreed.

[00:06:06]

I agreed with them that it was quite crazy. Crazy, if the objective was to achieve the best risk-adjusted return; starting a rocket company is insane. But that was not my objective. I had come to the conclusion that if something didn't happen to improve rocket technology, we'd be stuck on Earth forever. And the big aerospace companies just had no interest in radical innovation. All they wanted to do was try to make their old

[00:06:39]

Technology slightly better every year. And in fact, sometimes it would actually get worse. Particularly in rockets it's pretty bad. In '69 we were able to go to the moon with the Saturn V. Then the Space Shuttle could only take people to low-Earth orbit. Then the Space Shuttle retired. That trend basically trends to zero. People sometimes think technology just automatically gets better every year, but it actually doesn't. It only gets better if smart people work like crazy to make it better.

[00:07:09]

That's how any technology actually gets better. And by itself, if people don't work at it, technology will actually decline. You can look at the history of civilizations, many civilizations. Look at, say, ancient Egypt, where they were able to build these incredible pyramids, and then they basically forgot how to build pyramids. Even hieroglyphics: they forgot how to read hieroglyphics. Or look at Rome, and how they were able to build these incredible roadways and aqueducts and indoor plumbing.

[00:07:41]

And they forgot how to do all of those things. There are many such examples in history. So I think you should always bear in mind that entropy is not on your side.

[00:07:59]

One thing I really like about you is you are unusually fearless and willing to fly in the face of other people telling you something is crazy. And I know a lot of pretty crazy people; you still stand out. Where does that come from, or how do you think about making a decision when everyone tells you this is a crazy idea? Where do you get the internal strength to do that?

[00:08:17]

Well, first I'd say I actually think I feel fear quite strongly. So it's not as though I just have the absence of fear. I feel it quite strongly. But there are times when something is important enough, you believe in it enough, that you do it in spite of fear.

[00:08:37]

So speaking of important things: people shouldn't think, "I feel fear about this, and therefore I shouldn't do it."

[00:08:46]

It's normal to feel fear. Like, there'd have to be something mentally wrong with you if you didn't feel fear.

[00:08:53]

So you just feel it and let the importance of it drive you to do it anyway. Yeah. You know, actually, something that can be helpful is fatalism, to some degree. If you just accept the probabilities, then that diminishes fear. When starting SpaceX, I thought the odds of success were less than 10 percent, and I just accepted that I would probably lose everything, but that maybe we would make some progress if we could just move the ball forward.

[00:09:26]

Even if we died, maybe some other company could pick up the baton and keep moving it forward, so that we'd still do some good. Yeah, same with Tesla. I thought the odds of a car company succeeding were extremely low.

[00:09:40]

What do you think the odds of the Mars colony are at this point, today? Well, oddly enough, I actually think they're pretty good. So, like, when can I go? OK. At this point, I am certain there is a way. I am certain that success is one of the possible outcomes for establishing a self-sustaining Mars colony, a growing Mars colony. I'm certain that that is possible. Whereas until maybe a few years ago, I was not sure that success was even one of the possible outcomes.

[00:10:14]

A meaningful number of people going to Mars, I think, is potentially something that can be accomplished in about 10 years, maybe sooner, maybe nine years. I need to make sure that SpaceX doesn't die between now and then, and that I don't die. Or if I do die, that someone takes over who will continue that. You shouldn't go on the first launch. Exactly, the first launch will be robotic anyway. So I want to go, except for the Internet latency.

[00:10:45]

Yeah, the latency would be pretty significant. Mars is roughly 12 light-minutes from the sun, and Earth is eight light-minutes. So at closest approach, Mars is four light-minutes away. At furthest approach, it's a little more than 20, because you can't talk directly through the sun.
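As a sanity check, the one-way signal latencies quoted here follow directly from the two distances expressed in light-minutes. This is a back-of-envelope sketch: both orbits are treated as circular, using the round numbers from the conversation.

```python
# Orbital radii from the conversation, in light-minutes (approximate).
MARS_LIGHT_MIN = 12.0
EARTH_LIGHT_MIN = 8.0

# One-way latency: planets on the same side of the sun vs. opposite sides.
closest = MARS_LIGHT_MIN - EARTH_LIGHT_MIN
furthest = MARS_LIGHT_MIN + EARTH_LIGHT_MIN

print(f"closest approach: {closest:.0f} light-minutes one way")     # 4
print(f"furthest approach: {furthest:.0f}+ light-minutes one way")  # 20+
```

The "+" reflects the point made above: near conjunction the signal can't pass directly through the sun, so the effective path is a bit longer than 20 light-minutes.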

[00:11:01]

Speaking of really important problems: you have been outspoken about AI. Could you talk about what you think the positive future for AI looks like, and how we get there?

[00:11:13]

Okay. I mean, I do want to emphasize that this is not really

[00:11:20]

something that I advocate; this is not prescriptive. This is simply, hopefully, predictive. Some will say, like, this is something that I want to occur, instead of something I think probably is the best of the available alternatives. The best of the available alternatives that I can come up with, and maybe someone else can come up with a better approach or better outcome, is that we achieve democratization of AI technology, meaning that no one company or small set of individuals has control over advanced AI technology.

[00:12:02]

I think that's very dangerous. It could also get stolen by somebody bad. You know, like some evil dictator of a country could send their intelligence agency to go steal it and gain control. It just becomes a very unstable situation, I think, if you've got any incredibly powerful AI. You just don't know who's going to control that. So it's not so much that I think the risk is that the AI would develop a will of its own right off the bat.

[00:12:31]

I think the concern is more that someone may use it in a way that is bad. Or even if they weren't going to use it in a way that's bad, somebody could take it from them and use it in a way that's bad. That, I think, is quite a big danger. So I think we must have democratization of AI technology, make it widely available. And that's the reason that, obviously, you and me and the rest of the team created OpenAI: to help with the democratization,

[00:13:04]

to help spread out AI technology so it doesn't get concentrated in the hands of a few. But then, of course, that needs to be combined with solving the high-bandwidth interface to the cortex. Humans are so slow. Humans are so slow. Yes, exactly. But, you know, we already have this situation in our brain, where we've got the cortex and the limbic system, and the limbic system is kind of, I guess it's the primitive brain.

[00:13:37]

It's kind of like your instincts and whatnot. And then the cortex is the thinking part of the brain. Those two seem to work together quite well. Occasionally your cortex and limbic system may disagree, but it generally works pretty well. And it's rare to find someone who, well, I've not found someone who wishes to either get rid of their cortex or get rid of their limbic system.

[00:14:02]

Very true. Yeah, that's unusual. So I think if we can effectively merge with AI by improving the neural link between your cortex and your digital extension of yourself, which already exists but just has a bandwidth issue, then effectively you become an AI-human symbiote. And if that is then widespread, with anyone who wants it able to have it, then we solve the control problem as well. We don't have to worry about some sort of evil dictator AI,

[00:14:45]

because we are the AI collectively. That seems like the best outcome I can think of.

[00:14:52]

So you've watched other companies in their early days that start small and get really successful. I hoped you were never going to ask me that on camera, but: how do you think OpenAI is going as a six-month-old company? I think it's going pretty well. I think we've got a really talented group at OpenAI, a really, really talented team, and they're working hard. OpenAI is structured as a 501(c)(3) nonprofit. But, you know, many nonprofits do not have a sense of urgency.

[00:15:21]

It's fine; nonprofits don't have to have a sense of urgency. But OpenAI does, because I think people really believe in the mission. I think it's important. It's about minimizing the risk of existential harm in the future. And so I think it's going well. I'm pretty impressed with what people are doing and the talent level. And obviously, we're always looking for great people to join the mission. It's about 40 people now, is that right? Yes. All right.

[00:15:55]

Just a few more questions before we wrap up. How do you spend your days now? Like, what do you allocate most of your time to? My time is mostly split between SpaceX and Tesla. And of course, I try to spend part of every week at OpenAI, so I spend basically half a day at OpenAI most weeks. And then I have some OpenAI stuff that happens during the week. But other than that, it's really... What do you do when you're at SpaceX or Tesla?

[00:16:28]

Like, what does your time look like there? Yeah. So that's a good question.

[00:16:33]

I think a lot of people think I must spend a lot of time with media or on business things, but actually almost all my time, like 80 percent of it, is spent on engineering and design: developing next-generation products. That's 80 percent of it.

[00:16:54]

You know, I remember, a very long time ago, many, many years, you took me on a tour of SpaceX, and the most impressive thing was that you knew every detail of the rocket and every piece of engineering that went into it. I don't think many people get that about you.

[00:17:05]

Yeah, I think a lot of people think I'm kind of a business person or something, which is fine. Business is fine. But really, at SpaceX, Gwynne Shotwell is chief operating officer. She kind of manages legal, finance, sales, and general business activity. And then my time is almost entirely with the engineering team, working on improving the Falcon 9 and the Dragon spacecraft and developing the Mars colonial architecture.

[00:17:38]

And at Tesla, it's working on the Model 3. I'm in the design studio, typically half a day a week, dealing with aesthetics and look-and-feel things. And then most of the rest of the week is just going through the engineering of the car itself, as well as the engineering of the factory. Because the biggest epiphany I've had this year is that what really matters is the machine that builds the machine, the factory, and that this is at least two orders of magnitude harder than the vehicle itself.

[00:18:18]

It's amazing to watch the robots go here, and these cars just happen.

[00:18:23]

Yeah. Now, this actually has a relatively low level of automation compared to what the Gigafactory will have and what the Model 3 line will have.

[00:18:33]

What's the speed on the line of these cars? Actually, the average speed of the line is incredibly slow. Including both X and S, it's maybe five centimeters per second.

[00:18:52]

And what can you get to? That is very slow. What would you like to get to? I'm confident we can get to at least one meter per second, so a twentyfold increase. That would be very fast. Yeah, at least. I mean, I think quite a bit beyond one meter per second.

[00:19:07]

To put that in perspective: a slow walk, or like a medium-speed walk, is about one meter per second. A fast walk could be one and a half meters per second. And the fastest humans can run over 10 meters per second. So if we're only doing 0.05 meters per second, that's very slow, the current speed. And at one meter per second, you could still walk faster than the production line.
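The line-speed comparison above can be checked with the round figures quoted in the conversation (a sketch; the speeds are the approximate values stated, not measured data):

```python
# Production-line speeds quoted above, in meters per second.
current_speed = 0.05  # ~5 cm/s, the Model S/X line average
target_speed = 1.0    # the stated goal, roughly a medium walking pace

speedup = target_speed / current_speed
print(f"required speedup: {speedup:.0f}x")  # 20x

# For scale: a fast walk is ~1.5 m/s and elite sprinters exceed 10 m/s,
# so even the target line speed is no faster than a medium walk.
```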