[00:00:00]

The following is a conversation with Elon Musk, part two, the second time we spoke on the podcast, with parallels, if not in quality, then in outfit, to the, objectively speaking, greatest sequel of all time, Godfather Part II. As many people know, Elon Musk is the leader of Tesla, SpaceX, Neuralink, and The Boring Company. What may be less known is that he's a world-class engineer and designer, constantly emphasizing first-principles thinking and taking on big engineering problems that many before him considered impossible.

[00:00:37]

As scientists and engineers, most of us don't question the way things are done; we simply follow the momentum of the crowd. But revolutionary ideas that change the world on small and large scales happen when you return to the fundamentals and ask, is there a better way? This conversation focuses on the incredible engineering and innovation done in brain-computer interfaces at Neuralink. This work promises to help treat neurobiological diseases, to help us further understand the connection between the individual neuron and the high-level functioning of the human brain, and finally to one day expand the capacity of the brain through two-way communication with computational devices, the Internet, and artificial intelligence systems.

[00:01:25]

This is the Artificial Intelligence Podcast.

[00:01:28]

If you enjoy it, subscribe on YouTube, Apple Podcasts, Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. And now, as an anonymous YouTube commenter referred to our previous conversation as the, quote, historical first video of two robots conversing without supervision, here's the second time, the second conversation with Elon Musk. Let's start with an easy question about consciousness. In your view, is consciousness something that's unique to humans, or is it something that permeates all matter, almost like a fundamental force of physics?

[00:02:27]

I don't think consciousness permeates all matter. Panpsychists believe that. There's a philosophical...

[00:02:33]

How would you tell if that's true? That's a good point. I believe in the scientific method. Don't want to blow your mind or anything, but the scientific method is, if you cannot test the hypothesis, then you cannot reach a meaningful conclusion that it is true.

[00:02:47]

Do you think consciousness, understanding consciousness, is within the reach of science, of the scientific method?

[00:02:54]

We can dramatically improve our understanding of consciousness. You know, I would be hard pressed to say that we understand anything with complete accuracy, but can we dramatically improve our understanding of consciousness? I believe the answer is yes. Does a system, in your view, have to have consciousness in order to achieve human-level or superhuman-level intelligence? Does it need to have some of these human qualities: consciousness, maybe a body, maybe a fear of mortality, a capacity to love, those kinds of silly human things?

[00:03:33]

It's different. You know, there's the scientific method, which I very much believe in, where something is true to the degree that it is testable. Otherwise, you're really just talking about, you know, preferences or untestable beliefs or that kind of thing. So it ends up being somewhat of a semantic question, where we are conflating a lot of things with the word intelligence. If we parse them out and say, are we headed towards a future where an AI will be able to outthink us in every way, then the answer is unequivocally yes.

[00:04:22]

In order for a system to outthink us in every way, does it also need to have a capacity for consciousness, self-awareness, and understanding?

[00:04:34]

It will be self-aware, yes, but that's different from consciousness. I mean, in terms of what consciousness feels like, it feels like consciousness is in a different dimension. But this could be just an illusion. You know, if you damage your brain in some way physically, you damage your consciousness, which implies that consciousness is a physical phenomenon.

[00:04:59]

In my view, I think it is really quite likely that digital intelligence will be able to outthink us in every way, and it will soon be able to simulate what we consider consciousness,

[00:05:13]

to a degree that you would not be able to tell the difference. And from the aspect of the scientific method, it might as well be consciousness, if we can simulate it perfectly.

[00:05:23]

If you can't tell the difference. This is sort of the Turing test, but think of a more advanced version of the Turing test. If you're talking to a digital superintelligence and can't tell if that is a computer or a human, like, let's say you're just having a conversation over a phone or a video conference or something, where you think you're talking to a person: it makes all of the right inflections and movements and all the small subtleties that constitute a human, talks like a human, makes mistakes like a human...

[00:06:04]

And you literally just can't tell: are you really conversing with a person or an AI?

[00:06:13]

Might as well be human. So on a darker topic, you've expressed serious concern about existential threats of AI. It's perhaps one of the greatest challenges our civilization faces. But since we are kind of optimistic descendants of apes, perhaps we can find several paths of escaping the harm of AI. So if I can give you three options, maybe you can comment on which you think is the most promising. One is scaling up efforts in AI safety and beneficial AI research, in hopes of finding an algorithmic or maybe a policy solution. Two is becoming a multi-planetary species as quickly as possible.

[00:06:55]

And three is merging with AI and riding the wave of that increasing intelligence as it continuously improves. What do you think is the most promising, most interesting, as a civilization, that we should invest in? I think there's a tremendous amount of investment going on in AI. Where there's a lack of investment is in AI safety, and there should be, in my view, a government agency that oversees anything related to AI, to confirm that it does not represent a public safety risk, just as there is a regulatory authority for things like the Food and Drug Administration.

[00:07:35]

There's NHTSA for automotive safety; there's the FAA for aircraft safety. We've generally come to the conclusion that it is important to have a government referee, a referee that is serving the public interest in ensuring that things are safe when there's a potential danger to the public. I would argue that AI is unequivocally something that has the potential to be dangerous to the public, and therefore should have a regulatory agency, just as other things that are dangerous to the public have a regulatory agency.

[00:08:09]

But let me tell you the problem with this: the government moves very slowly. Usually, the way a regulatory agency comes into being is that something terrible happens, there's a huge public outcry, and years after that a regulatory agency or rule is put in place. Take something like seatbelts. It was known for a decade or more that seatbelts would have a massive impact on safety and save so many lives and serious injuries.

[00:08:46]

And the car industry fought the requirement to put seatbelts in tooth and nail. That's crazy. Yeah, and hundreds of thousands of people probably died because of that. And they said people wouldn't buy cars if they had seatbelts, which is obviously absurd. Or look at the tobacco industry and how long they fought anything about smoking. That's part of why I helped make that movie, Thank You for Smoking. You can sort of see just how pernicious it can be when you have

[00:09:19]

companies effectively achieve regulatory capture of government. That's bad. People in the AI community refer to the advent of digital superintelligence as a singularity. That is not to say that it is good or bad, but that it is very difficult to predict what will happen after that point, and that there's some probability it will be bad, some probability it will be good. We obviously want to affect that probability and have it be more good than bad.

[00:09:55]

Well, let me ask, on the merger-with-AI question, about the incredible work that's being done at Neuralink. There's a lot of fascinating innovation here across different disciplines going on.

[00:10:06]

So the flexible wires, the robotic sewing machine, the response to brain movement and everything around ensuring safety and so on.

[00:10:14]

So we currently understand very little about the human brain. Do you also hope that the work at Neuralink will help us understand more about the human mind, about the brain?

[00:10:30]

Yeah, I think the work at Neuralink will definitely shed a lot of insight into how the brain and the mind work. Right now, the data we have regarding how the brain works is very limited.

[00:10:42]

You know, we've got MRI, which is kind of like putting a stethoscope on the outside of a factory wall and then moving it all over the factory wall. You can sort of hear the sounds, but you don't know what the machines are doing, really. It's hard. You can infer a few things, but it's very broad brushstrokes. In order to really know what's going on in the brain, you have to have high-precision sensors, and then you want to have stimulus and response.

[00:11:10]

Like, if you trigger a neuron, how do you feel? What do you see? How does it change your perception of the world? You're speaking of physically, just getting close to the brain, being able to measure signals from the brain, will give us a sort of opening of the door inside the factory? Yes, exactly. Being able to have high-precision sensors that tell you what individual neurons are doing, and then being able to trigger a neuron and see what the response is in the brain.

[00:11:39]

So you can see the consequences: if you fire this neuron, what happens? How do you feel? What has changed? It'll be really profound to have this in people, because people can articulate their change. Like, if there's a change in mood, or they can tell you if they can see better or hear better, or are able to form sentences better or worse, or their memories are jogged, or that kind of thing. So on the human side, there's this incredible general malleability, plasticity, of the human brain.

[00:12:15]

The human brain adapts, adjusts, and so on. It's not that plastic, to be totally frank. So there's a firm structure, but nevertheless, there is some plasticity. An open question, a broad question I could ask, is how much that plasticity can be utilized. On the human side, there's some plasticity in the human brain, and on the machine side, we have neural networks, machine learning, artificial intelligence that is able to adjust and figure out signals.

[00:12:45]

So there's a mysterious language that we don't perfectly understand within the human brain, and we're trying to understand that language to communicate in both directions. The brain is adjusting a little bit, we don't know how much, and the machine is adjusting. As they try to reach toward each other, almost like with an alien species, trying to find a communication protocol that works, where do you see the biggest benefit arriving from, the machine side or the human side?

[00:13:17]

Do you see both of them working together?

[00:13:19]

I think the machine side is far more malleable than the biological side, by a huge amount. So it will be the machine that adapts to the brain; that's the only thing that's possible. The brain can't adapt that well to the machine. You can't have neurons start to regard an electrode as another neuron, because a neuron just, there's the pulse, and so something else is pulsing. So there is that elasticity in the interface, which we believe is something that can happen, but the vast majority of the malleability will have to be on the machine side.

[00:13:55]

But it's interesting, when you look at that synaptic plasticity at the interface side, there might be an emergent plasticity, because it's a whole other, it's not like anything in the brain, it's a whole other extension of the brain. You know, we may have to redefine what it means for the brain to be malleable. So maybe the brain is able to adjust to external interfaces.

[00:14:16]

There will be some adjustment to the brain, because there's going to be something reading and stimulating the brain, and so it will adjust to that thing.

[00:14:26]

But the vast majority of the adjustment will be on the machine side. It just has to be; otherwise it will not work.

[00:14:36]

Ultimately, we currently operate on two layers. We have sort of a limbic, primitive brain layer, which is where all of our impulses are coming from. It's sort of like we've got a monkey brain with a computer stuck on it. That's the human brain. And a lot of our impulses and everything are driven by the monkey brain, and the computer,

[00:14:57]

the cortex, is constantly trying to make the monkey brain happy.

[00:15:01]

It's not the cortex that's steering the monkey brain; the monkey brain is steering the cortex, you know. But the cortex is the part that tells the story of the whole thing.

[00:15:11]

So we convince ourselves it's more interesting than just the monkey brain.

[00:15:16]

The cortex is what we call human intelligence. It's the advanced computer relative to other creatures. Other creatures do not have that, really; they don't have the computer, or they have a very weak computer relative to humans. But it sort of seems like surely the really smart thing should control the dumb thing, but actually the dumb thing controls the smart thing.

[00:15:45]

So do you think some of the same kind of machine learning methods, whether that's natural language processing applications, are going to be applied for the communication between the machine and the brain, to learn how to do certain things like movement of the body, how to process visual stimuli, and so on? Do you see the value of using machine learning to understand the language of the two-way communication with the brain? Sure, absolutely. I mean, we're a neural net, and AI is basically a neural net. So it's a digital neural net that will interface with a biological neural net, and hopefully bring us along for the ride.
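A minimal sketch of what such machine learning decoding can look like in its simplest form: a linear map fit from binned spike counts to intended cursor velocity. The synthetic data, dimensions, and variable names below are assumptions for illustration only, not Neuralink's actual method.

```python
import numpy as np

# Toy linear decoder: learn a map from binned spike counts to 2-D cursor velocity.
# All data here is synthetic; real decoders are trained on recorded neural activity.
rng = np.random.default_rng(0)
n_channels, n_samples = 64, 500

true_weights = rng.normal(size=(n_channels, 2))                   # hidden "neural code"
spike_counts = rng.poisson(lam=3.0, size=(n_samples, n_channels)).astype(float)
velocity = spike_counts @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit the decoder by least squares; at run time it maps fresh spike counts to movement.
weights, *_ = np.linalg.lstsq(spike_counts, velocity, rcond=None)
predicted_velocity = spike_counts @ weights
```

The same fitting idea extends to richer models, but the linear version captures the basic two-way translation problem being discussed.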

[00:16:26]

Yeah. But the vast majority of our intelligence will be digital.

[00:16:35]

So, like, the difference in intelligence between the cortex and your limbic system is gigantic. Your limbic system really has no comprehension of what the hell the cortex is doing.

[00:16:50]

It's just literally hungry or tired or angry or sexy or something, you know. It just conveys that impulse to the cortex and tells the cortex to go satisfy that. Then a great deal, like a massive amount of thinking, a truly stupendous amount of thinking, has gone into sex without purpose, without procreation,

[00:17:21]

which is actually quite a silly action in the absence of procreation. It's a bit silly. Why are you doing it? Because it makes the limbic system happy. That's why. But it's pretty absurd, really.

[00:17:38]

Well, the whole of existence is pretty absurd in some sense.

[00:17:41]

Yeah, but I mean, a lot of computation has gone into how can I do more of that, with procreation not even being a factor. This is, I think, a very important area of research that should receive a lot of funding. Especially after this conversation, I propose the formation of a new agency.

[00:18:05]

Boy. What are the most exciting, or some of the most exciting, things that you see in the future impact of Neuralink, both on the science and engineering side and in broad societal impact? Neuralink, I think, at first will solve a lot of brain-related diseases. It could be anything from, like, autism, schizophrenia, memory loss, like everyone experiences memory loss at certain points in age. Parents can't remember their kids' names and that kind of thing.

[00:18:36]

So there's a tremendous amount of good that Neuralink can do in solving critical damage to the brain or the spinal cord.

[00:18:47]

There's a lot that can be done to improve quality of life of individuals, and those will be steps along the way. And then, ultimately, it's intended to address the existential risk associated with digital superintelligence. We will not be able to be smarter than a digital supercomputer, so therefore, if you cannot beat them, join them. And at least we want to have that option. So you have hope that Neuralink will be able to be a kind of connection to allow us to merge, to ride the wave of the improving AI systems?

[00:19:29]

I think the chances are above zero percent. So it's not a zero. There's a chance.

[00:19:35]

And that's, have you seen Dumb and Dumber? Yes. So you're saying there's a chance.

[00:19:41]

It's one in a billion or one in a million, whatever it was in Dumb and Dumber. You know, it went from maybe one in a million to improving.

[00:19:48]

Maybe it'll be one in a thousand, and then one in a hundred, then one in ten. It depends on the rate of improvement of Neuralink and how fast we're able to make progress, you know. Well, I've talked to a few folks here that are quite brilliant engineers, so I'm excited.

[00:20:02]

I think it's fundamentally good. You know, you're giving somebody back full motor control after they've had a spinal cord injury.

[00:20:10]

You know, restoring brain functionality after a stroke, solving debilitating genetically altered brain diseases, these are all incredibly great, I think.

[00:20:20]

And in order to do this, you have to be able to interface with the neurons at a detailed level. You need to be able to fire the right neurons, read the right neurons, and then effectively you can create a circuit, replace what's broken with silicon, and essentially fill in the missing functionality. And then, over time, we can develop a tertiary layer. So if the limbic system is the primary layer, then the cortex is the second layer, and the cortex is vastly more intelligent than the limbic system.

[00:20:57]

But people generally like the fact that they have a limbic system and a cortex. I haven't met anyone who wants to delete either one of them. They're like, okay, I'll keep them both, that's cool. The limbic system is kind of fun. That's where the fun is. Absolutely. And people generally don't want to lose the cortex either. They like having the cortex and the limbic system. Yeah. And then there's a tertiary layer, which will be digital superintelligence. And I think there's room for optimism, given that the cortex is very intelligent and the limbic system is not,

[00:21:30]

and yet they work together well. Perhaps there can be a tertiary layer where digital superintelligence lies, and that will be vastly more intelligent than the cortex, but still coexist peacefully and in a benign manner with the cortex and the limbic system. That's a super exciting future, both on the level of engineering that I saw being done here and on the actual possibility of it in the next few decades.

[00:21:55]

It's important that Neuralink solve this problem sooner rather than later, because the point at which we have digital superintelligence, that's when we pass the singularity, and things become just very uncertain. It doesn't mean that they're necessarily bad or good, but at the point at which we pass the singularity, things become extremely unstable. So we want to have a human brain interface before the singularity, or at least not long after it, to minimize existential risk for humanity and consciousness as we know it.

[00:22:23]

But there's a lot of fascinating actual engineering, low-level problems here at Neuralink, that are quite exciting. What are the problems that we face? Is it material science, electrical engineering, software, mechanical engineering, microfabrication? It's a bunch of engineering disciplines.

[00:22:43]

Essentially, that's what it comes down to: you have to have a tiny electrode, so small it doesn't hurt neurons, but it's got to last for as long as a person; it could last for decades. And then you've got to take that signal, and you've got to process that signal locally at low power. So we need a lot of chip design engineers, because we've got to do the signal processing, and do so in a very power-efficient way, so that we don't heat your brain up, because it's very heat-sensitive.

[00:23:21]

And then we've got to take those signals and do something with them, and then we've got to stimulate back, so you can have bidirectional communication. So if somebody's good at material science, software, mechanical engineering, electrical engineering, chip design, microfabrication, those are the things we need to work on. We need to be good at material science so that we can have tiny electrodes that last a long time.

[00:23:49]

And the material science is a tough one, because you're trying to read and stimulate electrically in an electrically active area. Your brain is very electrically active and electrochemically active. So how do you have a coating on the electrode that doesn't dissolve over time and is safe in the brain?

[00:24:13]

This is a very hard problem. And then, how do you collect those signals in a way that is the most efficient? Because you really just have very tiny amounts of power to process those signals. Then we need to automate the whole thing, so it's like LASIK. If this had to be done by neurosurgeons, there's no way it could scale to a large number of people. And it needs to scale to large numbers of people, because I think ultimately we want the future to be determined by a large number of humans.
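A minimal sketch of the kind of low-power signal processing described here: a threshold-crossing spike detector with a robust noise estimate and a refractory period. The function name and parameter values are invented for illustration; a real implant would do this on custom low-power silicon rather than in Python.

```python
import numpy as np

def detect_spikes(samples, fs=20000, threshold_std=4.5, refractory_ms=1.0):
    """Toy threshold-crossing spike detector for one band-passed electrode channel.

    Returns sample indices of detected spikes. Illustrative software only.
    """
    # Robust noise estimate via median absolute deviation, common in spike detection.
    noise = np.median(np.abs(samples)) / 0.6745
    threshold = -threshold_std * noise            # extracellular spikes are negative-going
    refractory = int(fs * refractory_ms / 1000)   # ignore re-crossings within ~1 ms

    spikes, last = [], -refractory
    for i, v in enumerate(samples):
        if v < threshold and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes
```

The appeal of a scheme like this is that it reduces a high-rate voltage stream to a sparse list of events, which is far cheaper to transmit and process within a tight power budget.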

[00:24:49]

Do you think that this has a chance to revolutionize surgery, period? So neurosurgery, and surgery more broadly? Yeah, for sure. It's got to be like LASIK. If LASIK had to be done by hand, by a person, that wouldn't be great. It's done by a robot.

[00:25:09]

And the ophthalmologist kind of just needs to make sure your head's in the right position, and then they just press a button and go. Smart Summon, and soon Autopark, takes on the full, beautiful mess of parking lots and their human-to-human nonverbal communication. I think it actually has the potential to have a profound impact in changing how our civilization looks at AI and robotics, because this is the first time that human beings, people that don't own a Tesla and may have never seen one or heard about one, get to watch hundreds of thousands of cars without a driver.

[00:25:44]

So do you see it this way?

[00:25:47]

Almost like an education tool for the world. Do you feel the burden of that, the excitement of that? Or do you just think it's a smart parking feature?

[00:25:57]

I do think you are getting at something important, which is that most people have never really seen a robot before. And what is a car that is autonomous? It's a four-wheeled robot. Right. Yeah, and it communicates a certain sort of message, with everything from safety to the possibility of what AI could bring, to its current limitations, its current challenges, what's possible. Do you feel the burden of that,

[00:26:20]

almost like a communicator, educator to the world about AI? We're just really trying to make people's lives easier with autonomy.

[00:26:28]

But now that you mention it, I think it will be an eye-opener to people about robotics, because they've really never seen, most people have never seen, a robot. And there are hundreds of thousands of Teslas, and it won't be long before there's a million of them, that have autonomous capability and can drive without a person in it. And you can see the kind of evolution of the car's personality and thinking with each iteration of Autopilot. You can see it's uncertain about certain things, and now it's more certain.

[00:27:04]

And it's moving in a slightly different way. Like, I can tell immediately if a car is on Autopilot because of its little nuances of movement; it just moves in a slightly different way. Cars on Autopilot, for example on the highway, are far more precise about being in the center of the lane than a person. If you drive down the highway and look at how and where cars are, the human-driven cars are within their lane, but they're like bumper cars.

[00:27:30]

They're moving all over the place. The car on Autopilot is dead center.

[00:27:34]

Yes, the incredible work that's going into that neural network, it's learning fast. Autonomy is still very, very hard. We don't actually know how hard it is fully, of course. You look at most problems you tackle, this one included, with an exponential lens, but even with exponential improvement, things can take longer than expected sometimes. So where does Tesla currently stand on its quest for full autonomy? What's your sense? When can we see a successful deployment of full autonomy?

[00:28:13]

Well, on the highway already, the probability of intervention is extremely low. So for highway autonomy, with the latest release especially, the probability of needing to intervene is really quite low. In fact, I'd say for stop-and-go traffic, it's far safer than a person right now. The probability of an injury or an impact is much, much lower for Autopilot than for a person. And it can also change lanes, take highway interchanges. And then we're coming at it from the other direction, which is low-speed, full autonomy.

[00:28:50]

And in a way, this is like, how does a person learn to drive? You drive in the parking lot, you know. The first time you learned to drive probably wasn't jumping onto Market Street in San Francisco. That'd be crazy. You drive in the parking lot, get things right at low speed, and then the missing piece that we're working on is traffic lights and stop streets. Stop streets, I would say, are actually also relatively easy, because, you know, you kind of know where the stop street is, worst case geocoded, and then use visualization to see where the line is and stop at the line, to eliminate the GPS error.
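A minimal sketch of that geocoded-prior-plus-vision idea: blend a coarse map-based stop-line distance with a camera estimate weighted by confidence. The function and parameter names are invented for illustration and say nothing about Tesla's actual implementation.

```python
from typing import Optional

def stop_line_distance(geocoded_m: float,
                       vision_m: Optional[float],
                       vision_confidence: float) -> float:
    """Blend a coarse map-based stop-line distance with a camera-detected one.

    geocoded_m: distance to the stop line from map data (limited by GPS error).
    vision_m: distance from the vision system, or None if the line isn't visible.
    vision_confidence: 0..1 weight for trusting the vision estimate.
    """
    if vision_m is None:
        return geocoded_m                      # worst case: rely on the map prior
    w = min(max(vision_confidence, 0.0), 1.0)  # clamp to a valid weight
    return w * vision_m + (1.0 - w) * geocoded_m
```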

[00:29:28]

So it's probably complex traffic lights and very windy roads that are the two things that need to get solved. What's harder, perception or control, for these problems? So being able to perfectly perceive everything, or figuring out a plan once you perceive everything, how to interact with all the agents in the environment? In your sense, from a learning perspective, is perception or action harder, in that giant, beautiful, multi-task learning network?

[00:30:00]

The hardest thing is having an accurate representation of the physical objects in vector space. So taking the visual input, primarily visual input, some sonar and radar, and then creating an accurate vector space representation of the objects around you. Once you have an accurate vector space representation, the planning and control is relatively easy. It is relatively easy, basically. Once you have an accurate vector space representation, then you're kind of like a video game, like cars in Grand Theft Auto or something.

[00:30:36]

They work pretty well. They drive down the road, they don't crash, you know, pretty much, unless you crash into them. That's because they've got an accurate vector space representation of where the cars are,
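A minimal sketch of what a vector space representation amounts to: each surrounding object reduced to a handful of numbers a planner can reason over, plus a toy follow-distance rule to show why control simplifies once that representation exists. The record fields and control law are invented for the example, not Tesla's data structures.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """One entry in a toy world model: position, velocity, size, and class."""
    x: float        # meters ahead of the ego vehicle
    y: float        # meters left (+) / right (-) of lane center
    vx: float       # relative longitudinal speed, m/s
    width: float    # meters
    kind: str       # "car", "pedestrian", ...

def follow_speed_command(lead: TrackedObject, ego_speed: float,
                         time_gap_s: float = 2.0) -> float:
    """Toy longitudinal control: with the lead car as a clean vector-space object,
    holding a time gap reduces to simple arithmetic. Real planners are far richer."""
    desired_gap = ego_speed * time_gap_s
    gap_error = lead.x - desired_gap
    return ego_speed + 0.5 * gap_error + 0.8 * lead.vx
```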

[00:30:46]

and then they're rendering that as the output. Do you have a sense, at a high level, that Tesla is on track to being able to achieve full autonomy?

[00:30:59]

So on the highway? Yeah, absolutely. And still no driver state, driver sensing? We have driver sensing, with torque on the wheel. That's right. Yeah. By the way, just a quick comment on karaoke. Most people think it's fun, but I also think it is a driving feature. I've been saying for a long time, singing in a car is really good for attention management and vigilance management. That's right. Tesla karaoke is great. It's one of the most fun features of the car. Do you think of the connection between fun and safety?

[00:31:28]

Sometimes.

[00:31:29]

Yeah, you can do both at the same time. That's great.

[00:31:33]

I just met with Ann Druyan, the wife of Carl Sagan and co-writer of Cosmos. Are you generally a big fan of Carl Sagan? He's super cool, and had a great way of putting things. All of our consciousness, all civilization, everything we've ever known and done, is on this tiny blue dot. People get too trapped in their squabbles amongst humans and don't think of the big picture. They take civilization and our continued existence for granted. We shouldn't do that. Look at the history of civilizations: they rise and they fall.

[00:32:07]

And now civilization is all globalized, and so civilization, I think, now rises and falls together. There's not geographic isolation. This is a big risk. Things don't always go up. That's an important lesson of history.

[00:32:30]

In 1990, at the request of Carl Sagan, the Voyager 1 spacecraft, which is a spacecraft that's reaching out farther than anything human-made into space, turned around to take a picture of Earth from 3.7 billion miles away. And as you were talking about, in that picture the Earth takes up less than a single pixel, appearing as a tiny blue dot, as a pale blue dot, as Carl Sagan called it. He spoke about this dot of ours in 1994.

[00:33:04]

And if you could humor me, I was wondering if, in the last two minutes, you could read the words that he wrote describing this pale blue dot.

[00:33:14]

So, the universe appears to be 13.8 billion years old. Earth is like four and a half billion years old.

[00:33:29]

You know, in another half billion years or so, the sun will expand and probably evaporate the oceans and make life impossible on Earth, which means that if it had taken consciousness 10 percent longer to evolve, it would never have evolved at all.

[00:33:44]

Just 10 percent longer. And I wonder how many dead, one-planet civilizations there are out there in the cosmos that never made it to the other planet and ultimately extinguished themselves or were destroyed by external factors. Probably a few. It's only just possible to travel to Mars, just barely. If gravity were 10 percent higher, it wouldn't work, really. If it were 10 percent lower, it would be easy. You can go single stage from the surface of Mars all the way to the surface of the Earth, because Mars's gravity is about 38 percent of Earth's.
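The 10 percent figure is just arithmetic on the round numbers quoted above, shown here as a back-of-envelope check rather than a precise astrophysical claim:

```latex
\begin{align*}
\text{time for consciousness to evolve} &\approx 4.5\ \text{billion years} \\
\text{habitable window} &\approx 4.5 + 0.5 = 5.0\ \text{billion years} \\
1.1 \times 4.5\ \text{billion years} &= 4.95 \approx 5.0\ \text{billion years}
\end{align*}
```

So an evolution running about ten percent slower would have used up essentially the entire window before the sun made the Earth uninhabitable.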

[00:34:29]

Channeling Carl Sagan: Look again at that dot. That's here. That's home. That's us. On it, everyone you love, everyone you know, everyone you've ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every superstar, every supreme leader, every saint and sinner in the history of our species lived there, on a mote of dust suspended in a sunbeam.

[00:35:31]

Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves. The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate.

[00:35:51]

This is not true.

[00:35:53]

This is false. Mars. And I think Carl Sagan would agree with that. He couldn't even imagine it at that time. So thank you for making the world dream, and thank you for talking today. I really appreciate it. Thank you.