[00:00:00]

The following is a conversation with Max Tegmark, his second time on the podcast. In fact, the previous conversation was episode number one of this very podcast. He is a physicist and artificial intelligence researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence. He's also the head of a bunch of other huge, fascinating projects and has written a lot of different things that you should definitely check out.

[00:00:32]

He has been one of the key humans who has been outspoken about long-term existential risks of AI and also its exciting possibilities and solutions to real-world problems, most recently at the intersection of AI and physics, and also in re-engineering the algorithms that divide us by controlling the information we see, and thereby creating bubbles and all kinds of other complex social phenomena that we see today. In general, he's one of the most passionate and brilliant people I have the fortune of knowing.

[00:01:04]

I hope to talk to him many more times on this podcast in the future. Quick mention of our sponsors: the Jordan Harbinger Show, Four Sigmatic mushroom coffee, BetterHelp online therapy, and ExpressVPN. So the choice is wisdom, caffeine, sanity, or privacy. Choose wisely, my friends, and if you wish, click the sponsor links below to get a discount and to support this podcast. As a side note, let me say that many of the researchers in the machine learning and artificial intelligence communities do not spend much time thinking deeply about existential risks of AI, because our current algorithms are seen as useful

[00:01:45]

but dumb, so it's difficult to imagine how they may become destructive to the fabric of human civilization in the foreseeable future. I understand this mindset, but it's very troublesome to me. This is both a dangerous and uninspiring perspective, reminiscent of the lobster sitting in a pot of lukewarm water that a minute ago was cold. I feel a kinship with this lobster.

[00:02:08]

I believe that already the algorithms that drive our interaction on social media have an intelligence and power that far outstrip the intelligence and power of any one human being. Now really is the time to think about this, to define the trajectory of the interplay of technology and human beings in our society. I think that the future of human civilization very well may be at stake over this very question of the role of artificial intelligence in our society. If you enjoy this podcast, subscribe on YouTube, review it on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter @lexfridman, as usual.

[00:02:45]

I'll do a few minutes of ads now and no ads in the middle. I try to make these interesting, but I give you timestamps, so if you skip, please still check out the sponsors by clicking the links in the description. It is, in fact, the best way to support this podcast. This episode is sponsored by the Jordan Harbinger Show. Go to jordanharbinger.com/lex, subscribe to it, listen, you won't regret it. I've been bingeing on this podcast for the entirety of 2020.

[00:03:15]

Jordan is a great interviewer. He gets the best out of his guests, dives deep, calls them out when it's needed, and makes the whole thing fun to listen to. He has interviewed Kobe Bryant, Mark Cuban, Neil deGrasse Tyson, Garry Kasparov, and many more. Perhaps more importantly, he is unafraid of addressing challenging, even controversial topics with thought and grace. I especially like his Feedback Friday episodes, where his combination of fearlessness and thoughtfulness is especially on display, touching topics of sex, corruption, mental disorders, hate, love, and everything in between.

[00:03:53]

Again, go to jordanharbinger.com/lex. It's how he knows I sent you. On that page, there are links to subscribe to the show everywhere you listen to podcasts, including Apple Podcasts and Spotify. If you're listening to this, I'm sure you know how this works. If you like it, go give the show five stars, leave a nice review, tell them I sent you. This show is also sponsored by the thing I currently really need, which is coffee: Four Sigmatic, the maker of delicious mushroom coffee and plant-based protein.

[00:04:24]

I enjoy both. The coffee has lion's mane mushroom for productivity and chaga mushroom for immune support. The plant-based protein has immune support as well and tastes delicious, which I think, honestly, at least for me, is the most important quality of both protein and coffee. Supporting your immune system is one of the best things you can do now to stay healthy in this difficult time for the human species. Not only does Four Sigmatic always have a 100 percent money-back guarantee, but right now you can try their amazing products for up to 50 percent off.

[00:05:00]

And on top of that, you get extra discounts just for the listeners of this podcast if you go to foursigmatic.com/lex. That's foursigmatic.com/lex. This episode is also sponsored by BetterHelp, spelled H-E-L-P, help. I always think about the movie Castaway when I spell out the word help. OK, they figure out what you need and match you with a licensed professional therapist

[00:05:27]

in under 48 hours. I chat with a person on there and enjoy it.

[00:05:31]

Of course, I also have been talking, over the stretch of 2020 and it looks like 2021 as well, to David Goggins, who is definitely not a licensed professional, or, some would say, sane, but he does have me face his and my demons and become comfortable existing in their presence.

[00:05:53]

This is just my view, but I think mental struggle is essential for creation. But I think you can struggle in a way that's controlled and done beautifully, as opposed to in a way that destroys you. I think therapy can help in this, and BetterHelp is a good option for that. It's easy, private, affordable, and available worldwide.

[00:06:13]

You can communicate by text any time and schedule weekly audio and video sessions. Shout out to the two OGs, my two favorite psychiatrists, Sigmund Freud and Carl Jung. When I was younger, their work was important in my intellectual development. Anyway, check out betterhelp.com/lex. That's betterhelp.com/lex. This show is also sponsored by ExpressVPN. It provides privacy in your digital life. Without a VPN, your internet service provider can see every site you've ever visited, keeping a log of every single thing you did online.

[00:06:49]

So your internet provider, like AT&T or Comcast, is allowed to store those logs and sell this data to anyone. That's why you should use ExpressVPN as much as possible. I do it; you should consider doing it as well. They don't keep any logs. They audit their tech with external companies. These guys are legit.

[00:07:09]

I think this topic, and VPNs certainly, are especially relevant now, when the power of social media firms and ISPs was made apparent with a wave of deplatforming actions. We need to use tools to lessen the power of these centralized entities. I use ExpressVPN; it's just one example of such a tool. Go to expressvpn.com/lexpod to get an extra three months free on a one-year package. There's a big red button, if you enjoy those kinds of things.

[00:07:39]

I certainly do. OK, that's expressvpn.com/lexpod. Sign up for the privacy and the big red button. That's expressvpn.com/lexpod. And now here's my conversation with Max Tegmark. So people might not know this, but you were actually episode number one of this podcast just a couple of years ago, and now we're back. And it so happens that a lot of exciting things happened in both physics and artificial intelligence, both fields that you're super passionate about.

[00:08:34]

Can we try to catch up on some of the exciting things happening in artificial intelligence, especially in the context of the way it's cracking open the different problems of the sciences?

[00:08:46]

Yeah, I'd love to, especially now as we start 2021 here. It's a really fun

[00:08:52]

time to think about what were the biggest breakthroughs in AI — not necessarily the ones the media wrote about, but the ones that really matter. And what does that mean for our ability to do better science?

[00:09:04]

What does it mean for our ability to help people around the world? And what does it mean for new problems that it could cause if we're not smart enough to avoid them? So, you know, what did we learn, basically, from this? Yes, absolutely. So one of the amazing things you're a part of is the AI Institute for Artificial Intelligence and Fundamental Interactions. What's up with this institute?

[00:09:29]

What are you working on? What are you thinking about? The idea is something I'm very much on fire with, which is basically AI meets physics.

[00:09:38]

And, you know, it's been almost five years now since I shifted my own MIT research from physics to machine learning.

[00:09:47]

And in the beginning, I noticed that a lot of my colleagues, even though they were polite about it, were kind of wondering why I was doing this weird stuff.

[00:09:55]

He's lost his mind, basically.

[00:09:57]

But then gradually, I got together with some colleagues and we were able to persuade more and more of the other professors in the physics department to get interested in this. And now we've got this amazing NSF center, 20 million bucks for the next five years, at MIT and a bunch of neighboring universities here also. And I noticed now those colleagues who were looking at me funny have stopped asking what the point of this is, because it's becoming more clear. And I really believe that, of course, AI can help physics a lot to do better physics, but physics can also help AI a lot, both by building better hardware —

[00:10:43]

my colleague Marin Soljacic, for example, is working on an optical chip for much faster machine learning, where the computation is done not by moving electrons around, but by moving photons around: dramatically less energy use,

[00:10:58]

faster, better. We can also help AI a lot, I think, by having a different set of tools and a different, maybe more audacious, attitude. You know, AI has to a significant extent been an engineering discipline, where you're just trying to make things that work and being more interested in maybe selling them than in figuring out exactly how they work and proving theorems about

[00:11:28]

when they will always work, right? Contrast that with physics. You know, when Elon Musk sends a rocket to the International Space Station, they didn't just train it with machine learning: "Oh, let's fire it a little bit more to the left, a bit more to the right. Oh, that one crashed. Let's try here." No, we figured out Newton's laws of gravitation and other things, and got a really deep, fundamental understanding. And that's what gives us such confidence in rockets.

[00:11:56]

And my vision is that, in the future, all machine learning systems that actually have impact on people's lives will be understood at a really, really deep level, so we trust them not because some sales rep told us to, but because they've earned our trust — and for really safety-critical things, we can even prove that they will always do what we expect them to do. That's very much the physics mindset. So it's interesting, if you look at big breakthroughs that have happened in machine learning this year: from dancing robots — it's pretty fantastic, not just because it's cool, but if you think about it, not that many years ago there was this YouTube video at the DARPA challenge where the MIT robot comes out of the car and face-plants.

[00:12:45]

Yeah, how far we've come in just a few years.

[00:12:49]

Similarly, AlphaFold 2, you know, crushing the protein folding problem. We can talk more about the implications for medical research and stuff, but, hey, you know, that's huge progress.

[00:13:05]

You can look at GPT-3, which can spout off English text that sometimes really, really blows you away. You can look at DeepMind's MuZero, which doesn't just kick your butt in Go and chess and shogi, but also in all these Atari games — and you don't even have to teach it the rules now. You know, what all of those have in common, besides being powerful, is that we don't fully understand how they work.

[00:13:36]

And that's fine if it's just some dancing robots, and the worst thing that can happen is a faceplant, right? Or if they're playing Go, the worst thing that can happen is that they make a bad move and lose the game. And it's less fine if that's what's controlling your self-driving car or your nuclear power plant. And we've seen already that even though Hollywood had all these movies where they try to make us worry about the wrong things, like machines turning evil, the actual bad things that have happened with automation have not been machines turning evil.

[00:14:11]

They've been caused by over-trusting things we didn't understand as well as we thought we did. And even the very simple automated system that Boeing put into the 737 MAX, right? Yes. Killed a lot of people.

[00:14:25]

Was it that that little, simple system was evil? Of course not. But we didn't understand it as well as we should have.

[00:14:32]

And we trusted it without understanding it, exactly. We didn't even understand that we didn't understand. That humility is really at the core of being a scientist. I think step one, if you want to be a scientist, is don't ever fool yourself into thinking you understand things when you actually don't. Yes, right.

[00:14:52]

That's probably good advice for humans in general. I think humility in general is good, but in science it's so spectacular. Like, why did we have the wrong theory of gravity from Aristotle onward until Galileo's time? Why would we believe something so dumb as that if I throw this water bottle, it's going to go up with constant speed until it realizes that its natural motion is down and changes its mind? You know, because people just kind of assumed Aristotle was right — he's an authority.

[00:15:21]

We didn't question it. Why did we believe things like that the sun is going around the earth? Why did we believe that time flows at the same rate for everyone, until Einstein? The same exact mistake over and over again: we just weren't humble enough to acknowledge that we actually didn't know for sure. We assumed we knew, so we didn't discover the truth because we assumed there was nothing there to be discovered, right? There was something to be discovered about the 737 MAX.

[00:15:49]

And if we had been a bit more suspicious and tested it better, we would have found it. And it's the same thing with most harm that's been done by automation so far, I would say. So, I don't know if you ever heard of a company called Knight Capital? No? So good —

[00:16:04]

That means you didn't invest in them earlier.

[00:16:07]

They deployed this automated trading system, all nice and shiny. They didn't understand it as well as they thought, and it went about losing ten million bucks per minute. Yeah, for forty-four minutes straight, until someone presumably was like, "Oh, shut it off."

[00:16:24]

You know, was it evil?

[00:16:25]

No, it was again misplaced trust in something they didn't fully understand, right? And there have been so many incidents. Even when people have been killed by robots — which is quite rare still —

[00:16:38]

but in factory accidents, in every single case it's been not malice, just that the robot didn't understand that a human is different from an auto part or whatever.

[00:16:47]

And so this is why I think there's so, so much opportunity for a physics approach, where you just aim for a higher level of understanding.

[00:16:59]

And if you look at all these systems that we talked about, from

[00:17:05]

reinforcement learning systems and dancing robots to all these neural networks that power GPT-3 and Go-playing software — they're all basically black boxes. Not so different from if you teach a human something: you have no idea how their brain works, right? Except the human brain at least has been error-corrected during many, many centuries of evolution, in a way that some of these systems have not, right? And my MIT research is entirely focused on demystifying this black box.

[00:17:38]

Intelligible intelligence is my slogan. That's a good line. Intelligible intelligence.

[00:17:44]

Yeah. We shouldn't settle for something that seems intelligent; it should be intelligible, so that we actually trust it because we understand it, right? Like, again, Elon trusts his rockets because he understands Newton's laws and thrust and how everything works. And let me tell you — can I tell you why I'm optimistic about this? Yes. I think we've made a bit of a mistake, where some people still think that somehow we're never going to understand neural networks,

[00:18:12]

and we're just going to have to learn to live with this very powerful black box. Basically, for those of you who haven't spent time building your own, it's super simple what happens inside: you send in a long list of numbers, and then you do a bunch of operations on them — multiply by matrices, et cetera, et cetera — and some other numbers come out; that's the output of it. And then there are a bunch of knobs you can tune.

[00:18:39]

And when you change them, you know, it affects the computation, the input-output relation. And then you just give the computer some definition of good, and it keeps optimizing these knobs until it performs as well as possible. And often you go like, wow, that's really good — this robot can dance, or this machine is beating me at chess now. And in the end, you have something which, even though you can look inside it, you have very little idea how it works.
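To make that "list of numbers in, matrices, numbers out, knobs you can tune" picture concrete, here is a minimal sketch in Python/NumPy. The layer sizes, the tanh nonlinearity, and the random initialization are arbitrary illustration choices, not anything specific from the conversation:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "knobs": two weight matrices and two bias vectors, initialized randomly.
W1, b1 = rng.normal(size=(16, 6)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def network(x):
    """Send in a list of 6 numbers, multiply by matrices, get 1 number out."""
    h = np.tanh(W1 @ x + b1)  # first layer: matrix multiply plus nonlinearity
    return W2 @ h + b2        # second layer: another matrix multiply

print(network(np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])))
```

Training is then just the process of nudging W1, b1, W2, b2 until the outputs match whatever "definition of good" you gave the computer.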

[00:19:03]

You know, you can print out tables of all the millions of parameters in there.

[00:19:08]

Is it crystal clear now how it's working? Of course not, right?

[00:19:11]

Many of my colleagues seem willing to settle for that, and I'm like, no, that's like the halfway point.

[00:19:19]

Some have even gone as far as sort of guessing that the inscrutability of this is where some of the power comes from — some sort of mysticism. I think that's total nonsense. I think the real power of

[00:19:34]

neural networks comes not from inscrutability, but from differentiability. And what I mean by that is simply that the output changes only smoothly if you tweak your knobs, and then you can use all these powerful methods we have for optimization in science: we can just tweak them a little bit and see, did that get better or worse? That's the fundamental idea of machine learning, that the machine itself can keep optimizing until it gets better. Suppose you wrote this algorithm instead in Python or some other programming language,

[00:20:09]

and then what the knobs did was just change random letters in your code. Mm-hmm. That would just epically fail. You change one thing, and instead of saying "print" it says "sint" — syntax error — and you don't even know, was that for the better or for the worse, right? This, to me, is the fundamental power of neural networks.
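A tiny sketch of that tweak-and-check loop, continuing the toy picture above: a simple model whose knobs are tuned by nudging each one slightly, measuring whether some "definition of good" improved, and stepping in the direction that helped. The linear model, squared-error loss, and step sizes are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(100, 6))                             # 100 rows of 6 inputs
y = x @ np.array([2., -1., .5, 0., 3., 1.]) + 4.          # the behavior we want to match

w, b = np.zeros(6), 0.0                                    # the knobs, at some starting setting

def loss(w, b):
    """A definition of 'good': mean squared error between output and target."""
    return np.mean((x @ w + b - y) ** 2)

eps, lr = 1e-4, 0.05
for step in range(500):
    # Tweak each knob a tiny bit, see whether the loss got better or worse,
    # then move every knob in whichever direction helped (a numerical gradient).
    grad_w = np.array([(loss(w + eps * np.eye(6)[i], b) - loss(w, b)) / eps
                       for i in range(6)])
    grad_b = (loss(w, b + eps) - loss(w, b)) / eps
    w, b = w - lr * grad_w, b - lr * grad_b

print(round(loss(w, b), 6), np.round(w, 2), round(b, 2))
```

The contrast with editing source code letter by letter is that here every small nudge still yields a valid "program", and the loss tells you smoothly whether the nudge helped; random letter changes give you a syntax error and no signal at all.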

[00:20:30]

And just to clarify, changing the different letters in a program would not be a differentiable process.

[00:20:36]

It would typically make it an invalid program, and then you wouldn't even know, if you changed more letters, whether it would make it work again. Right.

[00:20:44]

So that's the magic of neural networks, the differentiability: that every setting of the parameters is a program,

[00:20:52]

and you can tell whether it's better or worse. Right. And so you don't like the poetry of the mystery of neural networks as the source of their power?

[00:21:00]

I generally like poetry, but in this case it's so misleading, and above all it shortchanges us. It makes us underestimate the good things we can accomplish. So what we've been doing in my group is basically: step one, train the mysterious neural network to do something well, and then, step two, do some additional AI techniques to see if we can now transform this black box into something equally intelligent that you can actually understand.

[00:21:32]

So, for example — I'll give you one example — this AI Feynman project that we just published.

[00:21:37]

So we took the 100 most famous or complicated equations from one of my favorite physics textbooks — in fact, the one that got me into physics in the first place — the Feynman Lectures on Physics.

[00:21:51]

And so you have a formula: maybe what goes into the formula is six different variables, and what comes out is one. So then you can make like a giant Excel spreadsheet with seven columns. You put in just random numbers for the six columns, for those six input variables, and then you calculate, with the formula, the seventh column, the output. So maybe it's like the force equals, in the last column, some function of the others.

[00:22:17]

And then the task is: OK, if I don't tell you what the formula was,

[00:22:21]

can you figure it out from looking at the spreadsheet I gave you? Yes. This problem is called symbolic regression.
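A rough sketch of that setup in Python, purely as an illustration of the shape of the data rather than the actual benchmark files: generate the seven-column "spreadsheet" from a known formula (here a form of Newton's law of gravitation, chosen arbitrarily), which is then hidden from the tool whose job is to rediscover it.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Six input columns, filled with random numbers.
data = pd.DataFrame({
    "G":  rng.uniform(0.1, 10, n),
    "m1": rng.uniform(0.1, 10, n),
    "m2": rng.uniform(0.1, 10, n),
    "x":  rng.uniform(1, 10, n),
    "y":  rng.uniform(1, 10, n),
    "z":  rng.uniform(1, 10, n),
})

# The seventh column, computed with a formula the symbolic-regression tool
# is never told about: F = G * m1 * m2 / (x^2 + y^2 + z^2).
data["F"] = data.G * data.m1 * data.m2 / (data.x**2 + data.y**2 + data.z**2)

data.to_csv("mystery.csv", index=False)  # the "spreadsheet" handed to the tool
```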

[00:22:29]

If I tell you that the formula is what we call a linear formula — so it's just that the output is a sum of all the inputs times some constants — that's the famous easy problem we can solve. We do it all the time in science and engineering.

[00:22:46]

But the general one, if it's a more complicated function with logarithms or cosines or other math, is a very, very hard one, and probably impossible to do fast in general, just because the number of formulas with n symbols grows exponentially — just like the number of passwords you can make grows dramatically with length.
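For the easy linear case Max mentions, a minimal sketch of how the hidden constants fall straight out of ordinary least squares; the specific constants here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 6))                       # six input columns
true_c = np.array([1.5, -2.0, 0.0, 3.0, 0.5, -1.0])  # hidden linear constants
y = X @ true_c                                       # seventh column, the output

# Ordinary least squares recovers the constants.
c_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(c_hat, 3))
```

The general nonlinear case has no such closed-form shortcut, which is exactly where the neural-network-first, then-dissect approach described next comes in.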

[00:23:06]

But we had this idea that if you first have a neural network that can actually approximate the formula — you just train that, even if you don't understand how it works — that can be the first step towards actually understanding how it works. So that's what we do first, and then we study that neural network, and put in all sorts of other data that wasn't in the original training data, and use that to discover simplifying properties of the formula.

[00:23:36]

And that lets us break it apart, often, into many simpler pieces, in a kind of divide-and-conquer approach. So we were able to solve all of those hundred formulas — to discover them automatically — plus a whole bunch of other ones. And it's actually kind of humbling to see that this code, which anyone who is listening to this can just type "pip install aifeynman" on their computer and run, can actually do what Johannes Kepler spent four years doing when he stared at Mars data until, like, finally, eureka:

[00:24:07]

"This is an ellipse!" Yeah. This will do it automatically for you in one hour, right? Or Max Planck — he was looking at how much radiation comes out at the different wavelengths from a hot object and discovered the famous blackbody formula. This discovers it automatically. I'm actually excited about

[00:24:30]

seeing if we can discover not just old formulas again, but new formulas that no one has seen before.

[00:24:37]

And do you like this process of using kind of a neural network to find some basic insights and then dissecting the neural network to gain the final insight? So in that way,

[00:24:49]

you've been forcing the explainability issue — really trying to analyze the neural network for the things it knows, in order to come up with the final beautiful, simple theory underlying the initial system that you were looking at. I love that.

[00:25:09]

And the reason I'm so optimistic that it can be generalized to so much more is because that's exactly what we do as human scientists. Think of Galileo, whom we mentioned, right? When he was a little kid, if his dad threw him an apple, he would catch it.

[00:25:26]

Why? Because he had a neural network in his brain that he had trained to predict the parabolic orbit of apples that are thrown under gravity. If you throw a tennis ball to a dog, it also has this same ability of deep learning, to figure out how the ball is going to move and catch it. But Galileo went one step further.

[00:25:46]

When he got older, he went back and was like, wait a minute,

[00:25:51]

I can write down a formula for this: y equals x squared, a parabola. And he helped revolutionize physics as we know it, right?

[00:26:02]

So there was a neural network in there from childhood that captured the experiences of observing different kinds of trajectories, and then he was able to go back in with another, extra little neural network and analyze all those experiences and be like, wait a minute, there's a deeper rule here.

[00:26:21]

Exactly. He was able to distill out, in symbolic form, what that complicated black box was doing. Not only did the form he got ultimately become more accurate — and similarly, this is how Newton got Newton's laws, which is why Elon can send rockets to the space station now — so it's not only more accurate, but it's also simpler, much simpler. And it's so simple that we can actually describe it to our friends and each other.

[00:26:50]

Right. And we've talked about it just in the context of physics now. But, hey, you know, isn't this what we're doing when we're talking to each other? Also, we go around with our neural networks just like dogs and cats and chipmunks and bluejays, and we experience things in the world. But then we humans do this additional stuff on top of that, where we then distill out certain high level knowledge that we've extracted from this in a way that we can communicate to each other in a symbolic form in English in this case.

[00:27:21]

Right.

[00:27:22]

So if we can do it and we believe that we are information processing entities, then we should be able to make machine learning that does it also.

[00:27:32]

Well, do you think the entire thing could be learning? Because there's the dissection process — like for AI Feynman, the secondary stage feels like something like reasoning, and the initial step feels more like the more basic kind of differentiable learning. Do you think the whole thing could be differentiable learning? Could the whole thing be basically neural networks on top of each other — like, turtles all the way down, neural networks all the way down?

[00:28:01]

I mean, that's a really interesting question. We know that in your case it is neural networks all the way down, because that's what you have in your skull — a bunch of neurons doing their thing, right? But if you ask the question more generally, what algorithms are being used in your brain, I think it's super interesting to compare. I think we've gotten it a little bit backwards historically, because we humans first discovered

[00:28:27]

good old-fashioned, logic-based AI — what we often call GOFAI, for good old-fashioned AI — and then more recently we did machine learning, because it required bigger computers, so we had to discover it later. So we think of machine learning with neural networks as the modern thing and the logic-based AI as the old-fashioned thing. But if you look at evolution on Earth, it's actually been the other way around. I would say that, for example, an eagle has a better vision system than I have,

[00:29:04]

and dogs are just as good at catching tennis balls as I am. All this stuff, which is done by training a neural network and not interpreting it in words, you know, is something so many of our animal friends can do at least as well as us, right? What is it that we humans can do that the chipmunks and the eagles cannot? It's more to do with this logic-based stuff, right, where we can extract out information in symbols, in language,

[00:29:33]

and now even with equations, if you're a scientist, right? So basically, what happened was, first we built these computers that could multiply numbers real fast and manipulate symbols, and we felt they were pretty dumb. And then we made neural networks that can see as well as a cat can, and do a lot of this inscrutable black-box stuff. What we humans can do also is put the two together in a useful way. Yes, artificially — in our own brain?

[00:30:01]

Yes, in our own brain. So if we ever want to get artificial general intelligence that can do all jobs as well as humans can, then that's what's going to be required: to be able to combine the neural networks with the symbolic — combine the old AI with the new AI — in a good way, like we do it in our brains. And there seem to be basically two strategies I see in industry now. One scares the heebie-jeebies out of me, and the other one I find much more encouraging.

[00:30:31]

OK, which one? Can we break them apart — which are the two? The one that scares the heebie-jeebies out of me is this attitude: we're just going to make ever bigger systems that we still don't understand until they can be as smart as humans. What could possibly go wrong, right? Yeah. I think it's just such a reckless thing to do. And unfortunately, if we actually succeed as a species in building artificial general intelligence while we still have no clue how it works,

[00:30:57]

I think there's at least a 50 percent chance we're going to be extinct before too long. It's just going to be an utter, epic own goal.

[00:31:06]

It's that 44-minutes-losing-money problem, or like the paperclip problem, where we don't understand how it works and it just, in a matter of seconds, runs away in some kind of direction.

[00:31:18]

That's going to be very problematic. Even long before you have to worry about the machines themselves somehow deciding to do things

[00:31:27]

evil to us, we have to worry about people using machines that are short of AGI, but powerful, to do bad things. I mean, just take a moment, and if anyone is not particularly worried about advanced AI, just take 10 seconds and think about your least favorite leader on the planet right now. Don't tell me who it is — I want to keep this apolitical — but just see the face in front of you, that person, for ten seconds.

[00:31:55]

Yes. Now imagine that that person has this incredibly powerful AI under their control and can use it to impose their will on the whole planet. How does that make you feel? Yeah. So, can we break it apart just briefly? For the 50 percent chance that we'll run into trouble with this approach, do you see the bigger worry in that leader, or humans, using the system to do damage? Or are you more worried — and I think I'm in this camp — more worried about, like, accidental, unintentional destruction of everything: sort of like humans trying to do good, and,

[00:32:40]

in a way where everyone agrees it's kind of good, it's just that they're trying to do good without understanding. Because I think every leader in history, to some degree, thought they were trying to do good.

[00:32:51]

Oh, yeah. I'm sure Hitler thought he was doing good too.

[00:32:55]

I've been reading a lot about Stalin. I'm sure Stalin legitimately thought that communism was good for the world and that he was doing good.

[00:33:03]

I think Mao Zedong thought what he was doing with the Great Leap Forward was good, too.

[00:33:06]

Yeah, I'm actually concerned about both of those. I promise to answer this in detail, but before we do that, let me finish answering the first question. Because I told you that there were two different ways we could get artificial general intelligence, and one scares the hell out of me, which is this one where we build something — we just say bigger neural networks, ever more hardware, and just train on more data — and poof, now it's very powerful. That, I think, is the most unsafe and reckless approach.

[00:33:37]

The alternative to that is the intelligible intelligence approach instead, where we say neural networks are just a tool for the first step, to get the intuition, but then we're also going to spend serious resources on other AI techniques for demystifying this black box and figuring out what it's actually doing, so we can convert it into something that's equally intelligent, but where we actually understand what it's doing. Maybe we can even prove theorems about it: that this car here will never be hacked when it's driving, because here is the proof.

[00:34:19]

There is a whole science of this. It doesn't work for neural networks that are big black boxes, but it works well on certain other kinds of code.

[00:34:28]

That approach, I think, is much more promising. That's exactly why I'm working on it, frankly, not just because I think it's cool for science, but because I think the more we understand these systems, the better the chances that we can make them do the things that are good for us that are actually intended, not unintended.

[00:34:47]

So you think it's possible to prove things about something as complicated as a neural network? That's the hope.

[00:34:54]

Well, ideally, there's no reason it has to be a neural network in the end, either, right? We discovered Newton's laws of gravity with the neural network in Newton's head.

[00:35:05]

Yes, but that's not the way it's programmed into the navigation system of Elon Musk's rockets anymore; it's written in C++, or I don't know what language he uses exactly. Yeah. And then there are software tools for what's called symbolic verification. DARPA and the US military have done a lot of really great research on this, because they really want to understand that when they build weapons systems, they don't just fire at random or malfunction, right? And there is even a whole operating system kernel called seL4 that's been developed with DARPA backing, where you can actually mathematically prove that this thing can never be hacked.

[00:35:44]

One day, I hope that will be something you can say about the OS that's running on our laptops too. As you know, we're not there, but I think we should be ambitious, frankly. Yeah. And if we can use machine learning to help do the proofs and so on as well, right — then it's much easier to verify that a proof is correct than to come up with the proof in the first place. That's really the core idea here.

[00:36:10]

If someone comes on your podcast and says they proved the Riemann hypothesis or some sensational new theorem, it's much easier for someone else — some smart math grad students — to check, oh, there's an error here in equation five, or this really checks out, than it was to discover the proof. Yeah, although some of those proofs are pretty complicated, but yes, it's still nevertheless much easier to verify.

[00:36:38]

I love the optimism. You know, even with the security of systems, there's a kind of cynicism that pervades people who think about this, which is like, well, it's hopeless — in the same sense, exactly like you're saying with neural networks, that it's hopeless to understand what's happening. With security, people are just like, well, there are always going to be attack vectors, ways to attack the system.

[00:37:06]

But you're right: we're just very new with these computational systems, we're new with these intelligent systems. And it's not out of the realm of possibility — just like people came to understand the movement of the stars and the planets and so on — yeah, it's entirely possible that, hopefully soon, but it could be within a hundred years, we start to have, like, obvious laws of gravity about intelligence and, God forbid, about consciousness too.

[00:37:36]

Agreed.

[00:37:37]

You know, I think, of course, if you're a company selling computers that get hacked a lot, it's in your interest that people think it's impossible to make them safe, so nobody gets the idea of suing you. But I want to really inject optimism here.

[00:37:49]

It's absolutely possible to do much better than we're doing now. And, you know, your laptop does so much stuff.

[00:38:00]

You don't need the music player to be super safe in your future self-driving car, right? If someone hacks it and starts playing music you don't like, the world won't end. But what you can do is break things out and say: the drive computer that controls your safety must be completely physically decoupled from the entertainment system, and it must physically be such that it can't take over-the-air updates while you're driving. And it can ultimately have some operating system on it which is symbolically verified and proven to always do what it's supposed to do. We can basically have that, and companies should take that attitude.

[00:38:46]

They should look at everything they do and say, what are the few systems in our company that threaten the whole life of the company if they get hacked? And have the highest standards for those, and then they can save money by going for the poorly understood stuff for the rest. You know, this is very feasible, I think.

[00:39:04]

And coming back to the bigger question that you worried about — that there will be unintentional failures — I think there are two quite separate risks here, right? We talked a lot about one of them, which is that the goals of the human are noble: the human says, I want this airplane to not crash, because this is not Mohamed Atta now flying the airplane, right? And now there's this technical challenge of making sure that the autopilot is actually going to behave as the pilot wants.

[00:39:36]

If you set that aside, there's also the separate question, how do you make sure that the goals of the pilot are actually aligned with the goals of the passenger?

[00:39:45]

How do you make sure, much more broadly — if we can all agree as a species that we would like things to kind of go well for humanity as a whole — that the goals are aligned? This is the alignment problem.

[00:39:57]

And there's been a lot of progress, in the sense that there are suddenly huge amounts of research going on about it. And I'm very grateful to Elon Musk for giving us that money five years ago, so we could launch the first research program on technical AI safety and alignment.

[00:40:15]

There's a lot of stuff happening, but I think we need to do more than just make sure machines always do what their owners want. You know, that wouldn't have prevented September 11, if Mohamed Atta said, "OK, autopilot, please fly into the World Trade Center," and it's like, OK. That even happened in a different situation: there was a depressed pilot named Andreas Lubitz, and he told his Germanwings passenger jet to fly into the Alps.

[00:40:44]

He just told the computer to change the altitude to 100 meters or something like that. And you know what the computer said? OK.

[00:40:52]

And it had the frigging topographical map of the Alps in there. It had GPS, everything. No one had bothered teaching it even the basic kindergarten ethics of: no, we never want airplanes to fly into mountains under any circumstances.

[00:41:07]

And so we have to think beyond just the technical issues and think about how we align, in general, incentives on this planet with the greater good.

[00:41:20]

So, starting with simple stuff like that: every airplane that has a computer in it should be taught whatever kindergarten ethics it's smart enough to understand, like, no, don't fly into fixed objects — if the pilot tells you to do so, then go into autopilot mode, send an email to the cops, and land at the nearest airport. You know, any car with a forward-facing camera should just be programmed by the manufacturer so that it will never accelerate into a human, ever.

[00:41:50]

Hmm. That would avoid things like the Nice attack and many horrible terrorist vehicle attacks where they deliberately did that, right? This was not some sort of thing where, oh, you know, the US and China have different views. No, there was not a single car manufacturer in the world, right, who wanted the cars to do this. They just hadn't thought through the alignment. And if you look more broadly at problems that happen on this planet, the vast majority have to do with poor alignment.

[00:42:20]

I mean, think about this — let's go back really big, because I know you like that. Yeah. Long ago in evolution, we had these genes, and they wanted to make copies of themselves. That's really all they cared about. So some genes said, hey, I'm going to build a brain on this body I'm in, so that I can get better at making copies of myself. And then they decided, for their benefit — to get copied more — to align your brain's incentives with their incentives.

[00:42:51]

So it didn't want you to starve to death.

[00:42:56]

So it gave you an incentive to eat. Yes. And it wanted you to make copies of the genes, so it gave you an incentive to fall in love and do lots of naughty things to make copies of itself, right? Yeah. So that was successful value alignment done by the genes: they created something more intelligent than themselves, but they made sure to try to align the values. But then something went a little bit wrong, against the idea of what the genes wanted, because a lot of humans discovered, hey, you know, we really like this business about sex that the genes made us enjoy,

[00:43:33]

but we don't want to have babies right now. Yeah. So we're going to hack the genes and use birth control. And I really feel like drinking Coca-Cola right now, but I don't want to get a pot belly, so I'm going to drink Diet Coke. You know, we have all these things we've figured out, because we're smarter than the genes, for how we can actually subvert their intentions. So it's not surprising that we humans now, when we are in the role of these genes — creating other non-human entities with a lot of power — have to face the same exact challenge.

[00:44:05]

How do we make other powerful entities have incentives that are aligned with ours, so they won't hack those incentives? Corporations, for example: we humans decided to create corporations because they can benefit us greatly. Now all of a sudden there's a supermarket — I can go buy food there.

[00:44:21]

I don't have to hunt. Awesome. And then to make sure that this corporation would do things that were good for us and not bad for us, we created institutions to keep them in check. Like if the local supermarket sells poisonous food, then.

[00:44:40]

the owners of the supermarket have to spend some years reflecting behind bars, right?

[00:44:47]

So we created incentives to align them. But of course, just like we were able to see through this thing about birth control, if you're a powerful corporation, you also have an incentive to try to hack the institutions that are supposed to govern you, because you, ultimately, as a corporation, have an incentive to maximize your profit — just like you have an incentive to maximize the enjoyment your brain has, not that of your genes. So if they can figure out a way of bribing regulators, then they're going to do that.

[00:45:17]

In the US, we kind of caught on to that and made laws against corruption and bribery. Then, in the late eighteen hundreds,

[00:45:27]

Teddy Roosevelt realized that, no, we were still being kind of hacked because the Massachusetts railroad companies had like a bigger budget than the state of Massachusetts and they were doing a lot of very corrupt stuff. And so he did the whole trustbusting thing to try to align these other non-human entities, the companies, again, more with the incentives of Americans as a whole. It's not surprising, though, that, you know, this is a battle you have to keep fighting.

[00:45:52]

Now we have even larger companies than we ever had before, and of course they're going to try to, again, subvert the institutions. Not because — you know, I think people make the mistake of getting all worked up thinking about things in terms of good and evil, like arguing about whether corporations are good or evil, or whether robots are good or evil. A robot isn't good or evil; it's a tool.

[00:46:23]

And you can use it for great things, like robotic surgery, or for bad things. And a corporation also is a tool, of course. And if you give good incentives to the corporation, it will do great things, like start a hospital or a grocery store. If you give it really bad incentives, then it's going to start maybe marketing addictive drugs to people, and you'll have an opioid epidemic, right? It's all about incentives. We should not make the mistake of getting into some sort of fairy-tale good-evil thing about corporations or robots; we should focus on putting the right incentives in place.

[00:46:58]

My optimistic vision is that if we can do that, you know, then we can really get good things. We're not doing so great with that right now, either on AI, I think, or on other intelligent non-human entities like big companies. Like, we just have a new secretary of defense

[00:47:15]

who's going to start now in the Biden administration, who was an active member of the board of Raytheon. Yeah. So, you know, I have nothing against Raytheon — I'm not a pacifist — but there's an obvious conflict of interest if someone is in a job where they decide who they're going to contract with. And I think maybe we need another Teddy Roosevelt to come along again and say, hey, you know, we want what's good for all Americans, and we need to go do some serious realigning, again, of the incentives that we're giving to these big companies.

[00:47:56]

And then we're going to be better off. It seems natural with human beings — just like you beautifully described the history of this whole thing, how it all started with the genes, and they're probably pretty upset by all the unintended consequences that have happened since — but it seems that it kind of works out. Like, there's this collective intelligence that emerges at the different levels, and it seems to find, sometimes at the last minute, a way to realign the values, or keep the values aligned.

[00:48:26]

It's almost, um — it finds a way. Like, different leaders, different humans pop up all over the place that reset the system.

[00:48:36]

Do you — I mean, do you have an explanation for why that is, or is that just survivor bias?

[00:48:42]

And also, is that somehow fundamentally different than with AI systems, where you're no longer dealing with something that was a direct — maybe companies are the same — a direct byproduct of the evolutionary process?

[00:48:58]

I think there is one thing which has changed. That's why I'm not all that optimistic — that's why I think there's about a 50 percent chance, if we take the dumb route with artificial intelligence, that humanity will be extinct in this century. First, just the big picture:

[00:49:19]

companies need to have the right incentives — even governments,

[00:49:24]

right? We used to have governments — usually there was just some king, you know, who was the king because his dad was the king. And then there were some benefits of having this powerful kingdom, or empire of any sort, because it could prevent a lot of local squabbles. So at least everybody in that region would stop worrying about each other, and the incentives of different cities in the kingdom became more aligned, right?

[00:49:50]

That was the whole selling point. Harari — yeah, Yuval Noah Harari — has a beautiful piece on how empires were collaboration enablers. And then we also, Harari says, invented money for that reason, so we could have better alignment and we could do trade even with people we didn't know. So this sort of stuff has been playing out since time immemorial. And what's changed is that it happens on ever larger scales, right? Technology keeps getting better because science gets better.

[00:50:20]

So now we can communicate over larger distances, transport things faster, over larger distances, and so the entities get ever bigger. But our planet is not getting bigger anymore.

[00:50:31]

So in the past, you could have one experiment that just totally screwed up, like Easter Island, where they actually managed to have such poor alignment that when they went extinct, there was no one else there to come back and replace them. If Elon Musk doesn't get us to Mars and we then go extinct on a global scale, we're not coming back. That's the fundamental difference.

[00:50:57]

And that's a mistake we don't get to recover from. In the past, of course, history is full of fiascos, but it was never the whole planet. And then, OK, now there's this nice uninhabited land here — some other people could move in and organize things better. This is different.

[00:51:16]

The second thing which is also different is that technology gives us so much more empowerment, both to do good things and also to screw up. In the Stone Age,

[00:51:27]

even if you had someone whose goals were really poorly aligned — like maybe he was really pissed off because his Stone Age girlfriend dumped him, and he just wanted to kill as many people as he could — yeah, how many could he really take out with a rock and a stick before he was overpowered, right? Just a handful. Right now,

[00:51:47]

with today's technology, if we have an accidental nuclear war between Russia and the US — which we've almost had about a dozen times — and then we have a nuclear winter, it could take out seven billion people, or six billion people, we don't know.

[00:52:03]

So the scale of the damage we can do is bigger. And there's obviously no law of physics that says technology will never get powerful enough that we could wipe out our species entirely — it would just be a fantasy to think that science is somehow doomed to not get more powerful than that, right? And it's not at all unfeasible, in our lifetime, that someone could design a designer pandemic which spreads as easily as COVID but just basically kills everybody.

[00:52:32]

We already had smallpox; it killed one third of everybody who got it.

[00:52:36]

And what do you think of the — here's an intuition,

[00:52:40]

maybe it's completely naive, this optimistic intuition I have — and maybe it's a biased experience that I have — but it seems like the most brilliant people I've met in my life are all really fundamentally good human beings. And not, like, naive good — like, they really want to do good for the world, in a way that maybe is aligned with my sense of what good means. And so I have a sense that, among the people that will be defining the very cutting edge of technology, there will be much more of the ones that are doing good versus the ones that are doing evil.

[00:53:21]

So in that race, I'm more optimistic that we'll always, like, at the last minute come up with the solutions. If there's an engineered pandemic that has the capability to destroy most of human civilization, it feels like, to me, either leading up to it or as it's going on, we'll be able to rally the collective genius of the human species.

[00:53:49]

I can tell by your smile that you're at least some percentage doubtful.

[00:53:55]

But could that be a fundamental law of human nature? That evolution only creates —

[00:54:02]

that, like, karma is beneficial, good is beneficial, and therefore we'll be all right?

[00:54:09]

I hope you're right. I would really love it if you're right, if there's some sort of law of nature that says that we always get lucky at the last second because of karma.

[00:54:19]

But, you know, I prefer not playing it so close and gambling on that. In fact, I think it can be dangerous to have too strong a faith in that, because it makes us complacent. Like, if someone tells you you never have to worry about your house burning down, then you're not going to put in a smoke detector, because why would you need to? Even when it's sometimes very simple precautions, you don't take them.

[00:54:46]

If you feel like, oh, the government is going to take care of everything for us, I can always trust my politicians — you know, we abdicate our own responsibility. I think it's a healthier attitude to say, yeah, maybe things will work out, but maybe I'm actually going to have to step up myself and take responsibility.

[00:55:02]

And the stakes are so huge. I mean, if we do this right, we can develop all this ever more powerful technology and cure all diseases and create a future where humanity is healthy and wealthy, not just for the next election cycle, but for, like, billions of years throughout our universe. That's really worth working hard for, and not just, you know, sitting and hoping for some sort of fairy-tale karma.

[00:55:25]

Well, I just mean — so you're absolutely right — from the perspective of the individual,

[00:55:29]

Like for me, the primary thing should be to take responsibility and to build the solutions that your skill set allows.

[00:55:38]

Which is a lot. I think we often very much underestimate how much good we can do. If you, or anyone listening to this, is completely confident that our government would do a perfect job handling any future crisis with engineered pandemics or future AI, I want to point to what actually happened in 2020. Do you feel that governments, by and large, around the world handled this flawlessly?

[00:56:08]

That's a really sad and disappointing reality.

[00:56:11]

That hopefully is a wake-up call for everybody — for the scientists, for the engineers, for the researchers. And it was especially disappointing to see how inefficient we were at collecting the right amount of data in a privacy-preserving way, and spreading that data, and utilizing that data to make decisions, all that kind of stuff.

[00:56:36]

Yeah. I made myself a promise many years ago that I would not be a whiner. So when something bad happens to me, of course I take a moment to process the disappointment, but then I try to focus on what I learned from this that can make me a better person in the future. And there's usually something to be learned when I fail. And I think we should all ask ourselves, what can we learn from the pandemic about how we can do better in the future?

[00:57:09]

And you mentioned — there's a really good lesson: you know, we were not as resilient as we thought we were, and we were not as prepared, maybe, as we wish we were. You can even see a very stark contrast around the planet. South Korea —

[00:57:24]

they have over 50 million people. Do you know how many deaths they have from COVID, last time I checked? About 500. You know why? Well, the short answer is that they were prepared. They were incredibly quick — incredibly quick to get on it, with very rapid testing and contact tracing and so on, which is why they never had more cases than they could contact trace effectively, right? They never even had to have the kind of big lockdowns we had in the West.

[00:57:59]

But the deeper answer — it's not just that Koreans are somehow better people —

[00:58:04]

the reason I think they were better prepared was because they had already had a pretty bad hit from the SARS outbreak — which never became a pandemic — something like 17 years ago, I think.

[00:58:17]

So it was kind of fresh in their memory that, you know, we need to be prepared for pandemics. So they were. Right.

[00:58:23]

And so maybe this is the lesson here for all of us to draw from COVID: that rather than just wait for the next pandemic, or the next problem with AI getting out of control, or anything else, maybe we should actually

[00:58:39]

set aside a tiny fraction of our GDP to have people very systematically do some horizon scanning and say, OK, what are the things that could go wrong? And let's duke it out and see which are the more likely ones, and which are the ones that are actually actionable, and then be prepared. And so, one of the observations — one little source of disappointment — is the political division over information that I observed this year: it seemed the discussion was less about

[00:59:19]

...sort of understanding deeply what happened, and more about "there are different truths out there," and arguing that my truth is better than your truth. It's like red versus blue — this ridiculous discourse that doesn't seem to get at any notion of truth, anything like a scientific process. Even science got politicized in ways that are very heartbreaking to me.

[00:59:49]

You have an exciting project on this front of trying to rethink one of — well, you mentioned corporations.

[00:59:59]

One of the other collective intelligence systems that has emerged through all of this is social networks, and just the spread of information on the Internet, our ability to share that information. There are all different kinds of news sources and so on. And, as you said, from first principles, let's rethink how we think about the news, how we think about information. Can you talk about this amazing effort that you're undertaking?

[01:00:29]

Oh, I'd love to. This has been my big covid project, nights and weekends, ever since the lockdown. To segue into this, actually, let me come back to what you said earlier, that you had this hope that in your experience, people who you felt were very talented were often idealistic and wanted to do good. Frankly, I feel the same about all people. By and large — there are always exceptions — but I think the vast majority of everybody, regardless of education and whatnot, really are fundamentally good.

[01:00:59]

Right. So how can it be that people still do so much nasty stuff? I think it has everything to do with the information that we're given. Yes. You know, if you go to Sweden five hundred years ago and you start telling all the farmers that those Danes in Denmark are such terrible people, you know, and we have to invade them...

[01:01:19]

Yes, because they've done all these terrible things that you can't fact check yourself.

[01:01:23]

A lot of Swedes did that. Right.

[01:01:26]

And we're seeing so much of this today in the world, both geopolitically, you know, where we are told that China is bad and Russia is bad and Venezuela is bad.

[01:01:40]

And people in those countries are often told that we are bad.

[01:01:43]

And we also see it at a micro level, you know, where people are told that, oh, those who voted for the other party are bad people.

[01:01:51]

It's not just an intellectual disagreement, but they're bad people and we're getting even more divided. And so how do you reconcile this with this intrinsic goodness in people?

[01:02:05]

I think it's pretty obvious that it has, again, to do with the information they're fed. We evolved to live in small groups where you might know 30 people in total.

[01:02:18]

Right. So you then had a system that was quite good for assessing who you could trust and who you could not. And if someone told you that, you know, Joe there is a jerk, but you had interacted with him yourself and seen him in action, you would pretty quickly realize that maybe that's actually not quite accurate. Right? But now that most people on the planet are people we've never met, it's very important that we have a way of trusting the information we're given.

[01:02:44]

And so, OK, where does the news project come in? Well, throughout history — you can go read Machiavelli, you know, from the fourteen hundreds — and you'll see how people were already then busy manipulating each other with propaganda and stuff.

[01:02:57]

Propaganda is not new at all, and the incentives to manipulate people are not new at all. What is it that's new? What's new is machine learning meets propaganda. That's what's new. That's why this has gotten so much worse. You know, some people like to blame certain individuals — like in my liberal university bubble, many people blame Donald Trump and say it was his fault. I see it differently. I think Donald Trump just had this extreme skill at playing this game in the machine learning algorithm age.

[01:03:36]

A game he couldn't have played 10 years ago. So what's changed? What's changed is that Facebook and Google and other companies — and I'm not badmouthing them, I have a lot of friends who work for these companies, they're good people — deployed machine learning algorithms just to increase profit a little bit, to just maximize the time people spend watching ads. And they totally underestimated how effective those algorithms were going to be. This was, again, black-box, non-intelligible intelligence.

[01:04:05]

They just noticed they were getting more ad revenue growth. It took a long time to even realize why, and how damaging this was for society. Because, of course, what the machine learning figured out was that by far the most effective way of gluing you to your little rectangle was to show you things that triggered strong emotions — anger, resentment, and so on. And whether it was true or not didn't really matter; it was also easier to find stories that weren't true if you weren't limited to true ones.

[01:04:36]

Right — being limited to the truth, that's a very limiting factor.
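A minimal illustrative sketch of the kind of engagement-first ranking being described — purely hypothetical code, assuming some model outputs a predicted watch time per item (no platform's actual system is shown here):

```python
# Illustrative sketch, not any platform's real code: a feed ranker that
# greedily maximizes predicted engagement. Nothing in the objective rewards
# accuracy, so emotionally charged items win even when flagged as dubious.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    predicted_watch_seconds: float  # assumed output of an engagement model
    seems_accurate: bool            # hypothetical quality signal, unused below

def rank_feed(articles: list[Article], k: int = 3) -> list[Article]:
    # The only thing optimized is expected time-on-site.
    return sorted(articles, key=lambda a: a.predicted_watch_seconds, reverse=True)[:k]

feed = rank_feed([
    Article("Calm, accurate policy explainer", 40.0, True),
    Article("Outrage-bait rumor about the other side", 300.0, False),
    Article("Nuanced interview with an opponent", 60.0, True),
])
print([a.title for a in feed])  # the rumor ranks first; accuracy never entered the objective
```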

[01:04:40]

And before long, we got these amazing filter bubbles on a scale we had never seen before. Couple that with the fact that, also...

[01:04:51]

...the online news media was so effective that it killed a lot of print journalism. There are less than half as many journalists now in America, I believe, as there were a generation ago.

[01:05:05]

Print journalism just couldn't compete with online advertising. So all of a sudden, most people are not even reading newspapers; they get their news from social media, and most people only get news in their little bubble. So along come some people like Donald Trump, among the first successful politicians to figure out how to really play this new game and become very, very influential. But I think Donald Trump was a symptom. He took advantage of it.

[01:05:36]

He didn't create it; the fundamental conditions were created by machine learning taking over the news media.

[01:05:44]

So this is what motivated my covid project. As I said before, machine learning and tech in general is not evil, but it's also not good. It's just a tool that you can use for good things or bad things. And as it happens, machine learning in news is mainly used by the big players, big tech, to manipulate people into watching as many ads as possible, which had this unintended consequence of really screwing up our democracy and fragmenting it into filter bubbles.

[01:06:15]

So I thought, well, machine learning algorithms are basically free. They can run on your smartphone for free if someone gives them away to you. Right? There's no reason why they only have to help the big guy manipulate the little guy. They can just as well help the little guy see through all the manipulation attempts from the big guy. So I did this project. You can go to improvethenews.org. The first thing we've built is this little news aggregator. It looks a bit like Google News, except it has these sliders on it to help you break out of your filter bubble.

[01:06:47]

So if you're reading, you can click and go to your favorite topic. And then if you just slide the left-right slider all the way over — there are two sliders, right? Yeah. The most obvious one is the one that has the left-right label on it. You go to the left, you get one set of articles; you go to the right, you see a very different truth appearing.

[01:07:09]

That's literally left and right on the political spectrum.

[01:07:13]

So if you're reading about immigration, for example, it's very, very noticeable. And I think step one, always, if you want to not get manipulated, is just to be able to recognize the techniques people use. So it's very helpful to see how they spin things on the two sides. I think many people are under the misconception that the main problem is fake news. It's not. I had an amazing team of MIT students with whom we did an academic project over the summer, using machine learning to detect the main kinds of bias.
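A toy sketch of how such a left-right slider could work, assuming each article has already been assigned a bias score by some classifier — the field names, score range, and thresholds below are illustrative assumptions, not the actual improvethenews.org implementation:

```python
# Toy sketch (illustrative only): filter articles by a political-bias score
# in [-1.0, +1.0], where -1 is far left and +1 is far right; a slider value
# selects which part of the spectrum to show.
from dataclasses import dataclass

@dataclass
class ScoredArticle:
    headline: str
    bias: float  # assumed output of a trained bias classifier, -1 (left) .. +1 (right)

def slider_filter(articles, slider: float, width: float = 0.5):
    """Keep articles whose bias lies within `width` of the slider position."""
    return [a for a in articles if abs(a.bias - slider) <= width]

topic = [
    ScoredArticle("Immigration surge strains border towns", +0.7),
    ScoredArticle("Study: immigrants boost local economies", -0.6),
    ScoredArticle("What the new immigration bill actually says", 0.0),
]
print([a.headline for a in slider_filter(topic, slider=-1.0)])  # slid all the way left
print([a.headline for a in slider_filter(topic, slider=+1.0)])  # slid all the way right
```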

[01:07:49]

And yes, of course, sometimes there's fake news, where someone just claims something that's false, right? Like, oh, Hillary Clinton just got divorced or something. But what we see much more of is actually just omissions. There are some stories which just won't be mentioned by the left or the right because it doesn't suit their agenda, and then they mention other ones very, very much. So, for example...

[01:08:19]

...there have been a number of stories about the Trump family's financial dealings, and then there have been a bunch of stories about the Biden family's — Hunter Biden's — financial dealings. Surprise, surprise: they don't get equal coverage on the left and the right. One side loves to cover the Hunter Biden stuff and one side loves to cover the Trump stuff. You'll never guess which is which, right? But the great news is, if you're a normal American citizen and you dislike corruption in all its forms, then slide, slide...

[01:08:52]

You can look at both sides and you can see all the political corruption stories.

[01:08:58]

It's really liberating to just take in both sides.

[01:09:04]

Seeing the spin on both sides somehow unlocks your mind to think on your own. I don't know — it's the same thing that was useful, right, in Soviet Union times, when everybody was much more aware that they were surrounded by propaganda.

[01:09:23]

It's so interesting what you're saying, actually. Noam Chomsky, who used to be my MIT colleague, once said that propaganda is to democracy what violence is to totalitarianism.

[01:09:38]

And what he means by that is if you have a really totalitarian government, you don't need propaganda. People will do what you want them to do anyway.

[01:09:49]

But out of fear, right? Yes, but otherwise, you need propaganda. So I would say actually that the propaganda is much higher quality in democracies, more believable. And it's brilliant.

[01:10:01]

It's really striking when I talk to colleagues, science colleagues from Russia and China and so on. I notice they are actually much more aware of the propaganda in their own media than many of my American colleagues are about the propaganda in Western media.

[01:10:18]

That means the propaganda in the Western media is better. Yes, it really is better in the West — even the propaganda.

[01:10:27]

But once you realize that, you realize there's also something very optimistic there, that you can do something about it. Right. Because first of all, with omissions, as long as there's no outright censorship, you can just look at both sides and pretty quickly piece together a much more accurate idea of what's actually going on. Right — and develop a natural skepticism, too, an analytical, scientific mind with which you take in information. Yeah.

[01:10:58]

And I have to say, sometimes I feel that some of us in the academic bubble are too arrogant about this and somehow think, oh, it's just the people who aren't educated who get fooled — when we are often just as gullible.

[01:11:12]

You know, we read only our media and don't see through things. But anyone who looks at both sides like this and compares will immediately start noticing the shenanigans being pulled.

[01:11:23]

And, you know, part of what I'm trying to do with this app — big tech has to some extent tried to blame the individual for being manipulated, much like Big Tobacco tried to blame individuals entirely for smoking. And then later on, you know, our government stepped up and said, actually, you can't just blame little kids for starting to smoke; we have to have more responsible advertising and so on. I think it's a bit the same here.

[01:11:50]

It's very convenient for big tech to blame it on people being so dumb that they get fooled. The blame usually comes in the form of saying, oh, it's just human psychology.

[01:12:01]

People just want to hear what they already believe.

[01:12:03]

But Professor David Rand at MIT actually partly debunked that with a really nice study showing that people tend to be interested in hearing things that go against what they believe, if it's presented in a respectful way. Like, suppose, for example, that you have a company and you're just about to launch this project, and you're convinced it's going to work, and someone says, you know, I hate to tell you this, but this is going to fail.

[01:12:31]

Would you be like, shut up, I don't want to hear it, la la la la la? No — you would be interested.

[01:12:38]

And also, if you are on an airplane, back in pre-covid times, you know, and the guy next to you is clearly from the opposite side of the political spectrum, but is very respectful and polite to you, wouldn't you be kind of interested to hear a bit about how he or she thinks about things? Of course. But it's not so easy to find respectful disagreement now. Because, for example, if you are a Democrat and you're like, oh, I want to see something on the other side, so you go to Breitbart.com.

[01:13:11]

And then after the first 10 seconds you feel deeply insulted by something, and it's not going to work. Or you take someone who votes Republican, and they go to something on the left, and they just get very offended very quickly by a deliberately ugly picture of Donald Trump on the front page or something — it doesn't really work. So this news aggregator also has a nuance slider, which you can pull to the right to make it easier to get exposed to more academic-style or more respectful portrayals of different views.

[01:13:47]

And finally — the one kind of bias I think people are mostly aware of is the left-right one, because it's so obvious, because both left and right are very powerful here. Right.

[01:13:58]

Both of them have well-funded TV stations and newspapers, and it's kind of hard to miss. But there's another one, the establishment slider, which is also really fun — I love to play with it. That's more about corruption. Because if you have a society where almost all the powerful entities want you to believe a certain thing, that's what you're going to read in both the big mainstream media on the left and on the right, of course. And powerful companies can push back very hard, like tobacco companies pushed back very hard back in the day when some newspapers started writing articles about tobacco being dangerous.

[01:14:40]

So it was hard to get a lot of coverage about it initially. And also, if you look at it geopolitically — of course, in any country, when you read their media, you're mainly going to be reading a lot of articles about how our country is the good guy and the other countries are the bad guys. So if you want a really more nuanced understanding — you know, the British used to be told that the French were the bad guys, and the French used to be told that the British were the bad guys.

[01:15:07]

And now they visit each other's countries a lot and have a much more nuanced understanding. I don't think there's going to be another war between France and Germany. But on the geopolitical scale it's just as much as ever — you know, a big cold war now, the US, China and so on. And if you want to get a more nuanced understanding of what's happening geopolitically, then it's really fun to look at this establishment slider, because it turns out there are tons of little newspapers, both on the left and on the right, who sometimes challenge the establishment and say, you know, maybe we shouldn't actually invade Iraq right now.

[01:15:43]

Maybe this weapons-of-mass-destruction thing is dubious. Because if you look at the journalism research afterwards, you can actually see it clearly: both CNN and Fox were very pro "let's get rid of Saddam, there are weapons of mass destruction." Then there were a lot of smaller newspapers that were like, wait a minute, this evidence seems a bit sketchy, maybe we shouldn't. But of course, they were hard to find.

[01:16:07]

Most people didn't even know they existed. Yet it would have been better for American national security if those voices had also come out — I think it actually harmed America's national security that we invaded Iraq. And arguably, there's a lot more interest in that kind of thinking, too, from those small sources.

[01:16:26]

So when you say "big," it's more about the reach of the broadcast, but it's not big in terms of the interest.

[01:16:37]

I think there's a lot of interest in that kind of anti-establishment skepticism, that kind of out-of-the-box thinking. Do you see this news project, or something like it, basically taking over the world as the main way we consume information? Like, how do we get there? The idea is brilliant — you're calling it your little project in 2020.

[01:17:09]

But how does that become the new way we consume information?

[01:17:14]

I hope, first of all, to just plant the seed, because normally the big barrier to doing anything in media is that you need a ton of money, but this costs no money at all. I'm just paying a tiny amount of money each month to Amazon to run the thing in their cloud. There are no ads and never will be any, and the point is not to make any money off of it. We just train machine learning algorithms to classify the articles and stuff.

[01:17:38]

So it just kind of runs by itself. And if it actually gets good enough at some point that it's catching on, it could scale. And if other people make carbon copies and other versions that are better, the more the merrier. I think there's a real opportunity for machine learning to empower the individual against the more powerful players. As I said in the beginning, it's been mostly the other way around so far — the big players have the AI, and then they tell people, this is the truth, this is how it is.

[01:18:15]

But it can just as well go the other way around. When the Internet was born, actually, a lot of people had this hope that maybe it would be a great thing for democracy, make it easier to find out about things — and maybe machine learning and things like this can actually help. And I have to say, I think it's more important than ever now, because this is also very linked to the whole future of life, as we discussed earlier: we're getting this ever more powerful tech.

[01:18:41]

You know, frankly, it's pretty clear if you look on the one-, two-, three-generation time scale that there are only two ways this can end geopolitically: either it ends great for all humanity, or it ends terribly for all of us. There's really no in-between. And we're all in it together, because, you know, technology knows no borders, and you can't have people fighting when the weapons just keep getting ever more powerful indefinitely.

[01:19:12]

Eventually, the luck runs out. And, you know — I love America, but the fact of the matter is, what's good for America is not, in the long term, opposed to what's good for other countries. It would be if this were some sort of zero-sum game, like it was thousands of years ago, when the only way one country could get more resources was to take land from other countries, because land was the basis of resources.

[01:19:42]

Look at the map of Europe: some countries kept getting bigger and smaller through endless wars. But since 1945 there hasn't been any war in Western Europe, and they all got way richer because of tech. So the optimistic outcome is that the big winner this century is going to be America and China and Russia and everybody else, because technology just makes us all healthier and wealthier, and we just find some way of keeping the peace on this planet.

[01:20:12]

But I think, unfortunately, there are some pretty powerful forces right now that are pushing in exactly the opposite direction and trying to demonize other countries, which just makes it more likely that this ever more powerful tech we're building is going to be used in disastrous ways — for aggression rather than cooperation, that kind of thing.

[01:20:30]

Yeah, even look at the military. I know, right? 2020 — it was so awesome to see these dancing robots, I love that. But one of the biggest growth areas in robotics now is, of course, autonomous weapons. And 2020 was like the best marketing year ever for autonomous weapons, because both in Libya — it's a civil war — and in Nagorno-Karabakh...

[01:20:56]

...they made the decisive difference, right? And everybody else is watching this: oh yeah, we want to build autonomous weapons too. In Libya...

[01:21:07]

...you had on one hand our ally, the United Arab Emirates, flying autonomous weapons that they bought from China, bombing Libyans, and on the other side you had our other ally, Turkey, flying their drones. And none of these other countries had any skin in the game.

[01:21:27]

And of course, it was the Libyans who really got screwed. And in Nagorno-Karabakh...

[01:21:31]

...you had, again, Turkey sending drones built by this company that was actually founded by a guy who went to MIT — the Bayraktar drones, you know? Yeah. So MIT has a direct responsibility for, ultimately, this.

[01:21:48]

And a lot of civilians were killed there, you know. And because it was militarily so effective, now suddenly there's a huge push: oh yeah, let's build ever more autonomy into these weapons, and it's going to be great. And I think people who are obsessed about some sort of future Terminator scenario should right now start focusing on the fact that we have two much more urgent threats from machine learning. One of them is the whole destruction of democracy that we've just talked about, where our flow of information is being manipulated by machine learning.

[01:22:29]

And the other one is that right now, you know, this is the year when the big out-of-control arms race in lethal autonomous weapons is either going to start or going to stop.

[01:22:40]

So you have a sense that 2020 was an instrumental catalyst for the autonomous weapons race.

[01:22:49]

Yeah, because it was the first year when they proved decisive on the battlefield. And these ones are still not fully autonomous; they're mostly remote-controlled. Right.

[01:22:58]

But, you know, we could very quickly make things about the size and cost of a smartphone, into which you just put the GPS coordinates, or the face of the person you want to kill, or a skin color or whatever, and it flies away and does it.

[01:23:14]

And the really good reason why the US and all the other superpowers should put the kibosh on this is the same reason we decided to put the kibosh on bioweapons. You know, we gave the Future of Life Award — which we can talk more about later — to Matthew Meselson from Harvard for convincing Nixon to ban bioweapons. And I asked him, how did you do it?

[01:23:39]

And he was like, well, I just said, look, we don't want there to be a five-hundred-dollar weapon of mass destruction that all our enemies can afford, even non-state actors. And Nixon was like, good point. You know, it's in America's interest that the powerful weapons are really expensive, so only we can afford them, or maybe some more stable adversaries, right? Nuclear weapons are like that. But bioweapons were not like that. That's why we banned them.

[01:24:11]

And that's why you never hear about them now. And that's why we love biology.

[01:24:15]

So you have a sense that it's possible for the big powerhouses, the big nations of the world, to agree that autonomous weapons are not a race we want to be in.

[01:24:27]

That it doesn't end well — because we know it's just going to end in mass proliferation, and every terrorist everywhere is going to have these super cheap weapons that they will use against us. And our politicians will have to constantly worry about being assassinated every time they go outdoors by some anonymous little mini drone. You know, we don't want that.

[01:24:47]

And even if the US and China and everyone else could just agree that you can only build these weapons if they cost at least 10 million bucks, that would be a huge win for the superpowers and, frankly, for everybody. People often push back and say, well, it's so hard to prevent cheating, but, hey, you could say the same about bioweapons. You know, take any of your MIT colleagues in biology.

[01:25:14]

Of course they could build some nasty bioweapon if they really wanted to. But first of all, they don't want to, because they think it's disgusting — because of the stigma. And second, even if there's some sort of nutcase who wants to, it's very likely that some of their students or someone would rat them out, because everyone else thinks it's so disgusting. And in fact, we now know there was even a fair bit of cheating on the bioweapons ban, but no countries used them, because it was so stigmatized that it just wasn't worth revealing that they had cheated.

[01:25:47]

People talk about drones, but they think of drones as remote operation — which they mostly are, yes. But they're not taking the next intellectual step of, where does this go? They're saying the problem with drones is that you're removing yourself from direct violence, and therefore you're not able to maintain the common humanity required to make proper strategic decisions. But that's the criticism — as opposed to, exactly as you said, if this is automated and there's a race, then the technology can get better and better, which means cheaper and cheaper, unlike perhaps nuclear weapons, which are tied to scarce resources in a way that makes them hard to get.

[01:26:38]

Yeah, it feels like there's too much overlap between the tech industry and autonomous weapons for them not to reach smartphone-type cheapness. If you look at drones, you know, for a thousand dollars you can have an incredible system that's able to maintain flight autonomously for you and take pictures and stuff. You could see that going into the autonomous weapons space. But why is that not thought about or discussed enough in public, do you think? You see those dancing Boston Dynamics robots and everybody reacts as if this is the far future.

[01:27:20]

They have this fear, like, oh, this will be Terminator in some, I don't know, unspecified 20, 30, 40 years. And they don't think about the fact that a much less dramatic version of that is actually happening now. It's not going to be legged, it's not going to be dancing, but it already has the capability to use artificial intelligence to kill humans. Yeah.

[01:27:46]

The Boston Dynamics-like robots — I think the reason we imagine them holding guns is just because we've all seen Arnold Schwarzenegger, right? That's our reference point, and it's pretty useless. That's not going to be the main military use of them. They might be useful in law enforcement in the future, and there's a whole debate about whether you want robots showing up at your house with guns, telling you what to do, perfectly obedient to whatever dictator controls them. But let's leave that aside for a moment and look at what's actually relevant now.

[01:28:16]

So there is a spectrum of things you can do with AI in the military. And again, to put my cards on the table, I'm not a pacifist; I think we should have good defense. So, for example, a Predator drone is basically a fancy little remote-controlled airplane. There's a human pilot, and the decision, ultimately, about whether to kill somebody with it is made by a human.

[01:28:44]

And this is a line I think we should never cross.

[01:28:49]

That's the current policy, again: you have to have a human in the loop. I think algorithms should never make life-or-death decisions; those should be left to humans.

[01:28:59]

Now, why might we cross that line? Well, first of all, these are expensive, right? So, for example, when Azerbaijan had all these drones and Armenia didn't have any, the Armenians started trying to jerry-rig little cheap things to fly around. But then, of course, they were jammed.

[01:29:18]

The Azeris would jam them — remote-controlled things can be jammed, and that makes them inferior. Also, there's a bit of a time delay: if you're piloting something from far away, there's the speed of light, and the human has a reaction time as well. It would be nice to eliminate that jamming possibility and the time delay by making it fully autonomous. But if you do, now you might be crossing that exact line.

[01:29:45]

You might program it: just, oh, dear drone, go hover over this country for a while, and whenever you find someone who is a bad guy, you know, kill them.

[01:29:57]

Now the machine is making these sorts of decisions. And some people who defend this will say, well, that's morally fine, because we are the good guys and we will tell it a definition of "bad guy" that we think is moral. But it would be very naive to think that if ISIS buys that same drone, they're going to use our definition of bad guy. Maybe for them, a bad guy is someone wearing a US Army uniform. Right.

[01:30:24]

Or maybe there will be some...

[01:30:29]

...ethnic group that decides that people of another ethnic group are the bad guys, right? The thing is, human soldiers, with all of our faults — we still have some basic wiring in us, like, no, it's not OK to kill kids and civilians. An autonomous weapon has none of that. It's just going to do whatever it's programmed to do. It's like the perfect Adolf Eichmann on steroids. They told Adolf Eichmann, you know, we want you to do this and this and this to make the Holocaust more efficient.

[01:31:01]

And he was like, OK, and off he went and did it, right? Do we really want to make machines that are like that — completely amoral, and that will take the user's definition of who is the bad guy? And do we then want to make them so cheap that all our adversaries can have them? Like, what could possibly go wrong? That's, I think, the big argument for why we want to, this year, really put the kibosh on this.

[01:31:28]

And you can tell there's a lot of very active debate going on, even within the US military, and undoubtedly in other militaries around the world, about whether we should have some sort of international agreement to at least require that...

[01:31:44]

...these weapons have to be above a certain size and cost, you know, so that things just don't totally spiral out of control. And finally, to your question: is it possible to stop it? Because some people tell me, oh, just give up.

[01:32:02]

But again — Matthew Meselson from Harvard, right, the bioweapons hero — people had exactly this criticism of the bioweapons ban. They were like, how can you check for sure that the Russians aren't cheating?

[01:32:18]

And he told me this, I think, really ingenious insight.

[01:32:24]

He said, you know, Max, some people think you have to have inspections and things, and you have to be able to catch the cheaters with 100 percent certainty. You don't need 100 percent.

[01:32:35]

He said one percent is usually enough, if it's another big state. Like, suppose China and the US have signed a treaty drawing a certain line and saying, yeah, these kinds of drones are OK, but these fully autonomous ones are not. Now suppose you are China, and you have cheated and secretly developed some clandestine little thing, or you're thinking about doing it. What's the calculation that you do?

[01:33:06]

OK, what's the probability that we're going to get caught? If the probability is 100 percent, of course we're not going to do it. But even if the probability is only five percent that we're going to get caught, it's going to be a huge embarrassment for us.

[01:33:20]

Yeah, and we still have our nuclear weapons anyway, so it doesn't really make an enormous difference in terms of deterring the US. And that feeds the stigma — you kind of establish this fabric, this universal stigma, over the thing.

[01:33:40]

Exactly.

[01:33:41]

It's very reasonable for them to say, well, you know, we'd probably get away with it, but if we don't, then the US will know we cheated, and then they're going to go full tilt with their own program and say, look, the Chinese are cheaters, and now we'll have all these weapons against us, and that's bad. So the stigma alone is very, very powerful. And again, look what happened with bioweapons, right? It's been 50 years now.

[01:34:01]

Yeah. When was the last time you read about a bioterrorism attack? The only deaths I really know about with bioweapons happened when we Americans managed to kill some of our own with anthrax — the idiot who sent it to Tom Daschle and others in letters, right? And similarly, in the Soviet Union, they had some anthrax in a lab there — maybe they were cheating, who knows — and it leaked out and killed a bunch of Russians.

[01:34:28]

I'd say that's a pretty good success rate: 50 years, just two own goals by the superpowers, and then nothing. And that's why, whenever I ask anyone what they think about biology, they think it's great. They associate it with new cures for diseases, maybe a good vaccine. This is how I want us to think about AI in the future — and I want others to think about it, too — as a source of all these great solutions to our problems.

[01:34:53]

Not as, oh, AI — yeah, that's the reason I feel scared going outside these days. Yes, it's kind of brilliant.

[01:35:02]

With bioweapons and nuclear weapons, we've figured it out. I mean, of course they're still a huge source of danger, but we figured out some way of creating rules and a social stigma around these weapons that then creates a stability — whatever that is, a game-theoretic stability. Exactly. And we don't have that with AI.

[01:35:24]

And you should shout it from the top of the mountain that we need to find that. Because, you know, as the Future of Life Institute awards point out, with nuclear weapons we could have destroyed ourselves quite a few times. And that's a learning experience that is very costly.

[01:35:54]

We gave this Future of Life Award the first time to this guy, Vasily Arkhipov. You know, most people haven't even heard of him. Yes — can you say who he is? Vasily Arkhipov.

[01:36:05]

He has, in my opinion, made the greatest positive contribution to humanity of any human in modern history. And maybe it sounds like hyperbole here, like I'm just over the top. But let me tell you the story and I think maybe you'll agree. So during the Cuban missile crisis.

[01:36:25]

We Americans first didn't know that the Russians had sent four submarines, but then we detected them, and we dropped practice depth charges on the one he was on, to try to force it to the surface. But we didn't know that this submarine was actually a nuclear submarine with a nuclear torpedo. We also didn't know that they had authorization to launch it without clearance from Moscow. And we also didn't know that they were running out of electricity.

[01:36:54]

The batteries were almost dead. They were running out of oxygen. Sailors were fainting left and right. The temperature was about 110, 120 Fahrenheit on board. It was really hellish conditions, a kind of doomsday. And at that point, these giant explosions start happening from the Americans dropping depth charges, and the captain thought World War Three had begun. They decided that they were going to launch the nuclear torpedo. And one of them shouted, you know, we're all going to die, but we're not going to disgrace our Navy.

[01:37:23]

You know, we don't know what would have happened if there had suddenly been a giant mushroom cloud against the Americans. But since everybody had their hands on the trigger, you don't have to be too creative to think that it could have led to an all-out nuclear war, in which case we wouldn't be having this conversation now. Right. What actually took place was that they needed three people to approve the launch. The captain had said yes; then there was the Communist Party political officer.

[01:37:49]

He also said, yes, let's do it. And the third man was this guy, Vasily Arkhipov, who said nyet. Yeah. For some reason he was just more chill than the others, and he was the right man at the right time. I don't want us as a species to rely on the right person being there at the right time. You know, we tracked down his family, living in relative poverty outside Moscow. He had passed away, so we flew his daughter to London.

[01:38:20]

They had never even been to the West. And it was incredibly moving to get to honor them for this. The next year, we gave the Future of Life Award to Stanislav Petrov. Have you heard of him? Yes. He was in charge of a Soviet early warning station, which was built with Soviet technology and, honestly, not that reliable. It said that there were five US missiles coming in. Again, if they had launched at that point, we probably wouldn't be having this conversation.

[01:38:48]

He decided, based mainly on gut instinct, to not escalate it. And I'm very glad he wasn't replaced by an AI that was just automatically following orders. And then we gave the third one to Matthew Meselson last year.

[01:39:06]

This year, we gave the award to these guys who actually used technology for good — not for avoiding something bad, but for doing something good: the guys who eliminated this disease, which is way worse than covid and which had killed half a billion people in its final century — smallpox. You mentioned it earlier: covid, on average, kills less than one percent of the people who get it; smallpox, about 30 percent. Ultimately it was Viktor Zhdanov and Bill Foege — most of my colleagues have never heard of either of them.

[01:39:42]

One American, one Russian — they made this amazing effort. Not only were they able to get the US and the Soviet Union to team up against smallpox during the Cold War, but Bill Foege came up with this ingenious strategy for making it actually go all the way and defeat the disease without funding for vaccinating everyone. And as a result, we haven't had any since. We went from 15 million deaths from smallpox the year I was born — and what do we have with covid now?

[01:40:11]

A little bit short of two million, right? Yes — versus zero deaths from smallpox, of course, this year and forever after. It's estimated that two hundred million people would have died since then from smallpox had it not been for this. Isn't science awesome when you use it for good? And the reason we want to celebrate these sorts of people is to remind us of this: science is so awesome when you use it for good.

[01:40:35]

And those awards — actually, the variety there paints a very interesting picture.

[01:40:40]

So the first two — it's kind of exciting to think that these, in some sense, average humans, products of the billions of other humans that came before them, of evolution, and some little thing — you said gut — but there's something in there that stopped the annihilation of the human race. And that's a magical thing, this deeply human thing.

[01:41:10]

And then there's the other aspect, which is also very human, which is to build a solution to the existential crises that we're facing — to take the responsibility, to come up with different technologies and so on. Yeah. And both of those are deeply human, the gut and the mind — and it's best when they work together.

[01:41:34]

Arkhipov — I wish I could have met him, of course, but he had passed away. He was really a fantastic military officer, combining all the best traits that we in America admire in our military. Because, first of all, he was very loyal. Of course, he never even told anyone about this during his whole life, even though you'd think he had some bragging rights, right? But he was just like, this is just business, doing my job.

[01:41:57]

It only came out later, after his death. And second, the reason he did the right thing was not because he was some sort of liberal, or because he was...

[01:42:09]

...just, oh, you know, peace and love. It was partly because he had been the captain of another submarine that had a nuclear reactor meltdown, and it was his heroism that helped contain it. That's also why he died of cancer later. And he had seen many of his crew members die. And I think that gave him this gut feeling that, you know, if there's a nuclear war between the US and the Soviet Union, the whole world is going to go through what I saw my dear crew members suffer through.

[01:42:39]

It wasn't just an abstract thing for him; I think it was real. And second, it wasn't just the gut — there was also the mind.

[01:42:46]

He had, for some reason, a very level-headed personality, and he was a very smart guy — which is exactly what we want our best fighter pilots to be, also. I'll never forget Neil Armstrong landing on the moon, almost running out of fuel, and when he's told "30 seconds," it doesn't even change the tone of his voice; he just keeps going. Arkhipov, I think, was just like that. So when the explosions started going off and his captain is screaming "nuke them" and all that, he's like...

[01:43:16]

"I don't think the Americans are trying to sink us. I think they're trying to send us a message." That's pretty badass. Yes — coolness. Because he said, if they wanted to sink us... And he said, listen, listen, it's alternating: one loud explosion on the left, one on the right, one on the left, one on the right. He was the only one who noticed this pattern. And he's like, I think this is just them trying to send us a signal...

[01:43:46]

...that they want us to surface, and they're not going to sink us. And somehow, this is how he then managed, ultimately, with his combination of gut and cool analytical thinking, to de-escalate the whole thing. And yeah, so this is the best in humanity. I guess, coming back to what we talked about earlier, it's the combination of the neural network with — yeah, I'm stretching the metaphor here, I guess. But he is one of my superheroes, having both the heart and the mind combined, especially in that moment.

[01:44:27]

There's something about that — I mean, in America, people are used to this kind of idea of being an individual, of thinking on your own. Yeah. I think under the Soviet Union, under communism, it was actually much harder to do that.

[01:44:43]

Oh, yeah. He didn't even get any accolades when he came back from this, right? They just wanted to hush the whole thing up.

[01:44:51]

Yeah. There are echoes of that which are noble.

[01:44:53]

That's a really hopeful thing — that amidst big centralized powers, whether it's companies or states, there's still the power of the individual to think on their own, to act.

[01:45:09]

But I think we need to think of people like this not as a panacea we can always count on, but rather as.

[01:45:18]

a wake-up call. You know, it's because of them, because of Arkhipov, that we are alive to learn this lesson — to learn that we shouldn't keep playing Russian roulette and almost having a nuclear war by mistake now and then, because relying on luck is not a good long-term strategy. If you keep playing Russian roulette over and over again, the probability of surviving just drops exponentially with time. Yeah, and if you have some probability of having an accidental nuclear war every year, the probability of not having one also drops exponentially.
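To make the exponential point concrete (the 1% annual figure below is an assumed number purely for illustration, not an estimate from the conversation):

```latex
P(\text{no accidental nuclear war in } N \text{ years}) = (1 - p)^{N},
\qquad \text{e.g. } p = 0.01:\quad (0.99)^{100} \approx 0.37,\quad (0.99)^{500} \approx 0.0066
```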

[01:45:46]

I think we can do better than that. So I think the message is very clear.

[01:45:51]

Once in a while, shit happens, and there are a lot of very concrete things we can do to reduce the risk of things like that happening in the first place. On the AI front, just to linger on it: you're friends with Elon Musk, you've talked with him, and you've done a lot of interesting things together.

[01:46:14]

He has a set of fears about the future of artificial intelligence.

[01:46:19]

AGI. Do you have a sense — we've already talked about the things we should be worried about with AI — do you have a sense of the shape of his fears in particular, which subset of what we've talked about? Whether it's that direction of creating these giant computational systems that are not explainable, not intelligible intelligence; or, as a branch of that, the manipulation of that by big corporations, or by individual evil people using it for destruction; or the unintentional consequences?

[01:47:03]

Do you have a sense of where his thinking is on this?

[01:47:05]

From my many conversations with Elon, yeah, I certainly have a model of how he thinks. It's actually very much like the way I think also; I'll elaborate on it a bit. I just want to push back on when you said evil people — I don't think that's a very helpful concept, evil people. Yes.

[01:47:25]

Sometimes people do very, very bad things, but they usually do it because they think it's a good thing — because somehow other people had told them it was a good thing, or given them incorrect information, or whatever. Right. I believe in the fundamental goodness of humanity: if we educate people well, and they find out how things really are, people generally want to do good and be good. So it's value alignment...

[01:47:56]

Yes, and for me, it's about knowledge. And then, once we have that, we'll likely be able to do good in a way that's aligned with everybody else. Yeah. And it's not just the individual people we have to align.

[01:48:10]

So we don't just want people to be educated, to know the way things actually are, and to treat each other well. We also need to align other, non-human entities. We talked about corporations; there have to be institutions, so that what they do is actually good for the country they're in. And we should make sure that what countries do is actually good for the species as a whole, etc.

[01:48:33]

Coming back to Elon — yeah, my understanding of how he sees this is that it's really quite similar to my own, which is one of the reasons I like him so much and enjoy talking with him so much. I think he's quite different from most people in that he thinks much more than most people about the really big picture — not just what's going to happen in the next election cycle, but in millennia, millions and billions of years from now.

[01:49:02]

And when you look from this more cosmic perspective, it's so obvious that we are gazing out into this universe that, as far as we can tell, is mostly dead, with life being an almost imperceptibly tiny perturbation. And he sees this enormous opportunity for the universe to come alive, for us to become an interplanetary species — Mars is obviously just the first stop on this cosmic journey. And precisely because he thinks more long-term, it's much more clear to him than to most people that this Russian roulette thing we keep playing with our nukes is a really poor, really reckless strategy, and also that just building these ever more powerful AI systems that we don't understand...

[01:49:47]

...is also a really reckless strategy. I feel Elon is very much a humanist, in the sense that he wants an awesome future for humanity. He wants it to be us who control the machines, rather than the machines who control us. Yes. And why shouldn't we insist on that? We're building them, after all.

[01:50:08]

Right. Why should we build things that just turn us into some little cog in the machinery that has no further say in the matter? That's not my idea of an inspiring future either. Yeah — if you think on the cosmic scale, in terms of both time and space, so much is put into perspective.

[01:50:28]

Yeah, whenever I have a bad day, that's what I think about. It makes me feel better.

[01:50:34]

Well, it makes me sad that for us individual humans, at least for now, the ride ends too quickly — that we don't get to experience the cosmic scale.

[01:50:45]

Yeah. I mean, I think of our universe sometimes as an organism that has only begun to wake up a tiny bit — just like the very first little glimmers of consciousness you have in the morning when you start coming around, before the coffee, even before you get out of bed, before you even open your eyes. You start to wake up a little bit.

[01:51:05]

There's something there — that's very much how I think of where we are. You know, all those galaxies out there — I think they're really beautiful. But why are they beautiful? They're beautiful because conscious entities are actually observing them, experiencing them through our telescopes.

[01:51:25]

You know, I define consciousness as subjective experience, whether it be colors or emotions or sounds. So beauty is an experience, meaning is an experience, purpose is an experience. If there were no conscious experience of observing these galaxies, they wouldn't be beautiful. If we do something dumb with advanced AI in the future here, and Earth-originating life goes extinct...

[01:51:54]

...and that was it — if there is nothing else with telescopes in our universe — then it's kind of game over for beauty and meaning and purpose in our whole universe. And, you know, it would be just such an opportunity lost, frankly.

[01:52:07]

And when Elon points this out, he gets very unfairly maligned in the media, for all the dumb media-bias reasons we talked about. Right — they want to print precisely the things about Elon, out of context, that are really clickbaity. Like, he has gotten so much flak for his "summoning the demon" statement.

[01:52:29]

Yeah. I happen to know exactly the context, because I was in the front row when he gave that talk.

[01:52:35]

It was at MIT — you'll be pleased to know. It was the AeroAstro anniversary. They had Buzz Aldrin there from the moon landing, the whole Kresge Auditorium packed with MIT students. And he had this amazing Q&A; it might have gone on for an hour. They talked about rockets and Mars and everything. At the very end, this one student, who was actually in my class, asked him, "What about AI?" Elon makes this one comment, and they take it out of context.

[01:53:03]

It goes viral — "with AI we are summoning the demon," something like that — and they try to cast him as some sort of doom-and-gloom dude, you know.

[01:53:12]

Yeah. You know Elon — he is not a doom-and-gloom dude. He is such a positive visionary. And the whole reason he warns about this is because he realizes, more than most, what the opportunity cost of screwing up is — that there is so much awesomeness in the future that we and our descendants can enjoy if we don't screw up. Right. I get so pissed off when people try to cast him as some sort of...

[01:53:38]

...technophobic Luddite. At this point, it's kind of ludicrous when I hear people say that people who worry about artificial general intelligence are Luddites, because, of course, if you look more closely, some of the most outspoken people making these warnings are people like Professor Stuart Russell from Berkeley, who has written the best-selling AI textbook, you know. So claiming that he is a Luddite who doesn't understand AI — the joke is really on the people who say it.

[01:54:12]

But I think, more broadly, this message really has not sunk in at all — what it is that people are worried about. They think that Elon and Stuart Russell and others are worried about the dancing robots picking up an AR-15 and going on a rampage. They think they're worried about robots turning evil. They're not. I'm not. The risk is not malice, it's competence. The risk is just that we build systems that are incredibly competent, which means they're always going to get their goals accomplished, even if their goals clash with our goals.

[01:54:49]

That's the risk.

[01:54:51]

Why did we humans drive the West African black rhino extinct? Is it because we're malicious, evil rhinoceros haters?

[01:55:00]

No, it's just because our goals didn't align with the goals of those rhinos.

[01:55:05]

And tough luck for the rhinos, you know.

[01:55:08]

So the point is just: we don't want to put ourselves in the position of those rhinos, creating something more powerful than us, if we haven't first figured out how to align the goals. And I am optimistic. I think we could do it if we worked really hard on it, because I spent a lot of time around intelligent entities that were more intelligent than me — my mom and my dad — when I was little, and that was fine, because their goals were actually aligned with mine quite well. But we see today many examples where the goals of our powerful systems are not so aligned.

[01:55:43]

Those click-through optimization algorithms that polarized social media were actually pretty poorly aligned with what was good for democracy, it turned out. And again, almost all the problems we've had with machine learning so far came not from malice but from poor alignment. And that's exactly what we should be concerned about in the future.

[01:56:05]

Do you think it's possible, with systems like Neuralink and brain-computer interfaces — you know, again thinking on the cosmic scale, Elon has talked about this, but others have as well throughout history — to figure out the exact mechanism for achieving that kind of alignment? So one approach is having a symbiosis with AI, which is coming up with clever ways where we're, like, stuck together in this weird relationship, whether it's biological or in some other way.

[01:56:40]

Do you think there's a possibility of having that kind of symbiosis? Or do we want to instead focus on distinct entities — us humans talking to these intelligible, self-doubting AIs, maybe the way Stuart Russell thinks about it, where they're self-doubting and full of uncertainty, and therefore...

[01:57:04]

...we communicate back and forth and in that way achieve symbiosis? I honestly don't know. I would say that because we don't know for sure which, if any, of our ideas will work. But I'm pretty convinced that if we don't get any of these things to work and just forge ahead, then our species is probably going to go extinct this century.

[01:57:29]

This century? You think we're facing this crisis as a 21st-century crisis — like, this century will be remembered, whether on hard drives somewhere or by future generations, and there will be future Future of Life Institute awards for people who have done something about it.

[01:57:55]

Or, even worse, we're not superseded by leaving any AI behind either — we just totally wipe ourselves out, you know, like on Easter Island.

[01:58:04]

This century is long, you know — there are still seventy-nine years left of it. I mean, think about how far we've come just in the last 30 years.

[01:58:15]

We can talk more about what might go wrong, but you asked me this really good question about what the best strategy is. Is it Neuralink, or Russell's approach, or whatever? I think, you know, when we did the Manhattan Project, we didn't know if any of our four ideas for enriching uranium and getting out the uranium-235 were going to work, but we felt this was really important to get it before Hitler did. So you know what we did?

[01:58:43]

We tried all four of them. Here, I think it's analogous: this is the greatest threat that's ever faced our species — and, by implication, national security too, of course. We don't have any method that's guaranteed to work, but we have a lot of ideas, so we should invest pretty heavily in pursuing all of them with an open mind and hope that at least one of them works.

[01:59:06]

The good news is the century is long, you know, and it might take decades until we have artificial general intelligence, so we have some time, hopefully. But it takes a long time to solve these very, very difficult problems — it may actually be the most difficult problem we've ever tried to solve as a species — so we have to start now, rather than beginning to think about it the night before some people who've had too much Red Bull decide to switch it on.

[01:59:34]

And, coming back to your question, we have to pursue all of these different avenues and see. So if you're my investment adviser and I was trying to invest in the future —

[01:59:45]

How do you think the human species is most likely to destroy itself in this century? Yeah.

[01:59:55]

And so if many of the crises we're facing are really before us within the next hundred years, how do we make explicit, make known, the unknowns, and solve those problems to avoid the biggest — starting with the biggest — existential crises?

[02:00:18]

As your investment adviser, how are you planning to make money on us destroying ourselves? I don't know.

[02:00:24]

Might be the Russian origins that are somehow involved. At the micro level of detailed strategies, then:

[02:00:30]

Of course, these are unsolved problems for alignment. We can break it into three sub problems that are all unsolved.

[02:00:39]

I think, you know, you want to make machines understand our goals, then adopt our goals, and then retain our goals.

[02:00:49]

So to hit on all three real quickly: the problem when Andreas Lubitz told his autopilot to fly into the Alps was that the computer didn't even understand anything about his goals.

[02:01:04]

Right. It was too dumb.

[02:01:06]

It could have understood, actually, but we would have had to put some effort in as the system designers: don't fly into mountains.

[02:01:14]

So that's the first challenge.

[02:01:15]

How do you program human values, human goals, into computers? Rather than saying, oh, it's so hard, we should start with the simple stuff, as I said — self-driving cars, airplanes — just put in all the goals that we already agree on, and then make a habit, whenever machines get smarter so they can understand one level higher goals, of putting those in too. The second challenge is getting them to adopt the goals.
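As a very rough illustration of that first step — hard-coding a goal we already all agree on, like "don't fly into mountains" — here is a minimal sketch. Everything in it (the function names, the clearance margin, the terrain lookup) is hypothetical and purely illustrative, not drawn from any real avionics system:

```python
# Minimal sketch: a hard-coded, agreed-upon goal ("don't fly into terrain")
# applied as a check before the autopilot accepts a commanded altitude.

MIN_TERRAIN_CLEARANCE_M = 300.0  # assumed safety margin, purely illustrative


def terrain_elevation_m(lat: float, lon: float) -> float:
    """Stand-in for a terrain database lookup (hypothetical)."""
    return 2500.0  # pretend the aircraft is over the Alps


def accept_altitude_command(lat: float, lon: float, commanded_altitude_m: float) -> bool:
    """Refuse any commanded altitude that violates the hard-coded clearance goal."""
    floor = terrain_elevation_m(lat, lon) + MIN_TERRAIN_CLEARANCE_M
    return commanded_altitude_m >= floor


# A command to descend to 100 m over 2500 m terrain is simply rejected.
print(accept_altitude_command(45.9, 6.9, 100.0))   # False -> refuse
print(accept_altitude_command(45.9, 6.9, 4000.0))  # True  -> accept
```

The point of the sketch is only that the "understand our goals" step can start with explicitly programmed constraints before anything is learned.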

[02:01:46]

It's easier for situations like that — you just program it in. But when you have self-learning systems, like children, you know, any parent knows there is a difference between getting our kids to understand what we want them to do and getting them to actually adopt our goals. With humans, with children, fortunately, they go through this phase: first they're too dumb to understand what our goals are, then they have this period of some years when they're both smart enough to understand them and malleable enough that we have a chance to raise them well, and then they become teenagers and it's kind of too late.

[02:02:24]

We have this window with machines too; the challenge is that the intelligence might grow so fast that the window is pretty short. So that's a research problem. The third one is: how do you make sure they keep the goals if they keep learning more and getting smarter? Many sci-fi movies are about how you have something which initially was aligned, but then things kind of go off the rails.

[02:02:45]

And, you know, my kids were very, very excited about their Legos when they were little. Now they're just gathering dust in the basement. If we create machines that are really on board with the goal of taking care of humanity, we don't want them to get as bored with us as my kids got with their Legos. So this is another research challenge: how can you make some sort of recursively self-improving system retain certain basic goals?

[02:03:13]

That said, a lot of adult people still play with Legos and maybe we succeeded with Legos.

[02:03:19]

I like your optimism. But not all AI systems have to maintain the goals, right? Just some fraction? Yeah. So there are a lot of talented researchers now who have heard of this and want to work on it — not so much funding for it yet. Of the billions that go into building AI more powerful, it's only a minuscule fraction so far that goes into safety research. My attitude is generally that we should not try to slow down the technology, but we should greatly accelerate the investment in this sort of safety research — and also make sure the money actually gets there. This was very embarrassing last year.

[02:03:54]

You know, the NSF decided to give out six of these big institutes. We got one of them, for the AI and science you asked me about. Another one was supposed to be for AI safety research, and they gave it to people studying oceans and climate and stuff. I'm all for studying oceans and climate, but we need to have some money that actually goes into AI safety research and doesn't just get grabbed by whatever else.

[02:04:22]

That's a fantastic investment. And then, at the higher level, you asked this question: OK, what can we do? What are the biggest risks? I think we cannot consider this to be only a technical problem, again, because if you solve only the technical problem — can I play with your robot? Yes — if we get our machines to just blindly obey the orders we give them, so we can always trust that they will do what we want,

[02:04:51]

that might be great for the owner of the robot, but it might not be so great for the rest of humanity if that person is your least favorite world leader, or whoever you can imagine. Right?

[02:05:02]

So we also have to apply alignment not just to machines, but to all the other powerful structures. That's why it's so important to strengthen our democracy again, as I said — to have institutions, make sure the playing field is not rigged so that corporations have the right incentives to do things that both make profit and are good for people, and make sure countries have incentives to do things that are both good for their people and don't screw up the rest of the world.

[02:05:32]

And this is not just something for AI nerds to geek out on. This is an interesting challenge for political scientists, economists, and so many other thinkers.

[02:05:42]

So one of the magical things that perhaps makes this Earth quite unique is that it's home to conscious beings. You mentioned consciousness — perhaps as a small aside, because we didn't really get specific about how we might do the alignment; like you said, it's just a really important research problem. But do you think engineering consciousness into AI systems is a possibility, something we might one day do? Or is there something about consciousness that is fundamental to humans and humans only?

[02:06:28]

I think it's possible. I think both consciousness and intelligence are information processing — certain types of information processing — and that fundamentally it doesn't matter whether the information is processed by carbon atoms in neurons in brains or by silicon atoms and so on in our technology. Some people disagree.

[02:06:53]

This is what I think as a physicist.

[02:06:57]

So you said consciousness is information processing — meaning, you know, I think you have a quote, something like "it's information knowing itself," that kind of thing.

[02:07:13]

I think consciousness is the way information feels when it's being processed in certain complex ways — and we don't know exactly what those complex ways are. It's clear that most of the information processing in our brains does not create an experience; we're not even aware of it, right? Like, for example, you're not aware of your heartbeat regulation right now, even though it's clearly being done by your body, right? It's just kind of doing its own thing. When you go jogging —

[02:07:39]

there's a lot of complicated stuff about how you put your foot down, and we know it's hard — that's why robots used to fall over so much — but you're mostly unaware of it. Your brain just has your CEO, your consciousness module, send an email: hey, you know, I'm going to keep jogging along this path; the rest is on autopilot, right? And so most of it is not conscious, but somehow some of the information processing is — and we don't know exactly which.

[02:08:07]

I think this is a science problem. I hope one day we'll have some equation or something for it, so we'll be able to build a consciousness detector and say, yeah, there is some consciousness here, there isn't over there — oh, don't boil that lobster because it's feeling pain, or, it's OK because it's not feeling pain.

[02:08:25]

Right now we treat this as sort of just metaphysics, but it would be very useful in emergency rooms to know if a patient has locked-in syndrome and is conscious, or if they are actually just out. And in the future, if you build a very, very intelligent helper robot to take care of you, I think you'd like to know if you should feel guilty about shutting it down, or if it's just like a zombie going through the motions, like a fancy tape recorder.

[02:08:54]

Right. And once we can make progress on the science of consciousness and figure out what is conscious and what isn't —

[02:09:06]

then, assuming we want to create positive experiences and not suffering, we'll probably choose to build some machines that are deliberately unconscious, that do, you know, incredibly boring, repetitive jobs in a mine somewhere or whatever. And maybe we'll choose to create helper robots for the elderly that are conscious, so that people don't just feel creeped out that the robot is faking it when it acts like it's sad or happy.

[02:09:39]

I think everybody gets pretty deeply lonely in this world, and so there's a place, I think, for everybody to have a connection with conscious beings, whether they're human or otherwise. But I know for sure that if I had a robot, if I was going to develop any kind of personal emotional connection with it, I would be very creeped out if I knew at an intellectual level that the whole thing was just a fraud.

[02:10:05]

Today, you can buy a little talking doll for a kid which will say things, and that little child will often think that it is actually conscious and even tell real secrets to it — which then go onto the Internet, with all sorts of creepy repercussions.

[02:10:18]

You know, I would not want to be just hacked and tricked like this. If I was going to develop real emotional connections with a robot, I would want to know that it's actually real — that it's acting conscious, acting happy, because it actually feels it. And I think this is not sci-fi.

[02:10:36]

I think it's possible to measure, to come up with tools to understand the science of consciousness. You're saying we will be able to come up with tools that can measure consciousness and definitively say, like, this thing is experiencing the things it says it's experiencing? Kind of by definition — if it is a physical phenomenon, information processing —

[02:10:57]

and we know that some information processing is conscious and some isn't, well, then there is something there to be discovered with the methods of science. Giulio Tononi has stuck his neck out the farthest and written down some equations for a theory. Maybe it's right, maybe it's wrong — we certainly don't know. But I applaud that kind of effort, to say that this is not just something philosophers can have a beer and muse about, but something we can measure.

[02:11:23]

We can study and bring that back to us.

[02:11:26]

I think what we would probably choose to do, as I said, once we can figure this out, is to be quite mindful about what sort of consciousness, if any, we put into the different machines that we have.

[02:11:39]

And certainly we wouldn't want to make — we should not be making — machines that suffer without us even knowing it, right? And if at any point someone decides to upload themselves, like Ray Kurzweil wants to do — I don't know if you've had him on your show.

[02:11:56]

We agreed, but then COVID happened. So we're waiting it out a little bit.

[02:11:59]

Suppose he uploads himself into this Robo-Ray, which talks like him and acts like him and laughs like him, and before he powers off his biological body — he would probably be pretty disturbed to realize that there's no one home, that this robot is not having any subjective experience, right? If humanity gets replaced by machine descendants which do all these cool things and build spaceships and go to intergalactic rock concerts, and it turns out that they are all unconscious, just going through the motions —

[02:12:37]

wouldn't that be like the ultimate zombie apocalypse? Just a play for empty benches.

[02:12:43]

Yeah. I have a sense that once you understand consciousness, we'll understand that there's some kind of continuum, and there would be a greater appreciation. And we'll probably understand, just like you said — it would be unfortunate if it's a trick — we'll probably understand that love is indeed a trick that we play on each other, that we humans convince ourselves we're conscious, but really, you know, us and trees and dolphins are all the same kind of thing. Let me try to cheer you up a little bit with a philosophical thought here about the love part.

[02:13:15]

Yes, let's. You know, you might say, OK, love is just a collaboration enabler.

[02:13:23]

And then and then maybe you can go and get depressed about that.

[02:13:27]

But I think that would be the wrong conclusion, actually.

[02:13:30]

You know, I know that the only reason I enjoy food is because my genes hacked me, and they don't want me to starve to death — not because they care about me consciously enjoying the succulent delights of pistachio ice cream; they just want me to make copies of them. So in a sense, the whole enjoyment of food is also a scam like this. But does that mean I shouldn't take pleasure in this pistachio ice cream?

[02:13:58]

I love pistachio ice cream, and I can tell you, I enjoy it every bit as much even though I scientifically know exactly why — what this is. So your genes really appreciate that you like the pistachio ice cream.

[02:14:16]

Well, but my mind appreciates it too, you know, and I have a conscious experience right now. Ultimately, all of my brain is also just something the genes built to copy themselves. But so what? You know, I'm grateful —

[02:14:28]

Yeah, thanks genes for doing this. But, you know, now it's my brain that's in charge here and I'm going to enjoy my conscious experience. Thank you very much.

[02:14:35]

And not just pistachio ice cream, but also the love I feel for my amazing wife and all the other delights of being conscious.

[02:14:44]

Actually, Richard Feynman, I think, said this so well.

[02:14:50]

He's also the guy who really got me into physics. Some artist friend of his said that science is kind of the party pooper — it kind of ruins the fun. Like, you have a beautiful flower, says the artist, and then the scientist is going to deconstruct it into just a blob of quarks and electrons. And Feynman pushed back on that in such a beautiful way, which I think can also be used to push back and make you not feel guilty about falling in love.

[02:15:18]

So here's what Feynman basically said. He said to his friend, you know, yeah, I can also, as a scientist, see that this is a beautiful flower, thank you very much. Maybe I can't draw as good a painting as you, because I'm not as talented an artist, but I can really see the beauty in it.

[02:15:32]

And it just also looks beautiful to me.

[02:15:34]

But in addition to that, Feynman said, as a scientist I see even more beauty that the artist did not see. Suppose this is a flower on a blossoming apple tree.

[02:15:46]

You can say this tree has more beauty in it than just the colors and the fragrance. This tree is made of air, Feynman wrote — this is one of my favorite Feynman quotes ever — and it took the carbon out of the air and bound it in, using the flaming heat of the sun, to turn the air into a tree. And when you burn logs in your fireplace, it's really beautiful to think that this is being reversed: the wood is going back into the air, and in this flaming, beautiful dance of the fire that the artist can see is the flaming light of the sun that was bound in to turn the air into a tree.

[02:16:24]

And then the ash is the little residue that didn't come from the air, that the tree sucked out of the ground. You know, Feynman said, these are beautiful things, and science just adds; it doesn't subtract. And I feel exactly that way about love and about pistachio ice cream too: I can understand that there is even more nuance to the whole thing. At the very visceral level, you can fall in love just as much as someone who knows nothing about neuroscience.

[02:16:53]

But you can also appreciate this even greater beauty in it. Isn't it remarkable that it came about from this completely lifeless universe — just a hot blob of plasma expanding — and then, over the eons, you know,

[02:17:10]

gradually, first the strong nuclear force combined quarks together into nuclei, and then the electric force bound in electrons and made atoms, and then they clustered from gravity and you got planets and stars and this and that. And then natural selection came along and the genes had their little thing, and you went from what seemed like a completely pointless universe that was just trying to increase entropy and approach heat death to something that looked more goal-oriented. Isn't that kind of beautiful?

[02:17:38]

And then this goal-orientedness, this evolution, got ever more sophisticated, and then you started getting this thing which is kind of like MuZero on steroids. The ultimate self-play is not what MuZero does against itself to get better at Go — it's what all these little quark blobs did against each other in the game of survival of the fittest. Now, when you had really dumb bacteria living in a simple environment, there wasn't much incentive to get intelligent.

[02:18:11]

But then life made the environment more and more complex, and then there was more incentive to get even smarter, and that gave the other organisms more incentive to also get smarter. And here we are now: just like MuZero learned to become world master at Go just by playing against itself, all the quarks and electrons here on our planet, by playing against each other, have created the giraffes and elephants and humans and pistachios.

[02:18:43]

I just find that really beautiful. And I mean, that just adds to the enjoyment of love. It doesn't subtract anything.

[02:18:51]

Do you feel a little better? I feel way better. That was incredible — this self-play of quarks. Taking it back to the beginning of our conversation a little bit: there are so many exciting possibilities for artificial intelligence understanding the basic laws of physics. Do you think AI will help us unlock —

[02:19:12]

There's been quite a bit of excitement throughout the history of physics about coming up with more and more general, simple laws that explain the nature of our reality — and the ultimate of that would be a theory of everything that combines everything together. Do you think it's possible that not only we humans, but perhaps AI systems, will figure out a theory of physics that unifies all the laws of physics? Yeah, I think it's absolutely possible. I think it's very clear that we're going to see a great boost to science.

[02:19:50]

We're already seeing a boost, actually, from machine learning helping science. AlphaFold is an example — the decades-old protein folding problem, solved. And gradually, yeah, unless we go extinct by doing something dumb like we discussed, I think it's very likely that our understanding of physics will become so good that our technology will no longer be limited by human intelligence, but instead be limited by the laws of physics. Our tech today is limited by what we've been able to invent.

[02:20:27]

Right. I think as AI progresses, it'll just be limited by the speed of light and other physical limits, which means dramatically beyond, you know, where we are now. Do you think it's a fundamentally mathematical pursuit — trying to understand the laws that govern our universe from a mathematical perspective, almost like the AI is exploring the space of theorems and those kinds of things? Or is there some other way?

[02:21:01]

Or are there some other, more computational ideas, more sort of empirical ideas? It's both.

[02:21:07]

I would say it's really interesting to look out at the landscape of everything we call science today. Here we come with this big new hammer that says "machine learning" on it, and we ask: where is there something we can hammer with it, something it can help with? Ultimately, if machine learning gets to the point where it can do everything better than us, it will be able to help across the whole space of science. But maybe we can anchor this by starting a little bit right now, near term, and see how we kind of move forward.

[02:21:37]

So right now, first of all, you have a lot of big-data science where, for example, with telescopes, we are able to collect way more data every hour than a grad student can pore over like in the old times. And machine learning is already being used very effectively, even at MIT, to find planets around other stars, to detect exciting new signatures of new particle physics in the sky, to detect the ripples in the fabric of spacetime that we call gravitational waves, caused by enormous black holes crashing into each other halfway across the observable universe.

[02:22:15]

Machine learning is running and ticking right now, doing all these things, and it's really helping all these experimental fields.

[02:22:23]

There is a separate frontier of physics — computational physics — which is also getting an enormous boost.

[02:22:31]

We used to have to do all our computations by hand, right? People had these giant books with tables of logarithms, and, oh my God, it pains me to even think how long it would have taken to do simple stuff.

[02:22:45]

Then we started to get little calculators and computers that could do some basic math for us. What we're starting to see now is a kind of shift from GOFAI computational physics to neural-network computational physics. What I mean by that is that most computational physics used to be done by humans programming the intelligence of how to do the computation into the computer — just as when Garry Kasparov got his posterior kicked by IBM's Deep Blue in chess, humans had programmed in exactly how to play chess.

[02:23:25]

The intelligence came from the humans; it wasn't learned. MuZero can beat not only Kasparov in chess, but also Stockfish, which is the best sort of GOFAI chess program, by learning. And we're seeing more of that now beginning to happen in physics.

[02:23:44]

So let me give you an example. Lattice QCD is an area of physics whose goal is basically to take the periodic table and just compute the whole thing from first principles. Mm hmm.

[02:23:57]

This is not the search for a theory of everything. We already know the theory that's supposed to produce the output — the periodic table: which atoms are stable, how heavy they are, all that good stuff, their spectral lines. The theory is called QCD; you can put it on your T-shirt. Our colleague Frank Wilczek got the Nobel Prize for working on it. But the math is just too hard for us to solve.

[02:24:22]

We have not been able to start with these equations and solve them to the extent that we can predict: oh yeah, there is carbon, and this is what the spectrum of the carbon atom looks like. But awesome people are building these supercomputer simulations where you just put in these equations and make a big cubic lattice of space — actually a very small lattice, because you're getting down to the subatomic scale — and you try to solve it.

[02:24:52]

But it's just so computationally expensive that we still haven't been able to calculate things as accurately as we measure them in many cases.

[02:25:00]

And now machine learning is really revolutionizing this. My colleague Phiala Shanahan at MIT, for example, has been using this really cool machine learning technique called normalizing flows, where she's realized she can actually speed up the calculation dramatically by having the machine learn how to do things faster.
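Very loosely, the idea behind that family of methods is to replace an expensive sampler with a cheap, trainable proposal distribution and then correct it with an exact accept/reject step, so the physics stays unbiased even if the proposal is imperfect. Here is a toy one-dimensional sketch of that "cheap proposal plus exact correction" pattern — it uses a fitted Gaussian standing in for a trained normalizing flow and a made-up "action", and is not the actual lattice QCD setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def action(x):
    """Toy 'action' S(x); the target density is proportional to exp(-S(x))."""
    return x**4 - 2.0 * x**2            # double-well, purely illustrative

def log_target(x):
    return -action(x)

# Step 1: fit a cheap proposal (a Gaussian matched via a rough pilot run,
# standing in for a trained normalizing flow).
pilot = rng.normal(0.0, 1.5, size=20_000)
w = np.exp(log_target(pilot) - (-0.5 * (pilot / 1.5) ** 2))   # importance weights
mu = np.average(pilot, weights=w)
sigma = np.sqrt(np.average((pilot - mu) ** 2, weights=w))

def log_proposal(x):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

# Step 2: independence Metropolis with the cheap proposal; the accept/reject
# step keeps the samples exact for the target however imperfect the proposal is.
samples, x = [], mu
for _ in range(50_000):
    y = rng.normal(mu, sigma)
    log_alpha = (log_target(y) - log_target(x)) + (log_proposal(x) - log_proposal(y))
    if np.log(rng.random()) < log_alpha:
        x = y
    samples.append(x)

print("mean |x| of samples:", np.mean(np.abs(samples)))
```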

[02:25:21]

Another area like this, where we suck up an enormous amount of supercomputer time to do physics, is black hole collisions. Now that we've done the sexy stuff of detecting a bunch of them with LIGO and other experiments, we want to be able to know what we're seeing. And it's a very simple conceptual problem — the two-body problem. Newton solved it for classical gravity hundreds of years ago, but the two-body problem is still not fully solved for black holes

[02:25:54]

in Einstein's gravity, because they won't just orbit each other forever anymore like two Newtonian things — they give off gravitational waves and eventually crash into each other. And the game is: you want to figure out, OK, what kind of wave comes out as a function of the masses of the two black holes, as a function of how they're spinning relative to each other, et cetera. And that is so hard.

[02:26:17]

It can take months of supercomputer time and massive numbers of cores to do it. Wouldn't it be great if you could use machine learning to speed that up? Right now, you can use the expensive old GOFAI calculation as the ground truth and then see if machine learning can figure out a smarter, faster way of getting the right answer. Yet another area of computational physics — and these are probably the big three that suck up the most computer time:

[02:26:49]

lattice QCD, black hole collisions, and cosmological simulations — where, instead of taking something subatomic and trying to figure out the mass of the proton, you take something enormous and try to look at how all the galaxies get formed in it. Oh, yeah.

[02:27:06]

There, again, there are a lot of very cool ideas right now about how you can use machine learning to do this sort of stuff better.

[02:27:15]

The difference between this and the big data is you kind of make the data yourself, right?
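That "make the training data yourself from the expensive calculation, then fit a cheap surrogate" pattern can be sketched roughly like this. The `expensive_simulation` function and every number below are toy stand-ins of my own, not a real numerical-relativity code or waveform model:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_simulation(m1, m2):
    """Toy stand-in for an expensive simulation run: returns one summary
    number as a function of the two masses (a chirp-mass-like combination)."""
    return (m1 * m2) ** (3.0 / 5.0) / (m1 + m2) ** (1.0 / 5.0)

# Step 1: spend the "supercomputer time" once to build a training set.
m1 = rng.uniform(5.0, 50.0, size=500)
m2 = rng.uniform(5.0, 50.0, size=500)
y = expensive_simulation(m1, m2)

# Step 2: fit a cheap surrogate (ridge regression on polynomial features).
def features(a, b):
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2, a**2 * b, a * b**2])

X = features(m1, m2)
lam = 1e-6
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Step 3: the surrogate now answers new queries essentially for free.
test = features(np.array([30.0]), np.array([25.0]))
print("surrogate:", float(test @ w), " truth:", expensive_simulation(30.0, 25.0))
```

The design point is exactly the one made in the conversation: the slow, trusted calculation plays the role of ground truth, and the learned model only has to interpolate between the points you chose to generate.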

[02:27:20]

So then, finally, looking over the physics landscape and seeing what we can hammer with machine learning: we talked about experimental big data, discovering cool stuff that we humans can then look at more closely; we talked about taking the expensive computations we're doing now and figuring out how to do them much faster and better with AI; and finally, let's get really theoretical — things like discovering equations, having the fundamental insights. This is closest to what I've been doing in my own group.

[02:27:59]

We talked earlier about the whole AI Feynman project: if you just have some data, how do you automatically discover equations that seem to describe it well, that you can then go back to as a human and work with and test and explore? You asked a really good question about whether this is sort of a search problem. That's very deep, actually, what you said, because it is. Suppose I ask you to prove a mathematical theorem.

[02:28:27]

Mm hmm. What is a proof in math? It's just a long string of logical steps that you can write out in symbols. And once you find it, it's very easy to write a program to check whether it's a valid proof or not.

[02:28:41]

So why is it so hard to prove things, then? Well, because there are ridiculously many possible candidate proofs that you could write down. If the proof contains 10,000 symbols, even if there were only 10 options for what each symbol could be, that's 10 to the power 10,000 possible proofs, which is way more than there are atoms in our universe. So you could say it's trivial to prove these things — you just write a computer program to generate all strings and then check:

[02:29:07]

Is this a valid proof? No. Is this one a valid proof? No.

[02:29:13]

And then you just keep doing this forever. But it is fundamentally a search problem: you just want to search the space of all strings of symbols to find one that is the proof, right? And there's a whole area of machine learning called search: how do you search through some giant space to find the needle in the haystack?
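A tiny toy sketch of the asymmetry being described — checking any one candidate is trivial, but the space of candidates explodes as 10 to the power of the length — with an entirely made-up "proof checker":

```python
from itertools import product

ALPHABET = "0123456789"          # pretend each proof symbol is one of 10 options

def is_valid_proof(candidate: str) -> bool:
    """Toy 'proof checker': cheap to run on any single candidate.
    (Here it just looks for one hypothetical target string.)"""
    return candidate == "31415"

# Checking a single candidate is easy...
print(is_valid_proof("27182"), is_valid_proof("31415"))

# ...but the number of candidates grows as 10**length.
for length in (5, 10, 20):
    print(f"length {length}: {10**length:.3g} candidates")
print("length 10000: 10^10000 candidates (far more than atoms in the universe)")

# Brute-force enumeration only works for tiny lengths:
found = next("".join(s) for s in product(ALPHABET, repeat=5) if is_valid_proof("".join(s)))
print("found by exhaustive search:", found)
```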

[02:29:39]

It's easier in cases where there's a clear measure of good — you're not just right or wrong, but this is better and this is worse — so you can maybe get some hints as to which direction to go in. That's where the neural networks we talked about work so well.

[02:29:53]

I mean, it's such a human thing, that moment of genius — figuring out the intuition of what's good, essentially.

[02:30:02]

I mean, we thought that that was it — maybe it's not, right?

[02:30:06]

We thought that about chess, right? Exactly — that the ability to see like 10, 15, sometimes 20 steps ahead was not a calculation humans were performing; it was some kind of weird intuition about different patterns, about board positions, about relative positions, somehow stitching stuff together. A lot of it is just intuition. But then you have, I guess, AlphaZero being the first one that did the self-play. It just came up with this.

[02:30:37]

It was able to learn, through the self-play mechanism, this kind of intuition. Exactly. But just like you said, it's so fascinating to think, within the space of totally new ideas — can that be done in developing theorems?

[02:30:54]

We know it can be done by neural networks, because we did it with the neural networks in the craniums of the great mathematicians of humanity. And I'm so glad you brought up AlphaZero, because that's the counterexample. It turned out we were flattering ourselves when we said intuition is something different, that only humans can do it, that it's not information processing. It's really instructive, I think, to compare the chess computer Deep Blue, which beat Kasparov, with AlphaZero, which beat Lee Sedol in Go, because for Deep Blue there was no intuition.

[02:31:34]

There was some intuition humans had programmed in: after humans had played a lot of games, they told the computer, you know, count the pawn as one point, the bishop as three points, the rook as five points, and so on. You add it all up, then you add some extra points for passed pawns and subtract if the opponent has them, and blah blah blah. And then what Deep Blue did was just search — very brute force, trying many, many moves ahead, all these combinations — and it could think much faster than Kasparov, and it won.
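The kind of hand-coded evaluation being described — humans programming the "intuition" in directly — looks roughly like this. The piece values are the ones in the quote; the passed-pawn bonus is a made-up illustrative number:

```python
# Hand-coded, Deep Blue-style evaluation: the notion of "good" is written by humans.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}   # pawn=1, bishop=3, rook=5, ...
PASSED_PAWN_BONUS = 0.5                                   # illustrative extra points

def evaluate(my_pieces, their_pieces, my_passed_pawns=0, their_passed_pawns=0):
    """Material count plus hand-tuned bonuses; positive means good for us."""
    score = sum(PIECE_VALUES[p] for p in my_pieces)
    score -= sum(PIECE_VALUES[p] for p in their_pieces)
    score += PASSED_PAWN_BONUS * (my_passed_pawns - their_passed_pawns)
    return score

# Up a rook for a bishop, with one extra passed pawn:
print(evaluate("PPPPRRNNBQ", "PPPPRNNBBQ", my_passed_pawns=1))   # 2.5
```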

[02:32:08]

And that, I think, inflated our egos in a way it shouldn't have, because people started to say, yeah, it's just brute-force search; it has no intuition.

[02:32:17]

AlphaZero really popped our bubble there, because AlphaZero does — yes, it does also do some of that search — but it also has this intuition module, which in geek-speak is called a value function, where it just looks at the board and comes up with a number for how good the position is.

[02:32:40]

The difference was that no human told it how good a position is — it just learned it. And MuZero is the coolest, or scariest, of all, depending on your mood, because the same basic AI system will learn what a good board position is regardless of whether it's chess or Go or shogi or Pac-Man or Ms. Pac-Man or Breakout or Space Invaders or a bunch of other games. You don't tell it anything, and it gets this intuition after a while for what's good.
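By contrast, a learned value function is just a function from position features to "how good is this", fit from game outcomes rather than from hand-tuned rules. A minimal sketch of that idea — linear features, synthetic outcomes, plain gradient descent, nothing like the real AlphaZero/MuZero networks or training pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend each position is described by a small feature vector, and each game
# that passed through the position ended in an outcome near -1, 0, or +1.
n_positions, n_features = 5_000, 8
X = rng.normal(size=(n_positions, n_features))        # position features (hypothetical)
true_w = rng.normal(size=n_features)                  # hidden "truth" generating outcomes
outcomes = np.tanh(X @ true_w) + 0.3 * rng.normal(size=n_positions)

# Learned value function: fit weights so v(position) predicts the eventual outcome.
w = np.zeros(n_features)
lr = 0.01
for _ in range(200):                                  # gradient descent on squared error
    pred = np.tanh(X @ w)
    grad = X.T @ ((pred - outcomes) * (1 - pred**2)) / n_positions
    w -= lr * grad

def value(position_features):
    """Look at a position, return a number for how good it is -- learned, not hand-coded."""
    return float(np.tanh(position_features @ w))

print("learned value of a fresh position:", value(rng.normal(size=n_features)))
```

The contrast with the previous snippet is the whole point: here no human wrote down what counts as good; the weights come from outcomes.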

[02:33:15]

So this is very hopeful for science, I think, because if it can get intuition for what's a good position there, maybe it can also get intuition for what are some good directions to go if you're trying to prove something.

[02:33:28]

One of the most fun things in my science career is when I've been able to prove some theorem about something, and it's very heavily intuition-guided, of course. I don't sit and try all random strings; I have a hunch that, you know, this reminds me a little bit of this other proof I've seen for that thing.

[02:33:45]

So maybe I first try this — no, it didn't work out — but the way it failed reminds me of that. So combining that intuition with all these brute-force capabilities — I think it's going to be able to help physics, too.

[02:34:04]

Do you think there will be a day when an AI system, being the primary contributor — let's say 90 percent plus — wins the Nobel Prize in physics? Obviously they'll give it to the humans, because we humans don't like to give prizes to machines; they'll give it to the humans behind the system. You could argue that AI has already been involved in some Nobel Prizes — probably some of the black hole stuff and things like that.

[02:34:29]

Yeah, we don't like giving prizes to other life forms. If someone wins a horse-racing contest, they don't give the prize to the horse either. That's true.

[02:34:38]

But do you think we might be able to see something like that in our lifetimes — the first system that makes us think about a Nobel Prize seriously? It's like AlphaFold is making us think about a Nobel Prize in medicine and physiology, perhaps for discoveries that are a direct result of something discovered by AlphaFold. Do you think in physics we might be able to see that in our lifetimes?

[02:35:06]

I think what's probably going to happen is more of a blurring of the distinctions. Today, if somebody uses a computer to do a computation that wins them the Nobel Prize, nobody's going to dream of giving the prize to the computer — they're going to say it was just a tool. I think for these things, too, people are going to view the computer as a tool for a long time. But what's going to change is the ubiquity of machine learning, I think.

[02:35:40]

At some point in my lifetime, finding a human physicist who knows nothing about machine learning is going to be almost as hard as it is today to find a human physicist who says, oh, I don't know anything about computers, or, I don't use math. It would just be a ridiculous concept. But the thing is, there is a magic moment, though — like with AlphaZero — when the system surprises us, in a way where the best people in the world

[02:36:12]

truly learn something from the system, in a way where you feel like it's another entity — like the way people, the way Magnus Carlsen, the way certain people are looking at the work of AlphaZero. It truly is no longer a tool, in the sense that it doesn't feel like a tool; it feels like some other entity. So there is a magic difference, where, you know, if an AI system is able to come up with an insight that surprises everybody in some major way — a phase shift in our understanding of some particular science, or some particular aspect of physics —

[02:36:55]

I feel like that is no longer a tool, and then you can start to say that it perhaps deserves the prize.

[02:37:04]

So for sure, the more important, the more fundamental transformation of 21st-century science is exactly what you're saying — that probably everybody will be doing machine learning to some degree. Like, if you want to be successful at unlocking the mysteries of science, you should be doing machine learning. But it's exciting to think about whether there will be an AI that comes along that's super surprising and makes us question who the real inventors are in this world.

[02:37:35]

Yeah, yeah.

[02:37:37]

I think the question isn't if it's going to happen, but when. But also, in my mind, the time when that happens is more or less the same time when we get artificial general intelligence, and then we have a lot bigger things to worry about than the Nobel Prize, right? Because when you have machines that can outperform our best scientists at science, they can probably outperform us at a lot of other stuff as well, which can, at a minimum, make them incredibly powerful agents in the world.

[02:38:14]

And I think it's a mistake to think we only have to start worrying about loss of control when machines get to AGI across the board, when they can do everything, all our jobs. Long before that, they'll be hugely influential. We talked at length about how the hacking of our minds, with algorithms trying to get us glued to our screens, has already had a big impact on society.

[02:38:47]

That was an incredibly dumb algorithm in the grand scheme of things — just supervised machine learning — yet it had a huge impact. So I just don't want us to be lulled into a false sense of security and think there won't be any societal impact until things reach human level, because it's happening already. And I was just thinking the other week, you know, when I see some scaremongering going, "oh, the robots are coming" —

[02:39:12]

the implication is always that they're coming to kill us. And maybe you should have worried about that if you were in Nagorno-Karabakh during the recent war there. But more seriously, the robots are coming right now, and they're mainly not coming to kill us — they're coming to hack us. They're coming to hack our minds into buying things that maybe we didn't need, into voting for people who may not have our best interests in mind.

[02:39:40]

And it's kind of humbling, I think, actually, as a human being, to admit that it turns out that our minds are actually much more hackable than we thought.

[02:39:50]

And the ultimate insult is that we are actually getting hacked by machine learning algorithms that are, in some objective sense, much dumber than us. But maybe we shouldn't be so surprised, because, you know, how do you feel about cute puppies? I love them. So, you would probably argue that by some across-the-board measure you're more intelligent than they are, but boy, are cute puppies good at hacking us, right?

[02:40:16]

Yeah, they move into our house, persuade us to feed them and do all these things — what do they ever do for us? But they're being cute and making us feel good. So if puppies can hack us, maybe we shouldn't be so surprised if pretty dumb machine learning algorithms can hack us too — not to speak of cats, which are a whole other level. And I think, to counter your previous point — let us not think there are no evil creatures in this world:

[02:40:43]

we can all agree that cats are as close to objective evil as we can get. But that's just me saying that, OK. Have you seen the cartoon?

[02:40:52]

I think it's maybe from The Onion, with this incredibly cute kitten, and the caption underneath just says something like, "thinks about murder all day." Exactly. That's accurate.

[02:41:08]

You mentioned, and I find this fascinating, that there might be a link between post-biological AGI and SETI.

[02:41:13]

So last time we talked, you talked about this intuition that we humans might be quite unique in our galactic neighborhood, perhaps in our galaxy, perhaps in the entirety of the observable universe — we might be the only intelligent civilization here —

[02:41:39]

and you argued pretty well for that thought. So I have a few little questions around this. One:

[02:41:48]

the scientific question — in which way would you be wrong, if you were wrong in that intuition? In which way do you think you'd be surprised? Like, why were you wrong, if we find out you end up being wrong — in which dimension? Is it because we can't see them? Is it because the nature of their intelligence, or the nature of their life, is totally different than we can possibly imagine? Is it because of, I mean, something about the great filters and surviving them?

[02:42:30]

Or maybe because we're being protected from their signals — all those explanations for why we haven't heard a big, loud, like, red light that says, yeah, we're here.

[02:42:47]

So there are actually two separate things there that I could be wrong about — two separate claims that I made.

[02:42:54]

One of them is the claim that I think most civilizations,

[02:43:02]

when they're going from simple bacteria-like things to spacefaring, space-colonizing civilizations, spend only a very, very tiny fraction of their life being where we are now.

[02:43:20]

That I could be wrong about. The other one I could be wrong about is a quite different statement: I'm guessing that we are the only civilization in our observable universe — from which light has reached us so far — that has actually gotten far enough to invent telescopes. So let's talk about both of them in turn, because they really are different. The first one: if you look at the N equals one data point we have on this planet —

[02:43:46]

Yeah.

[02:43:47]

we spent four and a half billion years futzing around on this planet with life, and most of it was pretty lame stuff from an intelligence perspective — you know, bacteria — and then the dinosaurs.

[02:44:05]

Then things greatly accelerated. The dinosaurs spent over one hundred million years stomping around here without even inventing smartphones, and then, very recently, we've only spent four hundred years going from Newton to us, right?

[02:44:20]

In terms of technology, look what we've done. You know, when I was a little kid, there was no Internet even. So I think it's pretty likely, in the case of this planet, that we're either going to really get our act together and start spreading life into space this century, doing all sorts of great things, or we're going to wipe ourselves out. I could be wrong, in the sense that maybe what happened on this Earth is very atypical.

[02:44:51]

And for some reason, what's more common on other planets is that they spend an enormously long time futzing around with ham radio and things but never really take it to the next level, for reasons I haven't understood. I'm humble and open to that, but I would bet at least ten to one that our situation is more typical, because with the whole thing of Moore's law and accelerating technology, it's pretty obvious why it's happening. Everything that grows exponentially we call an explosion, whether it's a population explosion or a nuclear explosion, and it's always caused by the same thing:

[02:45:23]

the next step triggers the step after that — today's technology enables tomorrow's technology, and that enables the next level — and because the technology keeps getting better, of course, the steps can come faster and faster. The other question that I might be wrong about is the much more controversial one, I think. But before we close out on the first one: if it's true that most civilizations spend only a very short amount of their total time in the stage between, say, inventing telescopes or mastering electricity and doing space travel —

[02:46:09]

if that's actually generally true, then it should apply also elsewhere out there, so we should be very, very surprised if we find some random civilization and happen to catch them exactly in that very, very short stage. It's much more likely that we find a planet full of bacteria, or that we find some civilization that's already post-biological and has done some really cool galactic construction projects in its galaxy.

[02:46:38]

Will we be able to recognize them, do you think? Is it possible that we just can't? I mean, this post-biological world — could it be just existing in some other dimension? Could it be all a virtual reality game for them, or something? I don't know — something that changes completely whether we would be able to detect them.

[02:46:58]

We have to be honest and very humble about this. I think I said earlier that the number one principle of being a scientist is you have to be humble, willing to acknowledge that everything we think just might be totally wrong. Of course, you can imagine some civilization where they all decide to become Buddhists, very inward-looking, and move into their little virtual reality and not disturb the flora and fauna around them, and we might not notice them.

[02:47:23]

But this is a numbers game, right? If you have millions of civilizations out there, or billions of them, all it takes is one with a more ambitious mentality that decides, hey, we are going to go out and settle a bunch of other solar systems, and maybe galaxies — and then it doesn't matter if the rest are a bunch of quiet Buddhists; we're still going to notice that expansionist one. And it seems like quite the stretch to assume that... Now we know, even in our own galaxy, that there are

[02:47:56]

probably a billion or more planets that are pretty Earth-like, and many of them were formed over a billion years before ours, so they had a big head start. So if you also assume that life happens kind of automatically on an Earth-like planet, I think it's quite the stretch to then go and say, OK, so there are another billion civilizations out there that also have our level of tech,

[02:48:22]

and they all decided to become Buddhists, and not a single one decided to go, like, Hitler on the galaxy and say, we need to go out and colonize, and not a single one decided, for more benevolent reasons, to go out and get more resources. That seems like a bit of a stretch, frankly. And this leads into the second thing you challenged me on, that I might be wrong about: how rare or common is life?

[02:48:47]

You know, Frank Drake, when he wrote down the Drake equation, multiplied together a huge number of factors and said, we don't know any of them, so we know even less what you get when you multiply together the whole product.

[02:48:59]

And since then, a lot of those factors have become much better known. One of his big uncertainties was how common it is that a solar system even has planets. Well, now we know it's very common — Earth-like planets are a dime a dozen; there are many, many of them, even in our galaxy. At the same time, you know, thanks to — I'm a big supporter of the SETI project and its cousins, and I think we should keep doing this —

[02:49:26]

we've learned a lot. We've learned that, so far, all we have are unconvincing hints, nothing more. And there are certainly many scenarios where it would be dead obvious if there were a hundred million other human-level civilizations in our galaxy.

[02:49:44]

It would not be that hard to notice some of them with today's technology, and we haven't. So what we can say is, well, OK —

[02:49:55]

we can rule out that there is a human-level civilization on the Moon, and in fact in many nearby solar systems — whereas we cannot rule out, of course, that there is something like Earth sitting in a galaxy five billion light-years away.

[02:50:12]

But we've ruled out a lot, and that's already kind of shocking, given that there are all these planets there, you know. So, like, where are they? Where are they all? That's the classic Fermi paradox. So my argument, which might very well be wrong, is very simple, really; it just goes like this: OK, we have no clue about this. The probability of getting life on a random planet could, a priori, be 10 to the minus 1, or 10 to the minus 10, 10 to the minus 20, 10 to the minus 30, 10 to the minus 40 — basically, every order of magnitude is about equally likely.

[02:50:48]

When you then do the math and ask how close our nearest neighbor is, it's again equally likely that it's 10 to the 10 meters away, 10 to the 20 meters away, or 10 to the 30 meters away. We have some nerdy ways of talking about this with Bayesian statistics and a uniform log prior, but that's irrelevant; this is the simple, basic argument. And now comes the data. So we can say, OK, what about all these orders of magnitude?

[02:51:11]

Ten to the 26 meters away — that's the edge of our observable universe; if it's farther than that, light hasn't even reached us yet. If it's much less than 10 to the 16 meters away — certainly if it's no farther away than the Sun — we can definitely rule that out, you know?

[02:51:31]

And so I think about it like this: a priori, before we looked through telescopes, it could be 10 to the 15 meters, 10 to the 20s, 10 to the 30s, 10 to the 40s, 10 to the 50s — equally likely anywhere in there. And now we've ruled out, like, this whole chunk at the near end.
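As a toy illustration of that log-uniform-prior argument, here is the arithmetic with made-up bounds (the prior range and the "already ruled out" distance below are illustrative placeholders, not Tegmark's actual numbers; only the 10^26-meter horizon figure comes from the conversation):

```python
# Toy version of the "every order of magnitude equally likely" argument.
log10_min, log10_max = 10.0, 50.0   # assumed prior: nearest neighbor between 10^10 and 10^50 m
log10_ruled_out = 21.0              # assume anything closer would already have been noticed
log10_horizon = 26.0                # edge of the observable universe, ~10^26 m

def prob_in_band(lo, hi):
    """Prior probability mass between 10^lo and 10^hi meters under a uniform-in-log prior."""
    lo, hi = max(lo, log10_min), min(hi, log10_max)
    return max(hi - lo, 0.0) / (log10_max - log10_min)

print("P(already ruled out, closer than 10^21 m):  ", prob_in_band(log10_min, log10_ruled_out))
print("P(in the not-yet-seen but visible band):    ", prob_in_band(log10_ruled_out, log10_horizon))
print("P(forever beyond our observable universe):  ", prob_in_band(log10_horizon, log10_max))
```

Under any such prior, the middle band — close enough to ever see, yet far enough that we haven't already — carries only a modest slice of the probability, which is the point being made.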

[02:51:49]

And most of the rest is outside — beyond the edge of our observable universe. So I'm certainly not saying I don't think there's any life elsewhere in space; if space is infinite, then you're basically 100 percent guaranteed that there is. But the probability

[02:52:05]

that the nearest neighbor happens to be in this little region — between where we would have seen it already and where we will never see it — is actually significantly less than one, I think. And I think there's a moral lesson from this which is really important: to be good stewards of this planet and this shot we've had. It can be very dangerous to say, oh, you know, it's fine if we nuke our planet or ruin the climate or mess it up in some other way, because I know there is this nice Star Trek fleet out there —

[02:52:40]

they're going to swoop in and take over where we failed, just like it wasn't a big deal that the Easter Islanders wiped themselves out. That's a dangerous way of lulling yourself into a false sense of security. If it's actually the case that it might be up to us, and only us, the whole future of intelligent life in our observable universe — then I think it's both. It really puts a lot of responsibility on our shoulders. Inspiring.

[02:53:09]

And it's a little bit terrifying, but it's also inspiring.

[02:53:12]

But it's empowering, I think, most of all, because the biggest problem today — I see this even when I teach, right —

[02:53:18]

is that so many people feel that it doesn't matter what they do, what we do; we feel disempowered — oh, it makes no difference.

[02:53:28]

This is about as far from that as you can come: realizing that what we do on our little spinning ball here, in our lifetime, could make the difference for the entire future of life in our universe.

[02:53:41]

How empowering is that? The survival of consciousness. I mean, a very similar kind of empowering aspect of the Drake equation is: say there are a huge number of intelligent civilizations springing up everywhere, but because of the factor in the Drake equation that is the lifetime of a civilization, maybe many of them hit a wall.

[02:54:05]

And just like you said, it's clear that for us, the great filter — the one possible great filter — seems to be coming, you know, in the next hundred years.

[02:54:16]

So it's also empowering to say, OK, well, we have a chance to get through the great filter that may have gotten most of them.

[02:54:27]

Exactly. Nick Bostrom has articulated this really beautifully too: you know, every time yet another search for life on Mars comes back negative, or something like that —

[02:54:40]

our odds of surviving go up. Yes. It sounds like you already made the argument in broad brush there, right? But the point is, we already know there is a crap ton of planets out there that are Earth-like, and we also know that most of them do not seem to have anything like our kind of life on them. So what went wrong? There's clearly at least one filter, one roadblock, along the evolutionary path from no life to spacefaring life.

[02:55:11]

And where is it? Is it in front of us or is it behind us? Right.

[02:55:17]

If there's no filter behind us, and we keep finding all sorts of little mice on Mars or whatever —

[02:55:26]

Right — that's actually very depressing, because it makes it much more likely that the filter is in front of us, and that what's actually going on is like the ultimate dark joke: whenever a civilization invents sufficiently powerful tech, it just sets the clock, and then after a while it goes poof, for one reason or another, and wipes itself out. Wouldn't that be, like, utterly depressing, if we're actually doomed? Whereas if it turns out that there is a great filter early on — that, for whatever reason, makes it really hard to get to the stage of

[02:56:02]

sexually reproducing organisms, or even the first ribosome, or whatever — or maybe you have lots of planets with dinosaurs and cows, but for some reason they tend to get stuck there and never invent smartphones —

[02:56:16]

all of those would be huge boosts for our own odds, because: been there, done that.

[02:56:23]

You know, it doesn't matter how hard or unlikely it was to get past that roadblock, because we did. And that makes it likely that the future is in our own hands — we're not doomed.

[02:56:36]

So that's why I think the idea that life is rare in the universe is not just something there's some evidence for, but also something we should actually hope for. So that's the mortality, the death, of human civilization that we've been discussing — and life maybe prospering beyond any kind of great filter. Do you think about your own death?

[02:57:04]

Does it make you sad that you may not witness some of the — you know, you lead a research group working on some of the biggest questions in the universe, actually, both on the physics and the AI side — does it make you sad that you may not be able to see some of these exciting things come to fruition that we've been talking about?

[02:57:26]

Of course. Of course it sucks, the fact that I'm going to die. I remember once, when I was much younger, my dad made this remark that life is fundamentally tragic, and I was like, huh, what are you talking about? Many years later, I feel I now totally understand what he means. You know, we grow up as very little kids and everything is infinite and it's so cool, and then suddenly we find out that actually, you know, it can get game over at some point.

[02:57:55]

So, of course, it's something that's sad. Are you afraid? No, not in the sense that I think anything terrible is going to happen after I die, or anything like that — no, I think it's really going to be game over. But it's more that it makes me very acutely aware of what a wonderful gift it is that I get to be alive right now, and it's a steady reminder to just live life to the fullest and really enjoy it, because it is finite. And we all get regular reminders, when someone near and dear to us dies, that one day it's going to be our turn — it just kind of focuses you.

[02:58:47]

I wonder what it would feel like actually to be an immortal being, whether they might even enjoy some of the wonderful things of life a little bit less because there isn't that finiteness. Yeah, do you think that could be a feature, not a bug, the fact that we beings are finite? Maybe there are lessons there for engineering artificial intelligence systems as well, ones that are conscious. Is it possible that the reason pistachio ice cream is delicious is the fact that you're going to die one day?

[02:59:25]

And you will not have all the pistachio ice cream to eat because of that fact. Well, let me say two things.

[02:59:33]

First of all, it's actually quite profound, what you're saying. I do think I appreciate the pistachio ice cream a lot more knowing that there is only a finite number of times I get to enjoy it, and I can only remember a finite number of times in the past. And moreover, my life is not so long that it starts to feel like things are repeating themselves; in general, it's still new and fresh. I also think, though, that...

[02:59:59]

Death is a little bit overrated, in the sense that it comes from an outdated view of physics and of what life actually is. Because if you ask what it is exactly that's going to die, what am I, really? When I say I feel sad about the idea of myself dying, am I really sad that this skin cell here is going to die? Of course not, because it's going to die next week anyway, and I'll grow a new one.

[03:00:28]

Right.

[03:00:29]

And it's not any of my cells that I'm really associating with who I am, nor is it any of my atoms or quarks or electrons. In fact, basically all of my atoms get replaced on a regular basis. Right. So what is it that's really me? From a modern physics perspective, it's the information processing in me.

[03:00:54]

That's where my memory is. That's my values, my dreams, my passion, my love.

[03:01:05]

That's what's really fundamentally me, and frankly, not all of that will die when my body dies. Take Richard Feynman, for example.

[03:01:18]

His body died of cancer, you know, but many of the ideas that he felt made him him actually live on. This is my own little personal tribute to Richard Feynman: I try to keep a little bit of him alive in myself. I even quoted him today, right?

[03:01:35]

Yeah. He almost came alive for a brief moment in this conversation. Yeah. Yeah.

[03:01:39]

And this honestly gives me some solace.

[03:01:42]

You know, when I work as a teacher, I feel that if I can actually share a bit of myself that my students feel is worthy enough to copy and adopt as part of the things they know or believe or aspire to, then I live on also a little bit in them. Right? So being a teacher is a little bit of what I...

[03:02:14]

That's something that also contributes to making me a teeny bit less mortal, maybe, because I'm not, at least not all of me, going to die all at once. And I find it a beautiful tribute to people we respect if we can remember them and carry in us the things we felt were the most awesome about them; then they live on. I'm getting a bit emotional here, but it's a very beautiful idea you bring up.

[03:02:45]

I think we should drop this old-fashioned materialism where we just equate who we are with our quarks and electrons. There's no scientific basis for that, really, and it's also very uninspiring.

[03:03:02]

Now, if you look a little bit towards the future, right, one thing which really sucks about humans dying is that even though some of their teachings and memories and stories and ethics and so on will be copied by those around them, hopefully, a lot of it can't be copied and just dies with them, with their brain. And that really sucks. That's the fundamental reason why we find it so tragic when someone goes from having all this information there to it just being gone, right?

[03:03:33]

With more post-biological intelligence.

[03:03:37]

That's going to shift a lot. Right? The only reason it's so hard to make a backup of your brain in its entirety is exactly because it wasn't built for that. Right? If you have a future machine intelligence, there's no reason it has to die at all. If it wants to copy itself, whatever it is, into some other quark blob, it can copy all of it. Right? And so...

[03:04:06]

So in that sense, you can get immortality, because all the information can be copied out of any individual entity. And it's not just mortality that will change if we get more post-biological life; with that, so will the whole individualism we have now. Right? The reason we make such a big distinction between me and you is exactly because we're quite limited in how much we can copy. Like, I would love to just go like this and copy your Russian-speaking skills.

[03:04:42]

Yeah. Wouldn't it be awesome?

[03:04:44]

Why can't I? Instead, I have to actually work for years to get better at it.

[03:04:48]

But if we were robots and could just copy and paste freely, then that distinction loses meaning completely.

[03:04:57]

It washes away the sense of what immortality is and also individuality a little bit.

[03:05:03]

We would start feeling much more... Maybe we would feel much more collaborative with each other, if we could say, hey, you know, you give me your Russian and I'll give you whatever I can; now you can speak Swedish. Maybe that's a bad trade for you, but take whatever else you want from my brain. And there have been a lot of sci-fi stories about hive minds and so on, where experiences can be more broadly shared.

[03:05:31]

And I think... I don't pretend to know what it would feel like to be a...

[03:05:40]

A superintelligent machine. But I'm quite confident that however it feels about mortality and individuality will be very, very different from how it is for us. Well, for us, mortality and finiteness seem to be pretty important at this particular moment. And so all good things must come to an end, just like this conversation. I saw that coming.

[03:06:06]

This is the world's worst transition. I could talk to you forever. It's such a huge honor that you spend time with me. The honor is mine.

[03:06:15]

Thank you so much for getting me essentially to start this podcast by doing the first conversation, and for making me realize I was falling in love with conversation itself. And thank you so much for inspiring so many people in the world with your books, with your research, with your talks, and with the ripple effect of friends, including Elon and everybody else that you inspire. So thank you so much for talking to me.

[03:06:43]

Thank you. I feel so fortunate that you're doing this podcast and getting so many interesting voices out there into the ether, not just the five-second soundbites, but the kinds of interviews you do, where you really let people go into depth in a way which we certainly need in this day and age. And I got to be number one. I feel important. Yeah, you started it. Thank you so much, Max. Thanks for listening to this conversation with Max Tegmark, and thank you to our sponsors: the Jordan Harbinger Show, Four Sigmatic mushroom coffee, BetterHelp online therapy, and ExpressVPN.

[03:07:24]

So the choice is: wisdom, caffeine, sanity, or privacy. Choose wisely, my friends, and if you wish, click the sponsor links below to get a discount and to support this podcast. And now let me leave you with some words from Max Tegmark: if consciousness is the way that information feels when it's processed in certain ways, then it must be substrate-independent. It's only the structure of the information processing that matters, not the structure of the matter doing the information processing.

[03:07:57]

Thank you for listening and hope to see you next time.