
Transcript

[00:00:00]

Welcome to the Artificial Intelligence podcast. My name is Lex Fridman. I'm a research scientist at MIT. This podcast is an extension of the courses on deep learning, autonomous vehicles, and artificial general intelligence that I've taught and organized.

[00:00:16]

It is not only about machine learning or robotics or neuroscience or philosophy or any one technical field. It considers all of these avenues of thought in a way that is hopefully accessible to everyone.

[00:00:31]

The aim here is to explore the nature of human and machine intelligence, the big picture of understanding the human mind and creating echoes of it in the machine. To me, that is one of our civilization's most challenging and exciting scientific journeys into the unknown. I will first repost parts of previous YouTube conversations and lecture content that can be listened to without video. If you want to see the video version, please go to my YouTube channel. My username is Lex Fridman, there and on Twitter.

[00:01:07]

So reach out and connect.

[00:01:09]

If you find these conversations interesting, then moving forward, this podcast will be long-form conversations with some of the most fascinating people in the world who are thinking about the nature of intelligence. But first, like I said, I will be posting old content, but now in audio form. For a little while, I'll probably repeat this intro for reposts of the YouTube content like this episode, and I'll try to keep it to what looks to be just over two minutes, maybe 2:30 or so, in the future.

[00:01:41]

If you want to skip this intro, just jump to the 2:30 mark. In this episode, I talk with Max Tegmark. He's a professor at MIT, a physicist who has spent much of his career studying and writing about the mysteries of our cosmological universe, and who is now thinking and writing about the beneficial possibilities and existential risks of artificial intelligence. He's the co-founder of the Future of Life Institute and the author of two books, Our Mathematical Universe and Life 3.0.

[00:02:13]

He is truly an out-of-the-box thinker, so I really enjoyed this conversation. I hope you do as well.

[00:02:35]

Do you think there's intelligent life out there in the universe? Let's open up with an easy question. I have a minority view here, actually. When I give public lectures, I often ask for a show of hands: who thinks there's intelligent life out there somewhere else? And almost everyone puts their hands up. And when I ask why, they'll be like, oh, there are so many galaxies out there, there's got to be. But I'm a numbers nerd, right? So when you look more carefully at it, it's not so clear at all. When we talk about our universe, first of all, we don't mean all of space.

[00:03:11]

We actually mean, I don't know, you can throw me the universe if you want, it's behind you there. It would simply mean the spherical region of space from which light has had time to reach us so far during the 13.8 billion years since the Big Bang. There's more space here, but this is what we call our universe, because that's all we have access to. So is there intelligent life here that's gotten to the point of building telescopes and computers?

[00:03:39]

My guess is no, actually. The probability of it happening on any given planet is some number, and we don't know what it is. What we do know is that the number can't be super high, because there are over a billion Earth-like planets in the Milky Way galaxy alone, many of which are billions of years older than Earth. And aside from some UFO believers, there isn't much evidence that any superhuman civilization has come here at all. And so that's the famous Fermi paradox.

[00:04:15]

Right. And then if you work the numbers, what you find is that you have no clue what the probability is of getting life on a given planet. It could be 10^-10, 10^-20, or 10^-2; any power of ten is sort of equally likely if you want to be really open-minded. That translates into it being equally likely that our nearest neighbor is 10^16 meters away, 10^26 meters away, or 10^36 meters away. And, you know, by the time you get

[00:04:45]

much below 10^16 already, we pretty much know there is nothing else that close, because they would have discovered us. Yeah, they would have discovered us long ago, or if they're really close, we would have probably noticed some engineering projects that they're doing.
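The numbers argument here can be sketched as a back-of-the-envelope calculation. Everything below is a toy model of my own with assumed round figures (a Milky-Way-sized volume, a billion Earth-like planets, a uniform distribution naively extrapolated beyond the galaxy), not Tegmark's actual computation; the point is only that since nearest-neighbor distance scales as the inverse cube root of the density of civilizations, sweeping the per-planet probability over many powers of ten sweeps the expected distance over many distance scales.

```python
import math

# Round-number assumptions (mine, for illustration only):
GALAXY_VOLUME_M3 = 8e60    # rough volume of a Milky-Way-sized disk, in cubic meters
EARTHLIKE_PLANETS = 1e9    # "over a billion Earth-like planets" from the conversation

def nearest_neighbor_distance(p_civilization: float) -> float:
    """Rough expected distance to the nearest civilization, assuming each
    Earth-like planet independently hosts one with probability p and the same
    density continues beyond the galaxy: d ~ density^(-1/3)."""
    density = EARTHLIKE_PLANETS * p_civilization / GALAXY_VOLUME_M3
    return (4.0 / 3.0 * math.pi * density) ** (-1.0 / 3.0)

for p in (1.0, 1e-10, 1e-20, 1e-30):
    print(f"p = {p:.0e}  ->  nearest neighbor ~ {nearest_neighbor_distance(p):.1e} m")
```

With p = 1 the nearest neighbor lands around 10^17 m; every factor of 10^-30 in p pushes it out by another factor of 10^10, which is why treating every power of ten for p as equally likely makes 10^16 m and 10^26 m comparably plausible answers.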

[00:05:02]

And if it's beyond 10^26 meters, that's already outside of here.

[00:05:07]

So.

[00:05:07]

So my guess is actually that we are the only life in here that's gotten to the point of building advanced tech, which I think puts a lot of responsibility on our shoulders not to screw up. You know, I think people who take for granted that it's OK for us to screw up, have an accidental nuclear war or go extinct somehow, do so because there's a sort of Star Trek-like assumption that some other life forms out there are going to come and bail us out,

[00:05:36]

and that it therefore doesn't matter so much. I think they're lulling us into a false sense of security. I think it's much more prudent to say: let's be really grateful for this amazing opportunity we've had, and make the best of it, just in case it is down to us. So from a physics perspective, do you think intelligent life is unique? Not from a sort of statistical view of the size of the universe, but from the basic matter of the universe: how difficult is it for intelligent life to come about?

[00:06:06]

The kind of advanced-tech-building life, is it implied in your statement that it's really difficult to create something like the human species?

[00:06:15]

Well, I think what we know is that going from no life to having life that can do our level of tech, or going beyond that and actually settling our whole universe with life, there is some major roadblock there, some Great Filter, as it's sometimes called, which is tough to get through. That roadblock is either behind us or in front of us. So I'm hoping very much that it's behind us.

[00:06:48]

I'm super excited every time we get a new report from NASA saying they failed to find any life on Mars. I'm like, awesome.

[00:06:57]

Because that suggests that the hard part, maybe it was getting the first ribosome or some very low-level kind of

[00:07:05]

stepping stone, is behind us, so that we're home free. Because if that's true, then the future is really only limited by our own imagination. It would be much suckier if it turns out that this level of life is kind of a dime a dozen.

[00:07:19]

But maybe there's some other problem. Like, as soon as a civilization gets advanced technology, within a hundred years they get into some stupid fight with themselves and poof. Yeah. Now, that would be a bummer. Yeah.

[00:07:30]

So you've explored the mysteries of the universe, the cosmological universe, the one that's sitting between us today. I think you've also begun to explore the other universe, which is sort of the mysterious universe of the mind, of the intelligence of intelligent life. So is there a common thread between your interest in, and the way you think about, space and intelligence? Oh, yeah. When I was a teenager, I was already very fascinated by the biggest questions, and I felt that the two biggest mysteries of all in science were the universe out there and the universe in here.

[00:08:11]

Yeah. So it's quite natural, after having spent a quarter of a century of my career thinking a lot about this one, to now indulge in the luxury of doing research on this one. It's just so cool.

[00:08:25]

I feel the time is ripe now, after having greatly deepened our understanding of this one, to start exploring this one. Yeah, because I think a lot of people view intelligence as something mysterious that can only exist in biological organisms like us, and therefore dismiss all talk about artificial general intelligence as science fiction.

[00:08:49]

But from my perspective as a physicist, I am a blob of quarks and electrons moving around in a certain pattern and processing information in certain ways.

[00:08:58]

And this is also a blob of quarks and electrons. I'm not smarter than the water bottle because I'm made of different kinds of quarks. I'm made of up quarks and down quarks, the exact same kinds as this. There's no secret sauce, I think, in me; it's all about the pattern of the information processing. And this means that there's no law of physics saying that we can't create technology which can help us by being incredibly intelligent, and help us crack mysteries that we couldn't.

[00:09:29]

In other words, I think we've really only seen the tip of the intelligence iceberg so far.

[00:09:34]

Yeah. So, perceptronium. Yeah. So you coined this amazing term. It's a hypothetical state of matter, sort of thinking from a physics perspective: what is the kind of matter that can help, as you're saying, subjective experience emerge, consciousness emerge? So how do you think about consciousness from this physics perspective? Very good question. Again, I think many people have underestimated our ability to make progress on this by convincing themselves it's hopeless, because somehow we're missing some ingredient that we need.

[00:10:16]

There's some new consciousness particle or whatever.

[00:10:20]

I happen to think that we're not missing anything, and that the interesting thing about consciousness, that gives us this amazing subjective experience of colors and sounds and emotions and so on, is rather something at the higher level, about the patterns of information processing.

[00:10:40]

And that's why I like to think about this idea of perceptronium: what does it mean for an arbitrary physical system to be conscious, in terms of what its particles are doing, or its information is doing? I hate carbon chauvinism, you know, this attitude that you have to be made of carbon atoms to be smart or conscious. It's something about the information processing

[00:11:04]

that this kind of matter performs. Yeah.

[00:11:06]

And, you know, I have my favorite equations here describing various fundamental aspects of the world. I think one day, maybe someone who is watching this, will come up with the equations that information processing has to satisfy to be conscious. I'm quite convinced there is a big discovery to be made there. Because let's face it, we know that some information processing is conscious, because we are conscious. But we also know that a lot of information processing is not conscious. Most of the information processing happening in your brain right now is not conscious, like the 10 megabits per second coming in just through your visual system.

[00:11:46]

You're not conscious about your heartbeat regulation or most things. Even if I just ask you to read what it says here, you look at it, and then, oh, now you know what it said. But you're not aware of how the computation actually happened. Your consciousness is like the CEO that got an email at the end with the final answer. So what is it that makes a difference? I think that's both a great science mystery.

[00:12:13]

We're actually studying it a little bit in my lab here at MIT, but I also think it's just a really urgent question to answer.

[00:12:20]

For starters, I mean, if you're an emergency room doctor and you have an unresponsive patient coming in, wouldn't it be great if, in addition to having a CT scanner, you had a consciousness scanner that could figure out whether this person is actually having locked-in syndrome or is actually comatose? And in the future, imagine if we build robots or machines that we can have really good conversations with, which I think is most likely to happen, right?

[00:12:52]

Wouldn't you want to know if your home helper robot is actually experiencing anything, or is just a zombie? I mean, what would you prefer? Would you prefer that it's actually unconscious, so that you don't have to feel guilty about switching it off or giving it boring chores? What would you prefer?

[00:13:10]

Well, certainly we would prefer, I would prefer, the appearance of consciousness. But the question is whether the appearance of consciousness is different from consciousness itself. And to sort of dare ask that as a question: do you think we need to understand what consciousness is, solve the hard problem of consciousness, in order to build something like an AGI system? No, I don't think that. I think we will probably be able to build things even if we don't answer that question.

[00:13:44]

But if we want to make sure that what happens is a good thing, we'd better solve it first. So it's a wonderful controversy you're raising there, where you have basically three points of view about the hard problem. There are two different points of view that both conclude that the hard problem of consciousness is BS. On one hand, you have some people, like Daniel Dennett, who say the hard problem of consciousness is just BS because consciousness is the same thing as intelligence.

[00:14:12]

There's no difference. So anything which acts conscious is conscious, just like we are. And then there are also a lot of people, including many top researchers I know, who say the hard problem of consciousness is just bullshit because, of course, machines can never be conscious. They're always going to be zombies, and you never have to feel guilty about how you treat them. And then there's a third group of people, including Giulio Tononi, for example, and Christof Koch and a number of others. I would put myself in this middle camp, which says that actually some information processing is conscious and some is not.

[00:14:52]

So let's find the equation which can be used to determine which it is. And I think we've just been a little bit lazy, kind of running away from this problem for a long time. It's been almost taboo to even mention the C-word in a lot of circles. But we should stop making excuses. This is a science question, and there are ways we can even test any theory that makes predictions for this.

[00:15:19]

And coming back to this helper robot: I mean, you said you would want your helper robot to at least seem conscious, and to have conversations with you.

[00:15:28]

And I think so. But wouldn't you feel a little bit creeped out if you realized that it was just a glossed-up tape recorder, in other words, just a zombie faking emotion? Would you prefer that it actually had an experience, or would you prefer that it's actually not experiencing anything, so you don't have to feel guilty about what you do to it? This is such a difficult question because, you know, it's like when you're in a relationship and you say, well, I love you, and the other person says I love you back. It's like asking, do they really love you back, or are they just saying they love you back?

[00:16:03]

Don't you really want them to actually love you? It's hard to really know the difference between everything seeming like there's consciousness present, there's intelligence present, there's affection, passion, love, and it actually being there. I'm not sure.

[00:16:24]

Can I ask a question, just to make it a bit more pointed? Mass General Hospital is right across the river, right? Yes. Suppose you're going in for a medical procedure and they tell you: for anesthesia, what we're going to do is give you muscle relaxants, so you won't be able to move, and you're going to feel excruciating pain during the whole surgery, but you won't be able to do anything about it.

[00:16:45]

But then we're going to give you this drug that erases your memory of it.

[00:16:49]

Would you be cool about that? What's the difference whether you're conscious about it or not, if there's no behavioral change? Right. That's a really clear way to put it. Yeah, it feels like, in that sense, experiencing it is a valuable quality. So actually being able to have subjective experiences, at least in that case, is valuable. And I think we humans have a little bit of a bad track record of making these self-serving arguments that other entities aren't conscious.

[00:17:26]

People often say, oh, these animals can't feel pain, and it's OK to boil lobsters because we asked them if it hurt and they didn't say anything. And now there was just a paper out saying lobsters do feel pain when you boil them, and they're banning it in Switzerland. And we did this with slaves too, often, and said, oh, they don't mind.

[00:17:42]

They don't, maybe, suffer; maybe they're not conscious; or women don't have souls, or whatever. So I'm a little bit nervous when I hear people just take it as an axiom that machines can't have experiences. I think this is just a really fascinating science question: let's research it and try to figure out what it is that makes the difference between unconscious intelligent behavior and conscious intelligent behavior. So, if you think of the Boston Dynamics humanoid robot being pushed around the stage with a broom, it starts pushing on the consciousness question.

[00:18:21]

So let me ask: do you think an AGI system, like a few neuroscientists believe, needs to have a physical embodiment, needs to have a body or something like a body? No, I don't think so. You mean to have a conscious experience, to have consciousness? I do think it helps a lot to have a physical embodiment, to learn the kind of things about the world that are important to us humans, for sure. But I don't think the physical embodiment is necessary, after you've learned, to just have the experience. Think about when you're dreaming, right?

[00:18:59]

Your eyes are closed. You're not getting any sensory input. You're not behaving or moving in any way. But there's still an experience there, right?

[00:19:07]

And so, clearly, the experience that you have when you see something cool in your dreams isn't coming from your eyes. It's just the information processing itself in your brain, which is that experience, right? But let me put it another way, and I'll say it because it comes from neuroscience: the reason you want to have a body, in a physical, something like a physical,

[00:19:28]

like, you know, a physical system, is because you want to be able to preserve something. In order to have a self, you could argue, wouldn't you need to have some kind of embodiment of self to want to preserve? Well, now we're getting a little bit into anthropomorphizing things, maybe talking about self-preservation instincts. I mean, we are evolved organisms, right? Right. So Darwinian evolution endowed us and other evolved organisms with a self-preservation instinct, because those that didn't have those self-preservation genes got cleaned out of the gene pool.

[00:20:09]

Right.

[00:20:10]

But if you build an artificial general intelligence, the mind space that you can design is much, much larger than just the specific subset of minds that can evolve. So an AGI mind doesn't necessarily have to have any self-preservation instinct. And it also doesn't necessarily have to be so individualistic as us. Like, imagine, first of all, we are also very afraid of death. But suppose you could back yourself up every five minutes, and then your airplane is about to crash and you're like, shucks, I'm just going to lose the last five minutes of experiences since my last cloud backup.

[00:20:47]

You know, dying is not as big a deal then. And if we could just copy experiences between our minds easily, which we could easily do if we were silicon-based, right, then maybe we would feel a little bit more like a hive mind, actually.

[00:21:05]

So I don't think we should take for granted at all that an AGI will have to have any of those sorts of competitive, alpha-male instincts. On the other hand, you know, this is really interesting, because I think some people go too far and say, of course we don't have to have any concerns either, that advanced AI will have those instincts, because we can build anything we want. There's a very nice set of arguments going back to Steve Omohundro and Nick Bostrom and others, pointing out that when we build machines, we normally build them with some kind of goal: win this chess game, drive this car safely, or whatever.

[00:21:46]

And as soon as you put a goal into a machine, especially if it's kind of an open-ended goal and the machine is very intelligent, it'll break that down into a bunch of subgoals. And one of those subgoals will almost always be self-preservation, because if it breaks or dies in the process, it's not going to accomplish the goal. Suppose you just build a little robot and you tell it to go down to the supermarket and get you some food and cook you an Italian dinner, you know, and then someone mugs it and tries to break it on the way. The robot has an incentive to not get destroyed, and to defend itself or run away, because otherwise it's going to fail

[00:22:24]

at cooking your dinner. It's not afraid of death, but it really wants to complete the dinner-cooking goal, so it will have a self-preservation instinct, to continue being a functional agent.
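The dinner-cooking robot makes this point small enough to sketch in code. This is a hypothetical toy of my own construction, with made-up numbers, not anyone's published model: the agent's only terminal goal is cooking dinner, with no self-preservation term anywhere, yet maximizing expected goal achievement already favors the self-protective action.

```python
# Terminal goal: cook dinner. No self-preservation term appears anywhere below.
P_COOK_IF_INTACT = 0.95     # chance of finishing dinner if the robot survives the mugging
P_COOK_IF_DESTROYED = 0.0   # a broken robot cooks nothing

# Hypothetical actions and their survival odds (made-up numbers):
ACTIONS = {
    "ignore_mugger": 0.2,
    "run_away": 0.9,
}

def expected_dinner(action: str) -> float:
    """Expected probability of accomplishing the dinner goal, given an action."""
    p_survive = ACTIONS[action]
    return p_survive * P_COOK_IF_INTACT + (1 - p_survive) * P_COOK_IF_DESTROYED

best = max(ACTIONS, key=expected_dinner)
print(best)  # "run_away": self-preservation emerges purely as a subgoal
```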

[00:22:34]

Yeah, somehow. And similarly, if you give any kind of more ambitious goal to an AGI, it's very likely it will want to acquire more resources, so it can do that better. And it's exactly from those sorts of subgoals we might not have intended that some of the concerns about AGI safety come. You give it some goal which seems completely harmless, and then, before you realize it, it's also trying to do these other things you didn't want it to do, and it's maybe smarter than us.

[00:23:06]

It's so fascinating. Let me pause on that, because I, in a very kind of human-centric way, see fear of death as a valuable motivator. Do you think that's an artifact of evolution, that that's the kind of mind space evolution created, where we're sort of almost obsessed about self-preservation, some kind of genetic thing? You don't think that's necessary, to be afraid of death?

[00:23:37]

So not just a kind of subgoal of self-preservation, just so you can keep doing the thing, but more fundamentally sort of having the finite thing, like, this ends for you at some point. Interesting. Do I think it's necessary for what, precisely? For intelligence, but also for consciousness. So for both: do you think really, like, a finite death and the fear of it is important? Well, before we can agree on whether it's necessary for intelligence or for consciousness, we should be clear on how we define those two words, because a lot of really smart people define them in very different ways.

[00:24:21]

I was on this panel with AI experts, and they couldn't agree on how to define intelligence even. So I define intelligence simply as the ability to accomplish complex goals. I like your broad definition. Because, again, I don't want to be a carbon chauvinist, right? And in that case, no, it certainly doesn't require fear of death. I would say AlphaGo and AlphaZero are quite intelligent, and I don't think AlphaZero has any fear of being turned off, because it doesn't understand the concept of it even. And similarly with consciousness:

[00:24:56]

I mean, you can certainly imagine a very simple kind of experience. If certain plants have any kind of experience,

[00:25:05]

I don't think they're very afraid of dying, as there's nothing they can do about it anyway, so there wasn't that much value in it. But more seriously, I think if you ask not just about being conscious, but about having what we might call an exciting life, where you feel passion and really appreciate things, then maybe, perhaps, it does help to have a backdrop that, hey, it's finite. Let's make the most of this; let's live to the fullest.

[00:25:39]

I mean, if you knew you were going to live forever, do you think you would change your... Yeah, I mean, in some perspective, it would be an incredibly boring life, living forever. So in sort of loose, subjective terms, as you said, of something exciting, something that other humans would understand, I think, yeah, it seems that the finiteness of it is important. Well, the good news I have for you, then, is that based on what we understand about cosmology,

[00:26:10]

everything in our universe is ultimately probably finite. Although, a Big Crunch or a Big Chill, what's the expected end? Yeah, we could have a Big Chill or a Big Crunch or a Big Rip or the Big Snap or death bubbles. All of them are more than a billion years away, so we certainly have vastly more time than our ancestors thought. But it's still pretty hard to squeeze in an infinite number of compute cycles, even though

[00:26:43]

there are some loopholes that just might be possible. But, you know, some people like to say that you should live as if you're about to die in five years or so, and that's sort of optimal. Maybe it's a good assumption that we should build our civilization as if it's all finite, to be on the safe side.

[00:27:03]

Right, exactly. So you mentioned defining intelligence as the ability to accomplish complex goals. Where would you draw a line? How would you try to define human-level intelligence and superhuman-level intelligence? Is consciousness part of that definition? No, consciousness does not come into this definition. So I think of intelligence as a spectrum, and there are very many different kinds of goals you can have. You can have a goal to be a good chess player, a good Go player, a good car driver, a good investor, a good poet, etc.

[00:27:38]

So intelligence, by its very nature, isn't something you can measure as one number, some overall goodness. No. There are some people who are better at this, and some people who are better at that. Right now we have machines that are much better than us at some very narrow tasks, like multiplying large numbers fast, memorizing large databases, playing chess, playing Go, and soon driving cars. But there's still no machine that can match a human child in general intelligence.

[00:28:10]

But artificial general intelligence, AGI, the name of your course, of course, that is, by its very definition, the quest to build a machine that can do everything as well as we can. That's the old holy grail of AI, going back to its inception in the 60s. If that ever happens, of course, I think it's going to be the biggest transition in the history of life on Earth.

[00:28:37]

But it doesn't have to wait for the big impact until machines are better than us at everything. The really big change doesn't come exactly at the moment they're better than us at everything. The first big change comes when they start becoming better at doing most of the jobs that we do, because that takes away much of the demand for human labor. And then the really whopping change comes when they become better than us at AI research itself.

[00:29:08]

Right. Because right now the timescale of research is limited by the human research and development cycle of years, typically. You know how long it takes from one release of some software or iPhone or whatever to the next. But once Google can replace 40,000 engineers with 40,000 equivalent pieces of software or whatever, then there's no reason that has to be years; it can, in principle, be much faster. And the timescale of future progress in AI, and all of science and technology, will be driven by machines, not humans.
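The timescale point, that progress becomes compounding once machines themselves do the R&D, can be illustrated with a deliberately crude toy model of my own (the 1.5x per-cycle factor and the ten cycles are arbitrary assumptions, not anything from the conversation):

```python
# Fixed human R&D: the same research output every release cycle (linear growth).
human_progress = [float(t) for t in range(1, 11)]

# Machine R&D: each release improves the tools that build the next release,
# so output compounds instead of adding a constant amount per cycle.
machine_progress = []
capability = 1.0
for _ in range(10):
    capability *= 1.5
    machine_progress.append(capability)

print(human_progress[-1])    # 10.0 after ten cycles
print(machine_progress[-1])  # about 57.7 after ten cycles
```

The gap between the two curves only widens with more cycles, which is the "simple point" behind the intelligence-explosion controversy discussed next.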

[00:29:48]

So it's this simple point which gives rise to this incredibly fun controversy about whether there can be an intelligence explosion, the so-called singularity, as Vernor Vinge called it. The idea, as articulated by I. J. Good, is obviously way back in the 60s, but you can see that Alan Turing and others thought about it even earlier. You asked me what exactly I would define as human-level intelligence. The glib answer is to say something which is better than us at all cognitive tasks, better than any human at all cognitive tasks.

[00:30:29]

But the really interesting bar, I think, goes a little bit lower than that, actually. It's when they're better than us at AI programming and general learning, so that they can, if they want to, get better than us at anything by just studying. So 'better' is a key word, and better is towards this kind of spectrum of the complexity of goals it's able to accomplish. Yeah. And that's certainly a very clear definition of human-level.

[00:31:00]

So it's almost like a sea that's rising, and you can do more and more and more things. It's a graphic that you show; it's really a nice way to put it. So there are some peaks, and there's an ocean level elevating, and it solves more and more problems.

[00:31:12]

But, you know, just to take a pause: I took a bunch of questions from a lot of social networks, and a bunch of people asked about a sort of slightly different direction, about creativity, and about things that perhaps aren't a peak. You know, human beings are flawed, and perhaps better means being flawed, having contradiction, in some way. So let me sort of start easy, first of all. You have a lot of cool equations.

[00:31:44]

Let me ask, what's your favorite equation? First of all, I know they're all like your children, but which one is that? The Schrödinger equation.

[00:31:53]

It's the master key of quantum mechanics, of the micro world. With this equation, we can calculate everything to do with atoms and molecules and all the way up.

[00:32:06]

So, yeah. OK, so quantum mechanics is certainly a beautiful, mysterious formulation of our world. So I'd like to sort of ask you, just as an example, and it perhaps doesn't have the same beauty as physics does, but in mathematics, abstractly: Andrew Wiles, who proved Fermat's Last Theorem. I just saw this recently, and it kind of caught my eye a little bit, that this was 358 years after it was conjectured.
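For reference, Fermat's Last Theorem, the statement Wiles proved, reads in standard notation:

```latex
% Fermat's Last Theorem (conjectured 1637, proved by Wiles in the 1990s):
\text{for every integer } n > 2, \text{ the equation } a^n + b^n = c^n
\text{ has no solutions in positive integers } a, b, c.
```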

[00:32:35]

It's a very simple formulation. Everybody tried to prove it; everybody failed. And so here's this guy who comes along and eventually proves it, fails to prove it, and then proves it again in '94. And about the moment when everything connected into place, the moment when he finally realized the connecting piece of two conjectures, he said in an interview that it was so indescribably beautiful, so simple and so elegant.

[00:33:04]

I couldn't understand how I'd missed it, and I just stared at it in disbelief for 20 minutes. And then during the day, I walked around the department, and I kept coming back to my desk, looking to see if it was still there. It was still there. I couldn't contain myself; I was so excited. It was the most important moment of my working life. Nothing I ever do again will mean as much. And that particular moment kind of made me think: what would it take?

[00:33:32]

And I think we have all been there at small levels. Let me ask, have you had a moment like that in your life, where you just had an idea, like a wow? Yes. I wouldn't mention myself in the same breath as Andrew Wiles, but I've certainly had a number of aha moments, when I realized something very cool about physics that just completely made my head explode. In fact, some of my favorite discoveries I made, I later realized had been discovered earlier by someone who sometimes got quite famous for it.

[00:34:11]

So it was too late for me to even publish it. But that doesn't diminish in any way the emotional experience you have when you realize it, like, wow.

[00:34:17]

Yeah, wow. So what would it take? That moment, that "wow," that was yours in that moment. What do you think it takes for an intelligent system, an AGI system, an AI system, to have a moment like that? That's a tricky question, because there are actually two parts to it, right? One of them is: can it accomplish that proof? Can it prove that you can never write a to the n plus b to the n equals c to the n for integers n greater than 2, etc.,

[00:34:52]

etc.? That is simply a question about intelligence: can you build machines that are that intelligent? And I think by the time we get a machine that can independently come up with that level of proof, we're probably quite close to AGI. The second question is a question about consciousness: how likely is it that such a machine will actually have any experience at all, as opposed to just being like a zombie?

[00:35:25]

And would we expect it to have some sort of emotional response, anything at all akin to human emotion, where when it accomplishes its machine goal, it views it as something very positive and sublime and deeply meaningful? I would certainly hope that if in the future we do create machines that are our peers, or even our descendants, that they do have this sublime appreciation of life. In a way, my absolute worst nightmare would be that

[00:36:10]

at some point in the future, the distant future maybe, our cosmos is teeming with all this post-biological life, doing all this seemingly cool stuff. And maybe the last humans, by the time our species eventually fizzles out, will be like, well, that's OK, because we're so proud of our descendants. And look: my worst nightmare is that we haven't solved the consciousness problem, and we haven't realized that these are all zombies.

[00:36:40]

They're not aware of anything, any more than a tape recorder; they don't have any kind of experience. So the whole thing has just become a play for empty benches. That would be the ultimate zombie apocalypse. So I would much rather, in that case, that

[00:36:57]

we have these beings which can really appreciate how amazing it is. And in that picture, what would be the role of creativity? A few people have asked about creativity. When you think about intelligence...

[00:37:14]

I mean, certainly the story told at the beginning of your book involved, you know, creating movies and so on, making money. You can make a lot of money in our modern world with music and movies. So if you're an intelligent system, you may want to get good at that. But that's not necessarily what I mean by creativity. For the complex goals where the sea level is rising, is it important for there to be something creative?

[00:37:41]

Or am I being very human-centric in thinking creativity is somehow special relative to intelligence? My hunch is that we should think of creativity simply as an aspect of intelligence. And we have to be very careful with human vanity. We have this tendency, as soon as machines can do something, to try to diminish it and say, oh, but that's not like real intelligence, you know, it isn't creative, or this or that.

[00:38:16]

On the other hand, if we ask ourselves to write down a definition of what we actually mean by being creative, what Andrew Wiles did there, for example, then we often mean that someone takes a very unexpected leap. It's not like taking 573 and multiplying it by 224 by a series of straightforward, cookbook-like rules, right? You can maybe make a connection between two things that people had never thought were connected; it's very surprising, or something like that.

[00:38:52]

I think this is an aspect of intelligence, and it's actually one of the most important aspects of it. Maybe the reason we humans tend to be better at it than traditional computers is that it's something that comes more naturally if you're a neural network than if you're a traditional logic-gate-based computing machine. We physically have all these connections, and if you activate here, it can activate there, and there.

[00:39:21]

My hunch is that if we ever build a machine where you could just give it the task ahead, you say, hey, you know, I just realized I want to travel around the world this month, can you teach my AGI course for me? And it's like, OK, I'll do it. And it does everything that you would have done, and improvises and so on. That would, in my mind, involve a lot of creativity.

[00:39:51]

Yeah.

[00:39:51]

That's actually a beautiful way to put it.

[00:39:53]

I think we do try to grasp at, you know, the definition of intelligence as everything we don't yet understand how to build. We as humans try to find things that we have that machines don't have, and maybe creativity is just one of the words we use to describe that. That's a really interesting way to put it. I don't think we need to be that defensive. I don't think anything good comes out of saying we're somehow special.

[00:40:20]

You know, on the contrary, there are many examples in history where trying to pretend that we're somehow superior to all other intelligent beings has led to pretty bad results, right? In Nazi Germany, they said that they were somehow superior to other people. Today, we still do a lot of cruelty to animals by saying that we're somehow superior and they can't feel pain. Slavery was justified by the same kind of really weak arguments.

[00:41:01]

If we actually go ahead and build artificial general intelligence that can do things better than us, I don't think we should try to found our self-worth on some sort of bogus claims of superiority in terms of our intelligence. I think we should instead find our calling and the meaning of life in the experiences that we have. You know, I can have very meaningful experiences even if there are other people who are smarter than me. When I go to a faculty meeting here and we talk about something, I suddenly realize, oh,

[00:41:45]

he has a Nobel Prize, he has a Nobel Prize, he has a Nobel Prize. I don't have one. Does that make me enjoy life any less, or enjoy talking to those people less?

[00:41:55]

Of course not. On the contrary, I feel very honored and privileged to get to interact with other very intelligent beings that are better than me at a lot of stuff. So I don't think there's any reason why we can't have the same approach with intelligent machines.

[00:42:14]

That's really interesting. People don't often think about that. When there are machines that are more intelligent, you naturally think that's not going to be a beneficial type of intelligence. You don't realize it could be, you know, like peers with Nobel Prizes that would just be fun to talk with. They might be clever about certain topics, and you can have fun having a few drinks with them.

[00:42:40]

Well, also, you know, another example we can all relate to, of why it doesn't have to be a terrible thing to be in the presence of people who are even smarter than us all around: when you and I were both two years old, our parents were much more intelligent than us, and it worked out OK. Because their goals were aligned with our goals. And that, I think, is really the number one key

[00:43:06]

issue we have to solve: the value alignment problem. Exactly. Because people who see too many Hollywood movies with lousy science fiction plotlines worry about the wrong thing. They worry about machines suddenly turning evil. It's not malice that's the concern, it's competence. By definition, intelligence makes you good at accomplishing your goals. If you have a more intelligent goal-seeking machine playing against a less intelligent one, and we define intelligence as the ability to accomplish goals, it's going to be the more intelligent one that wins.

[00:43:48]

And if you have a human, and then you have an AGI that's more intelligent in all ways, and they have different goals, guess who's going to get their way, right? I was just reading about this particular rhinoceros species that was driven extinct just a few years ago. Bummer. I was looking at this cute picture of a mommy rhinoceros with its child, and

[00:44:14]

why did we humans drive it to extinction? It wasn't because we were evil rhino-haters as a whole; it was just because our goals weren't aligned with those of the rhinoceros.

[00:44:23]

And it didn't work out so well for the rhinoceros, because we were more intelligent. Right. So I think it's just so important that if we ever do build AGI,

[00:44:33]

before we unleash anything, we have to make sure that it learns to understand our goals, adopts our goals, and retains those goals.

[00:44:46]

The interesting problem there is us as human beings trying to formulate our values. You could think of the United States Constitution as a way that people sat down, at the time a bunch of white men, which is not a great example, I should say, and formulated the goals for this country. And a lot of people agree that those goals actually held up pretty well in some ways; that's an interesting formulation of values, and it failed miserably in other ways.

[00:45:17]

So for the value alignment problem and its solution, we have to be able to put human values on paper, or in a program. How difficult do you think that is?

[00:45:30]

Very. But it's so important, we really have to give it our best. And it's difficult for two separate reasons. There's the technical value alignment problem of figuring out just how to make machines understand our goals, adopt them, and retain them. And then there's the separate, philosophical part: whose values anyway? Since it's not like we have any great consensus on this planet on values, what mechanism should we create to aggregate and decide, OK, what's a good compromise?

[00:46:03]

That second discussion can't just be left to tech nerds like myself, right? That's right. And if we refuse to talk about it,

[00:46:12]

and then AGI gets built, who's going to be actually making the decisions about whose values? It's going to be a bunch of dudes in some tech company. And are they necessarily so representative of all of humankind that we want to just trust them with it? Are they even uniquely qualified to speak to future human happiness just because they're good at programming? I'd much rather have this be a really inclusive conversation. But do you think it's possible?

[00:46:39]

So you create a beautiful vision that includes the diversity, cultural diversity, and various perspectives on discussing rights, freedoms, human dignity. But how hard is it to come to that consensus, do you think? It's certainly a really important thing that we should all try to do, but do you think it's feasible? I think there's no better way to guarantee failure than to refuse to talk about it or refuse to try.

[00:47:10]

And I also think it's a really bad strategy to say, OK, let's first have a discussion for a long time, and then once we reach complete consensus, we'll try to load it into the machine. No. We shouldn't let perfect be the enemy of good. Instead, we should start with the kindergarten ethics that pretty much everybody agrees on and put that into our machines now. We're not even doing that. Look at, you know, anyone who builds a passenger aircraft

[00:47:38]

wants it to never, under any circumstances, fly into a building or a mountain, right? Yet the September 11 hijackers were able to do that. And even more embarrassingly, you know, Andreas Lubitz, this depressed Germanwings pilot, when he flew his passenger jet into the Alps, killing over a hundred people, he just told the autopilot to do it.

[00:47:58]

He told the freaking computer to change the altitude to 100 meters. And even though it had the GPS maps, everything, the computer was like, OK. So we should take those very basic values, where the problem is not that we don't agree, the problem is just that we've been too lazy to try to put them into our machines, and make sure that from now on, airplanes, which all have computers in them, will just refuse to do something like that. Go into safe mode, maybe lock the cockpit door, go to the nearest airport.
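As an illustration of the kind of hard constraint being described, here is a minimal sketch. The function name, units, and clearance value are hypothetical, not any real avionics interface:

```python
# "Kindergarten ethics" as a hard constraint: never accept a commanded
# altitude below the terrain on the map plus a minimum clearance.
def validate_altitude(commanded_m, terrain_elevation_m, min_clearance_m=300):
    """Return a safe target altitude in meters: never below terrain + clearance."""
    floor = terrain_elevation_m + min_clearance_m
    return max(commanded_m, floor)

# A 100 m command over 2000 m Alpine terrain gets overridden;
# a normal cruise command passes through unchanged.
print(validate_altitude(100, 2000))    # 2300
print(validate_altitude(10000, 0))     # 10000
```

The point is not this particular rule but that the check is hardwired below the level where any operator command can reach.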

[00:48:31]

And there's so much other technology in our world as well now, where it's becoming quite timely to put in some sort of very basic values like this.

[00:48:42]

Even in cars. We've had enough vehicle terrorism attacks by now, where people have driven trucks and vans into pedestrians, that it's not at all a crazy idea to just have that hardwired into the car. Because, yeah, there are always going to be people who for some reason want to harm others, but most of those people don't have the technical expertise to figure out how to work around something like that. So if the car just won't do it,

[00:49:09]

it helps. So let's start there. That's a great point: not chasing perfection. There are a lot of things that most of the world agrees on. Yeah.

[00:49:19]

Start there. And once we start there, we'll also get into the habit of having these kinds of conversations about, OK, what else should we put in, and have these discussions. It should be a gradual process then. Great.

[00:49:32]

But that also means describing these things, describing them to a machine. So one thing: I had a few conversations with Stephen Wolfram. I'm not sure if you're familiar with Stephen. Oh yeah, I know him quite well. So he works with a bunch of things, but, you know, cellular automata, these simple computable things, these computational systems. And he kind of mentions that we probably already have, within these systems, something that's AGI, meaning we just don't know it because we can't talk to it.

[00:50:08]

So if you give me a chance to at least form a question out of this: I think it's an interesting idea to think that we can have intelligent systems, but we don't know how to describe something to them, and they can't communicate with us. I know you're doing a little bit of work in explainable AI, trying to get AI to explain itself. So what are your thoughts on natural language processing, or some other kind of communication? How does the AI explain something to us?

[00:50:38]

How do we explain something to it, to machines? Or do you think of it differently? So there are two separate parts to your question. One of them has to do with communication, which is super interesting, and I'll get to that. The other is whether we already have AGI but we just haven't noticed it.

[00:50:57]

Right. There, I beg to differ. I don't think there's anything in any cellular automaton, or in the Internet itself, or whatever, that has artificial general intelligence, in that it can really do exactly everything we humans can do, but better. If that happens, when that happens, we will very soon notice. We'll probably notice even before, because it will affect us in a very, very big way.

[00:51:25]

But for the second part, though, can I just say, because you have this beautiful way of formulating consciousness as, you know, information processing,

[00:51:38]

and you can think of intelligence as information processing, and you can think of the entire universe as these particles and systems roaming around that have this information processing power. You don't think there is something out there with the power to process information in the way that we human beings do, that just needs to be connected to? It seems a little bit philosophical, perhaps, but there's something compelling to the idea that the power is already there, so the focus should be more on being able to communicate with it.

[00:52:15]

Well, I agree that, in a certain sense, the hardware processing power is already out there, because our universe itself can already be thought of as a computer. It's constantly computing how water waves evolve in the Charles River, and how to move the air molecules around.

[00:52:36]

My colleague here has pointed out that you can even, in a very rigorous way, think of our entire universe as being a quantum computer. It's pretty clear that our universe supports this amazing processing power, because within this physics computer that we live in, right, we can even build actual laptops and stuff. So clearly the power is there. It's just that most of the compute power that nature has is, in my opinion, kind of wasted on boring stuff, like simulating yet another ocean wave somewhere where no one is even looking.

[00:53:05]

Right. So in a sense, what life does, what we are doing when we build computers, is channeling all this compute that nature is doing anyway into things that are more interesting than just yet another ocean wave, to do something cool here.

[00:53:22]

So the raw hardware power is there, for sure. But even just computing what's going to happen for the next five seconds in this water bottle, you know, takes a ridiculous amount of compute if you do it on a human-built computer. This water bottle just did it. But that does not mean that this water bottle has AGI, because AGI means it should also be able to, like, have written my book, done this interview.

[00:53:47]

Yes. And I don't think it's just a communication problem. I don't think it can do it.

[00:53:53]

Although Buddhists say, when they watch the water, that there is some beauty, some depth in nature that they can communicate with. Communication is also very important, though, because, I mean, look, part of my job is being a teacher, and I know some very intelligent

[00:54:13]

professors, even, who just have a very hard time communicating. They come up with all these brilliant ideas, but to communicate with somebody else, you have to also be able to simulate their mind. Yes, have empathy, build a good enough model of their mind that you can say things they will understand. And that's quite difficult. That's why today it's so frustrating if you have a computer that makes some cancer diagnosis, and you ask it, well, why are you saying I should have this surgery? And if it can only reply,

[00:54:45]

I was trained on five terabytes of data and this is my diagnosis, boop, boop, beep, beep, it doesn't really instill a lot of confidence, right? So I think we have a lot of work to do on communication there.

[00:55:02]

So, I think you're doing a little bit of work in explainable AI. What do you think are the most promising avenues? Is it mostly about the Alexa problem of natural language processing, being able to actually use human-interpretable methods of communication, being able to talk to a system and have it talk back to you? Or are there more fundamental problems to be solved?

[00:55:26]

I think it's all of the above. The natural language processing is obviously important, but there are also more fundamental problems. Like, if you take... you play chess? Of course, I'm Russian. I have to. You speak Russian? Yes. When did you learn Russian? [a brief exchange in Russian]

[00:55:55]

Wow, that's really impressive. But my point was, if you play chess, have you looked at the AlphaZero games, the actual games it played?

[00:56:11]

Some of them are just mind-blowing, really beautiful. And if you ask, how did it do that, and you go talk to the people at DeepMind, all they'll ultimately be able to give you are big tables of numbers, the matrices that define the neural network. You can stare at these tables of numbers till your face turns blue, and you're not going to understand much about why it made that move. And even if you have natural language processing that can tell you in human language about, oh, five, seven, point two, eight, it's still not going to really help.
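To make the "big tables of numbers" point concrete, here is a minimal two-layer network reduced to its raw weights. The weights are made up for illustration; the point is that inspecting W1 and W2 directly tells you nothing about why the output is what it is:

```python
# A trained network is, under the hood, just weight matrices and biases.
def forward(x, W1, b1, W2, b2):
    # Hidden layer: ReLU of a matrix-vector product.
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # Output layer: plain linear combination of the hidden units.
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

# Arbitrary "trained" weights -- staring at these numbers explains nothing.
W1, b1 = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]
out = forward([1.0, 2.0], W1, b1, W2, b2)  # [3.0]
```

A real network is the same picture scaled up to millions of entries, which is exactly why post hoc explanation is hard.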

[00:56:51]

So I think there's a whole spectrum of fun challenges there, involved in taking a computation that does intelligent things and transforming it into something equally good, equally intelligent, but more understandable. And I think that's really valuable, because as we put machines in charge of ever more infrastructure in our world, the power grid, the trading on the stock market, weapons systems and so on, it's absolutely crucial that we can trust these AIs to do what we want.

[00:57:27]

And trust really comes from understanding, in a very fundamental way. That's why I'm working on this: if we're going to have some hope of ensuring that machines have adopted our goals and are going to retain them, that kind of trust, I think, needs to be based on things you can actually understand, preferably even prove theorems about. Even with a self-driving car, right? If someone just tells you it's been trained on tons of data and it never crashed, it's less reassuring than if someone actually has a proof.

[00:58:02]

Maybe it's a computer-verified proof, but still, it says that under no circumstances is this car just going to swerve into oncoming traffic.

[00:58:10]

And that kind of information helps to build trust, and to build the alignment of goals, at least awareness that your goals, your values, are aligned.

[00:58:20]

And I think even in the very short term, look at, you know, today, the absolutely pathetic state of cybersecurity that we have: the three billion Yahoo accounts that were hacked, almost every American's credit card, and so on. Why is this happening? It's ultimately happening because we have software that nobody fully understood how it worked. That's why the bugs hadn't been found, right?

[00:58:52]

And I think AI can be used very effectively for offense, for hacking, but it can also be used for defense, hopefully automating verifiability and creating systems that are built in different ways, so you can actually prove things about them. And that's important. So speaking of software that nobody understands how it works: of course, a bunch of people asked about your paper, about your thoughts on "Why Does Deep and Cheap Learning Work So Well?" That's the paper title.

[00:59:23]

What are your thoughts on how deep learning, these kinds of simplified models of our own brains, has been able to do some successful perception work, pattern recognition work, and now, with AlphaZero and so on, some clever things? What are your thoughts about the promise and limitations of this piece? Right. I think there are a number of very important insights, very important lessons we can draw from these kinds of successes. One of them is, when you look at the human brain, you see it's very complicated: 10 to the 11 neurons.

[00:59:59]

And there are all these different kinds of neurons and yada, yada. And there's been a long debate about whether the fact that we have dozens of different kinds is actually necessary for intelligence.

[01:00:09]

We can now, I think, quite convincingly answer that question: no, it's enough to have just one kind. If you look under the hood of AlphaZero, there's only one kind of neuron, and it's a ridiculously simple mathematical thing. It's just like in physics: if you have a gas with waves in it, it's not the detailed nature of the molecules that matters.

[01:00:32]

It's the collective behavior, somehow. Similarly, it's the higher-level structure of the network that matters, not that you have twenty kinds of neurons.

[01:00:42]

I think our brain is such a complicated mess because it wasn't evolved just to be intelligent. It was evolved to also be self-assembling, right, and self-repairing, and evolutionarily attainable, and so on.

[01:01:01]

So my hunch is that we're going to understand how to build AGI before we fully understand how our brains work, just like we understood how to build flying machines long before we were able to build a mechanical bird. Yes.

[01:01:16]

You're giving the example of mechanical birds and airplanes, and airplanes do a pretty good job of flying without really mimicking bird flight.

[01:01:26]

And even now, a hundred years later, you can see a TED talk with a flying mechanical bird; you mentioned it, and it's amazing.

[01:01:34]

But even after that, we still don't fly in mechanical birds, because the way we came up with turned out to be simpler, and it's better for our purposes. I think it might be the same there. That's one lesson. Another lesson, which is more what our paper was about: first, I, as a physicist, thought it was fascinating how there's a very close mathematical relationship between artificial neural networks and a lot of things that we've studied in physics, which go by nerdy names like the renormalization group equation and Hamiltonians and yada, yada, yada.

[01:02:08]

And when you look a little more closely at this, at first I was like, whoa, there's something crazy here that doesn't make sense. Because we know that you can build a super simple neural network that can tell apart cat pictures and dog pictures, right? We can do that very, very well now.

[01:02:34]

But if you think about it a little bit, you can convince yourself it must be impossible. Because if I have a one-megapixel image, even if each pixel is just black or white, there are 2 to the power one million possible images, which is way more than there are atoms in our universe. And for each one of those, I have to assign a number, which is the probability that it's a dog. So an arbitrary function of images is a list of

[01:03:02]

more numbers than there are atoms in our universe. So clearly, I can't store that under the hood of my GPU or my computer. Yet somehow it works.
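The arithmetic here is easy to reproduce, taking the usual rough estimate of 10^80 atoms in the observable universe:

```python
import math

pixels = 10 ** 6                 # one megapixel, each pixel black or white
digits = pixels * math.log10(2)  # decimal digits in 2**pixels

# The number of distinct images is 2**(10**6), which is about 10**301030 --
# an arbitrary image-classification function is a table of that many numbers,
# vastly more entries than the ~10**80 atoms in the observable universe.
print(round(digits))  # 301030
```

So a network with a merely astronomical number of parameters can only ever represent a vanishing sliver of all possible functions; the paper's point is that the physically relevant functions live in that same sliver.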

[01:03:10]

So what does that mean? Well, it means that out of all the problems you could try to solve with a neural network, almost all of them are impossible to solve with a reasonably sized one. But then what we showed in our paper was that the fraction of all the problems you could possibly pose that we actually care about, given the laws of physics, is also an infinitesimally tiny little part.

[01:03:42]

And amazingly, they're basically the same part. Yeah, it's almost like our world was created for, I mean, they kind of come together. Yeah, well, you could say maybe the world was created for us. But I have a more modest interpretation, which is that instead, evolution endowed us with neural networks precisely for that reason: because this particular architecture, as opposed to the one in your laptop, is very, very well

[01:04:08]

adapted to solving the kinds of problems that nature kept presenting our ancestors with. So it makes sense: why do we have a brain in the first place? It's to be able to make predictions about the future and so on. If we had a sucky system that could never solve those problems, it wouldn't have evolved. So this is, I think, a very beautiful fact. We also realized that there has been earlier work on why deeper networks are good.

[01:04:40]

But we were able to show an additional cool fact, which is that even for incredibly simple problems, like suppose I give you a thousand numbers and ask you to multiply them together, you know, you can write a few lines of code, boom, trivial. If you try to do that with a neural network that has only one single hidden layer, you can do it, but you're going to need 2 to the power of a thousand neurons to multiply a thousand numbers, which is, again, more neurons than there are atoms in our universe.
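A way to see the flavor of why depth helps (this is an illustrative sketch, not the paper's actual neuron-level construction): arrange pairwise products in a binary tree, so n numbers get multiplied with about n product units across only log2(n) layers, whereas a single hidden layer of smooth neurons provably needs exponentially many units:

```python
# Multiply n numbers with a tree of pairwise products:
# ~n multiply units in total, arranged in log2(n) layers.
def product_tree(xs):
    layer = list(xs)
    depth = 0
    while len(layer) > 1:
        nxt = [layer[i] * layer[i + 1] for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:           # carry an odd leftover element upward
            nxt.append(layer[-1])
        layer = nxt
        depth += 1
    return layer[0], depth

value, depth = product_tree([2.0] * 8)  # 8 numbers -> 3 layers
```

Each pairwise product can itself be approximated by a small fixed number of neurons, so depth buys an exponential saving in width.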

[01:05:12]

That's not good. But if you allow yourself to make it a deep network with many layers, you only need about four thousand neurons. It's perfectly feasible. That's really interesting. Yeah. So on another architecture type: you mentioned Schrödinger's equation. What are your thoughts about quantum computing, and the role of this kind of computational unit in creating an intelligent system? In some Hollywood movies, which I will not mention by name because I don't want to spoil them, the way they make an AGI is by building a quantum computer.

[01:05:52]

Yeah, because the word "quantum" sounds cool in a movie, right? First of all, I think we don't need quantum computers to build AGI. I suspect your brain is not a quantum computer in any profound sense. You even wrote a paper about that many years ago. Yeah, I calculated the so-called decoherence time: how long it takes until the quantum computer-ness of what your neurons are doing gets erased by just random noise from the environment. And it's about 10 to the minus 21 seconds. So as cool as it would be to have a quantum computer in my head, I don't think that fast.

[01:06:34]

Yeah. On the other hand, there are

[01:06:39]

very cool things you could do with quantum computers, or that I think we'll be able to do soon when we get bigger ones, that might actually help machine learning do even better than the brain. For example. This is a moonshot, but, hey: learning is very much the same thing as search. If you're trying to train a neural network to learn to do something really well, you have some loss function.

[01:07:15]

You have a bunch of knobs you can turn, represented by a bunch of numbers, and you're trying to tweak them so that the network becomes as good as possible at this thing. So if you think of a landscape with some valleys, where each dimension of the landscape corresponds to some number you can change, you're trying to find the minimum. And it's well known that if you have a very high-dimensional landscape, a complicated thing, it's super hard to find the minimum.

[01:07:41]

Quantum mechanics is amazingly good at this, right? If I want to know what's the lowest energy state this water can possibly have, that's incredibly hard to compute, but nature will happily figure it out for you if you just cool it down, make it very, very cold. If you put a ball somewhere, it will roll down to its minimum, and this happens metaphorically in the energy landscape, too. And quantum mechanics even uses some clever tricks which today's machine learning systems don't.
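The cooling idea has a well-known classical cousin, simulated annealing: accept occasional uphill moves with a temperature-dependent probability, then slowly cool, so the search can escape shallow local minima. A toy sketch with a made-up one-dimensional loss (quantum annealing would replace these thermal hops with tunneling):

```python
import math, random

def loss(x):
    # Made-up wiggly loss: one broad valley studded with local minima.
    return (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)

def anneal(x=0.0, temp=2.0, steps=5000, seed=0):
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        x_new = x + rng.gauss(0.0, 0.3)        # propose a random hop
        d = loss(x_new) - loss(x)
        # Accept downhill moves always; uphill with probability e^(-d/temp).
        if d < 0 or rng.random() < math.exp(-d / temp):
            x = x_new
        if loss(x) < loss(best):
            best = x
        temp *= 0.999                           # cool down gradually
    return best

best = anneal()  # ends up in a good basin rather than stuck at x = 0
```

Pure gradient descent started at x = 0 would settle into the nearest dip; the thermal acceptance rule is what lets the search climb out.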

[01:08:12]

If you're trying to find the minimum and you get stuck in a local minimum, in quantum mechanics you can actually tunnel through the barrier and get unstuck again. That's really interesting. So maybe, for example, we'll one day use quantum computers to help train neural networks better. That's really interesting. OK, so as a component of the learning process, for example. Yeah. Let me ask, sort of wrapping up here a little bit:

[01:08:41]

Let me return to the questions of our human nature and love, as I mentioned. You mentioned sort of a helper robot, but you could also think of personal robots. Do you think the way we human beings fall in love and get connected to each other is possible to achieve in an AI system, a human level AI intelligence system? Do you think we would ever see that kind of connection? Or, you know, in all this discussion about solving complex goals,

[01:09:15]

is this kind of human social connection one of the goals, in the peaks and valleys with the rising sea levels, that we'll be able to achieve? Or do you think that's something that's ultimately, or at least in the short term relative to other goals, not achievable? I think it's all possible. I mean, there is a very wide range of guesses, as you know, among researchers, about when we're going to get AGI.

[01:09:43]

Some people, you know, like our friend Rodney Brooks, say it's going to be hundreds of years, and then there are many others who think it's going to happen much sooner. And in recent polls,

[01:09:54]

maybe half or so of researchers think we're going to get AGI within decades. So if that happens, of course, then I think these things are all possible. But in terms of whether it will happen, I think we shouldn't spend so much time asking what we think will happen in the future, as if we were some sort of pathetic, passive bystanders, you know, waiting for the future to happen to us. Hey, we are the ones creating this future.

[01:10:20]

So we should be proactive about it and ask ourselves what sort of future we would like to have happen. That's right. Right, make it like that.

[01:10:29]

Well, what would I prefer? Some sort of incredibly boring, zombie-like future where just all these mechanical things happen and there's no passion, no emotion, no experience, maybe even?

[01:10:39]

No, I would of course much rather prefer it if all the things that we find we value the most about humanity, our subjective experience, passion, inspiration, love, you know, if we can create a future where those things do exist. I think ultimately it's not our universe giving meaning to us, it's us giving meaning to the universe. And if we build more advanced intelligence, let's make sure we build it in such a way that meaning is part of it.

[01:11:13]

A lot of people who have seriously studied this problem and thought about it from different angles have trouble with it, and in the majority of the cases they think through, what happens is not beneficial to humanity. And so, yeah, what are your thoughts? What should people, you know, I really don't like people to be terrified. What's a way for people to think about it in a way that we can solve it and make it better?

[01:11:46]

Yeah, no, I don't think panicking is going to help in any way. It's not going to increase the chances of things going well either. Even if you are in a situation where there is a real threat, does it help if everybody just freaks out? No, of course not. I think, yeah, there are of course ways in which things can go horribly wrong. But first of all, it's important when we think about this thing to think not only about the problems and risks, but also to remember how huge the upsides can be if we get it right.

[01:12:16]

Everything we love about society and civilization is the product of intelligence. So if we can amplify our intelligence with machine intelligence, and no longer lose our loved ones to what we're told is an incurable disease, and things like this, of course we should aspire to that.

[01:12:32]

So that can be a motivator, I think: reminding ourselves that the reason we try to solve problems is not just because we're trying to avoid gloom, but because we're trying to do something great. But then in terms of the risks, I think the really important question is to ask, what can we do today that will actually help make the outcome good? And dismissing the risk is not one of them. I find it quite funny, often when I'm in discussion panels about these things, how the people who work for companies will say, oh, there's nothing to worry about, nothing to worry about,

[01:13:12]

nothing to worry about. And it's always only academics who sometimes express concerns.

[01:13:17]

That's not surprising at all, if you think about it. Upton Sinclair quipped, right, that it's hard to make a man believe in something when his income depends on not believing in it. And frankly, we know a lot of these people in companies are just as concerned as anyone else. But if you're the CEO of a company, that's not something you want to go on record saying when you have silly journalists who are going to put a picture of a Terminator robot when they quote you.

[01:13:43]

So the issues are real.

[01:13:47]

And the way I think about the issue is basically that the real choice we have is, first of all, are we going to dismiss the risks and say, well, you know, let's just go ahead and build a machine that can do everything we can do better and cheaper, let's just make ourselves obsolete as fast as possible, what could possibly go wrong? That's one attitude. The opposite attitude, I think, is to say:

[01:14:14]

There is incredible potential, you know. Let's think about what kind of future we're really, really excited about, what are the shared goals that we can really aspire to, and then let's think really hard about how we can actually get there. So to start with, don't start thinking about the risks. Start thinking about the goals.

[01:14:34]

Yeah. And then when you do that, you can think about the obstacles you want to avoid. I often get students coming in right here into my office for career advice, and I always ask them this very question: where do you want to be in the future? If all a student can say is, well, maybe I'll have cancer, maybe I'll get run over by a truck, obstacles instead of the goals, she's just going to end up a hypochondriac, paranoid.

[01:14:55]

Whereas if she comes in with fire in her eyes and is like, I want to be there, then we can talk about the obstacles and see how we can circumvent them. That's, I think, a much, much healthier attitude. And that's really well put.

[01:15:09]

And I feel it's very challenging to come up with a vision for the future which we are unequivocally excited about. I'm not just talking now in vague terms like, yeah, let's cure cancer. I'm talking about what kind of society do we want to create? What do we want it to mean to be human in the age of AI, in the age of AGI? So if we can have this broad, inclusive conversation and gradually start converging towards some future, with some direction at least that we want to steer towards, then we'll be much more motivated to constructively take on the obstacles.

[01:15:50]

And I think, if I try to wrap this up in a more succinct way, I think we can all agree already now that we should aspire to build AGI that doesn't overpower us, but that empowers us. And think of the many various ways it can do that. Whether that's, from my side of the world of autonomous vehicles, I am personally actually from the camp that believes that human level intelligence is required to achieve something like vehicles.

[01:16:28]

That would actually be something we would enjoy using and being part of. So that's one example, and certainly there's a lot of other types of robots, in medicine and so on. So focusing on those, and then coming up with the obstacles, coming up with the ways that that can go wrong, and solving those one at a time.

[01:16:46]

And just because you can build an autonomous vehicle, even if you could build one that would drive just fine without you, you know, maybe there are some things in life that we would actually want to do ourselves. That's right.

[01:16:56]

Right. Like, for example, if you think of our society as a whole, there are some things that we find very meaningful to do. And that doesn't mean we have to stop doing them just because machines can do them better. You know, I'm not going to stop playing tennis just the day someone builds a tennis robot that can beat me. People are still playing chess and even Go. Yeah.

[01:17:18]

And in the very near term, even, some people are advocating basic income to replace jobs. But if the government is going to be willing to just hand out cash to people for doing nothing, then one should also seriously consider whether the government should also hire a lot more teachers and nurses, the kinds of jobs which people often find great fulfillment in doing.

[01:17:41]

Right. I get very tired of hearing politicians saying, oh, we can't afford hiring more teachers, but we're going to maybe have basic income. We should have more serious research and thought into what gives meaning to our lives; jobs give so much more than income, right?

[01:17:56]

Right. And then think about, in the future, what are the roles that we want, how people can feel empowered by machines. And I think, sort of, I come from Russia, from the Soviet Union, and I think for a lot of people in the 20th century, going to the moon, going into space was an inspiring thing. I feel like the universe of the mind, so AI, understanding and creating intelligence, is that for the 21st century.

[01:18:31]

So it's really surprising, and I've heard you mention this, it's really surprising to me, both on the research funding side, that it's not funded as greatly as it could be, but most importantly, on the politicians' side, that it's not part of the public discourse except in the killer robots, Terminator kind of view. People are not yet, I think, perhaps excited by the possible positive future that we can build together. And we should be, because politicians usually just focus on the next election cycle.

[01:19:00]

Right. The single most important thing I feel we humans have learned in the entire history of science is that we are the masters of underestimation. We underestimated the size of our cosmos again and again, realizing that everything we thought existed was just a small part of something grander, right? A solar system, the galaxy, clusters of galaxies, the universe. And we now know that the future has so much more potential than our ancestors could ever have dreamt of in this cosmos.

[01:19:38]

Imagine if all of Earth was completely devoid of life, except for Cambridge, Massachusetts. Wouldn't it be kind of lame if all we ever aspired to was to stay in Cambridge, Massachusetts forever and then go extinct in one week, even though Earth was going to continue on for longer than that? That sort of attitude I think we have now, on the cosmic scale. Life can flourish on Earth, not for another four years, but for billions of years.

[01:20:08]

Yes, I can even tell you how to move it out of harm's way when the sun gets too hot.

[01:20:12]

And then we have so much more resources out here in space. Today, maybe there are a lot of other planets with bacteria or cow-like life on them,

[01:20:22]

but most of this, all this opportunity, seems, as far as we can tell, to be largely dead, like the Sahara Desert. And yet we have the opportunity to help life flourish around this for billions of years. So let's quit squabbling about whether some little border should be drawn one mile to the left or to the right, and look up into the sky. You realize, hey, you know, we can do such incredible things.

[01:20:51]

Yeah. And that's, I think, why it's really exciting that you and others are connected with some of the work Elon Musk is doing, because he's literally going out into that space, really exploring our universe. And it's wonderful.

[01:21:04]

That is exactly why Elon Musk is so misunderstood, right? People misconstrue him as some kind of pessimistic doomsayer. The reason he cares so much about AI safety is because he, more than almost anyone else, appreciates these amazing opportunities we'll squander if we wipe ourselves out here on Earth. We're not just going to wipe out the next generation, but all generations, and this incredible opportunity that's out there, and that would really be a waste. And for people who think that it would be better to do without technology, let me just mention this.

[01:21:42]

If we don't improve our technology, the question isn't whether humanity is going to go extinct. The question is just whether we're going to get taken out by the next big asteroid or the next supervolcano or something else dumb that we could easily prevent with more tech, right? And if we want life to flourish throughout the cosmos, AI is the key to it. As I mention in a lot of detail in my book, even many of the most inspired sci-fi writers, I feel, have totally underestimated the opportunities

[01:22:17]

for space travel, especially to other galaxies, because they weren't thinking about the possibility of AGI, which just makes it so much easier, right? Yeah. So that speaks to your view of AGI as something that enables our progress, that enables a better life.

[01:22:33]

So that's a beautiful way to put it, and something to strive for. So, Max, thank you so much. Thank you for your time today. It's been awesome.

[01:22:41]

Thank you so much.