[00:00:00]

The following is a conversation with Scott Aaronson, his second time on the podcast. He is a professor at UT Austin, director of its Quantum Information Center, and previously a professor at MIT. Last time we talked about quantum computing; this time we talk about computation, complexity, consciousness, and theories of everything. I'm recording this intro, as you may be able to tell, in a very strange room in the middle of the night. I'm not really sure how I got here or how I'm going to get out, but...

[00:00:39]

a Hunter S. Thompson saying, I think, applies to today, and to the last few days, and actually the last couple of weeks.

[00:00:48]

Life should not be a journey to the grave with the intention of arriving safely in a pretty and well-preserved body, but rather to skid in broadside, in a cloud of smoke, thoroughly used up, totally worn out, and loudly proclaiming, "Wow, what a ride!" So I figured, whatever I'm up to here (and yes, lots of wine is involved), I'm going to have to improvise, hence this recording. OK, quick mention of each sponsor, followed by some thoughts related to the episode.

[00:01:24]

First sponsor is SimpliSafe, a home security company I use to monitor and protect my apartment, though, of course, I'm always prepared with a fallback plan, as a man in this world must always be. Second sponsor is Eight Sleep, a mattress that cools itself, measures heart rate variability, has an app, and has given me yet another reason to look forward to sleep, including the all-important power nap. Third sponsor is ExpressVPN, the VPN I've used for many years to protect my privacy on the Internet.

[00:02:02]

Finally, the fourth sponsor is BetterHelp, online therapy, for when you want to face your demons with a licensed professional, not just by doing David Goggins-like physical challenges, like I seem to do on occasion.

[00:02:16]

Please check out these sponsors in the description to get a discount and to support the podcast. As a side note, let me say that this is the second time I recorded a conversation outdoors. The first one was with Stephen Wolfram, when it was actually sunny out. In this case, it was raining, which is why I found a covered outdoor patio.

[00:02:38]

But I learned a valuable lesson, which is that raindrops can be quite loud on the hard metal surface of a patio cover. I did my best with the audio. I hope it still sounds OK to you. I'm learning, always improving. In fact, Scott says if you always win, then you're probably doing something wrong. To be honest, I get pretty upset with myself when I fail, small or big. But I've learned that this feeling is priceless.

[00:03:08]

It can be fuel when channeled into concrete plans of how to improve. So if you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or connect with me on Twitter @lexfridman. As usual, I'll do a few minutes of ads now and no ads in the middle. I try to make these interesting, but I give you timestamps, so if you skip, please still check out the sponsors by clicking on the links in the description.

[00:03:37]

It's the very best way to support this podcast. This show is sponsored by SimpliSafe, a home security company. There are no tricky, overpriced contracts. The customer service is amazing. They told me to say the following line, and I shall oblige, even though it's ridiculous: while there are a lot of options out there, there's only one no-brainer, SimpliSafe. I personally have no clue about the actual options out there, but this one happens to be great.

[00:04:07]

It's simple. No contracts. Fifteen bucks a month. Easy setup. I have it already set up in my apartment, but of course I'm also prepared for intruders. One of my favorite movies is Leon, or The Professional, which is a movie about a hit man with a minimalist life that resembles my own. Anyway, go to simplisafe.com/lex to get a free HD camera. Again, that's simplisafe.com/lex. They're a new sponsor and this is a trial run.

[00:04:39]

So you, my dear listeners, know what to do. The show is also sponsored by Eight Sleep and its Pod Pro mattress, which you can check out at eightsleep.com/lex to get $200 off. It controls temperature with an app. It's packed with sensors, and it can cool down to as low as 55 degrees on each side of the bed separately. Anecdotally, it's been a game changer for me. I have air conditioners and heat, but even then it's hard to get the temperature right.

[00:05:10]

Like when I'm fasting, I'm usually cold. When I've eaten and I'm stressed, I'm usually hot. And Eight Sleep allows me to adjust to that for perfect sleep. A cool bed surface with a warm blanket after a long day of focused work is an amazing feeling. It can track a bunch of metrics like heart rate variability, but cooling alone is honestly worth the money. Anyway, go to eightsleep.com/lex to get $200 off. The show is also sponsored by ExpressVPN.

[00:05:37]

ExpressVPN provides privacy in your cyber life. Without a VPN, your Internet service provider can see every site you've ever visited, even when you're browsing in incognito mode, and even if you clear your history. In the United States, they can legally sell your data to ad companies. ExpressVPN prevents them from being able to do all that. I've used it for many years on Windows, Linux, and Android, but it's available everywhere else, too. It's fast and easy to use. Go to expressvpn.com/lexpod to get an extra three months free on a one-year package. That's expressvpn.com/lexpod.

[00:06:18]

Finally, the show is sponsored by BetterHelp, spelled H-E-L-P, help. They figure out what you need and match you with a licensed professional therapist in under 48 hours. I chat with the person on there and enjoy it. Of course, I also regularly talk to David Goggins these days, who is definitely not a licensed professional therapist, but he does help me face his and my demons and become comfortable to exist in their presence. Everyone is different, but for me, I think suffering is essential for creation.

[00:06:54]

But you can suffer beautifully, in a way that doesn't destroy you. Therapy can help in whatever form that therapy takes, and BetterHelp, I think, is an option worth trying. They're easy, private, affordable, and available worldwide. You can communicate by text any time and schedule weekly audio and video sessions. Check it out at betterhelp.com/lex to get 10% off. That's betterhelp.com/lex. And now, here's my conversation with Scott Aaronson.

[00:07:46]

Let's start with the most absurd question. But I've read you write some fascinating stuff about it, so let's go there: are we living in a simulation? What difference does it make? I mean, I'm serious. What difference? Because if we are living in a simulation, it raises the question: how real does something within the simulation have to be for it to be sufficiently immersive for us humans?

[00:08:10]

But, I mean, even in principle, how could we ever know if we were in one, right? A perfect simulation, by definition, is something that's indistinguishable from the real thing. Well, we didn't say anything about perfect. It could be... No, no, that's right. Well, if it was an imperfect simulation, if we could hack it, you know, find a bug in it, then that would be one thing, right? If this was like The Matrix and there was a way for me to, you know, do flying kung fu moves or something by hacking the simulation, well, then, you know, we would have to cross that bridge when we came to it, wouldn't we?

[00:08:41]

Right. I mean, at that point, you know, it's hard to see the difference between that and just what people would ordinarily refer to as a world with miracles. What about from a different perspective, thinking about the universe as a computation, like a program running on a computer? That's kind of a neighboring concept. It is. It is an interesting and reasonably well-defined question to ask: is the world computable? Does the world satisfy what we would call, let's say...

[00:09:11]

Yes, the Church-Turing thesis. Yeah. That is, you know, could we take any physical system and simulate it to any desired precision by a Turing machine, you know, given the appropriate input data? Right. And so far, I think the indications are pretty strong that our world does seem to satisfy the Church-Turing thesis, at least. If it doesn't, then we haven't yet discovered why not. But now, does that mean that our universe is a simulation?

[00:09:43]

Well, you know, that word seems to suggest that there is some other, larger universe in which it is running. Right. And the problem there is that if the simulation is perfect, then we're never going to be able to get any direct evidence about that other universe. You know, we will only be able to see the effects of the computation that is running in this universe. Well, let's imagine an analogy. Let's imagine a PC, a personal computer.

[00:10:13]

Is it possible, with the advent of artificial intelligence, for the computer to look outside of itself, to see, to understand its creator? Hmm. I mean, that's a simple analogy, but is that ridiculous? Well, well, well...

[00:10:28]

I mean, with the computers that we actually have, I mean, first of all, we all know that humans have done an imperfect job of, you know, enforcing the abstraction boundaries of computers. Right. Like, you may try to confine some program to a playpen, but, you know, as soon as there's one memory allocation error in the C program, then the program has gotten out of that playpen and it can do whatever it wants. All right.

[00:10:57]

This is how most hacks work, you know, viruses and worms and exploits. And, you know, you would have to imagine that an AI would be able to discover something like that. Now, you know, of course, if we could actually discover some exploit of reality itself, then, you know, in some sense we wouldn't have to philosophize about this.

[00:11:22]

Right. This would no longer be a metaphysical conversation. Yeah, but that's the question: what would that hack look like?

[00:11:30]

Yeah, well, I have no idea. I mean, Peter Shor, you know, a very famous person in quantum computing, of course, has joked that maybe the reason why we haven't yet integrated general relativity and quantum mechanics is that the part of the universe that depends on both of them was actually left unspecified. And if we ever tried to do an experiment involving the singularity of a black hole or something like that, then, you know, the universe would just generate an overflow error or something.

[00:12:05]

Yeah, we would just crash the universe. Now, you know, the universe has seemed to hold up pretty well for, you know, 14 billion years. Right. So, you know, my Occam's razor kind of guess has to be that, you know, it will continue to hold up. You know, the fact that we don't know the laws of physics governing some phenomenon is not a strong sign that probing that phenomenon is going to crash the universe.

[00:12:35]

All right.

[00:12:36]

But, you know, of course, I could be wrong. But do you think, on the physics side of things... You know, there have recently been a few folks, Eric Weinstein and Stephen Wolfram, who came out with theories of everything. I think there's a history of physicists dreaming about and working on the unification of all the laws of physics. Do you think it's possible that once we understand more physics, not necessarily the unification of the laws, but just understand physics more deeply at the fundamental level...

[00:13:06]

...we'll be able to start, and, you know, part of this is humorous, but looking to see if there are any bugs in the universe that could be exploited for, you know, traveling, not necessarily at the speed of light, but just traveling faster than our current spaceships can travel, all that kind of stuff.

[00:13:26]

Well, I mean, to travel faster than our current spaceships can travel, you wouldn't need to find any bug in the universe. Right. The known laws of physics, you know, let us go much faster, up to the speed of light. Right. And, you know, when people want to go faster than the speed of light, well, we actually know something about what that would entail. Namely that, you know, according to relativity, that seems to entail communication backwards in time.

[00:13:52]

OK, so then you have to worry about closed timelike curves and all of that stuff. So, you know, in some sense, we sort of know the price that you have to pay for these things. Right. But we don't have the physics for that. That's right. That's right. We can't, you know, say that they're impossible, but we know that sort of a lot else in physics breaks. Right. So now, regarding Eric Weinstein and Stephen Wolfram, like, I wouldn't say that either of them has a theory of everything.

[00:14:22]

I would say that they have ideas that they hope, you know, could someday lead to a theory of everything. Is that a worthy pursuit?

[00:14:29]

Well, I mean, certainly. Let's say, by theory of everything, you know, we don't literally mean a theory of cats and of baseball, but we just mean it in the more limited sense of everything: a fundamental theory of physics, right, of all of the fundamental interactions of physics.

[00:14:48]

Of course, such a theory, even after we had it, you know, would leave the entire question of all the emergent behavior, right, you know, to be explored. So it's only everything for a specific definition of everything. OK, but in that sense, I would say, of course that's worth pursuing. I mean, that is the entire program of fundamental physics, right? All of my friends who do quantum gravity, who do string theory, who do anything like that, that is what's motivating them.

[00:15:18]

Yeah, it's funny though... I mean, Eric Weinstein talks about this. I don't know much about the physics world, but I know about my world.

[00:15:25]

And it is a little bit taboo to talk about AGI, for example, on the AI side. So, really, to talk about the big dream of the community, I would say. Because it seems so far away, it's almost taboo to bring it up, because, you know, dreaming about creating a truly superhuman-level intelligence is seen as really far out there for people, because we're not even close. And it feels like the same thing is true for the physics community.

[00:15:58]

I mean, Stephen Hawking certainly talked constantly about a theory of everything. Right. You know, I mean, people use those terms who were, you know, some of the most respected people in the whole world of physics. Right. But I mean, I think that the distinction that I would make is that people might react badly if you use the term in a way that suggests that you, you know, thinking about it for five minutes, have come up with some major new insight about it.

[00:16:29]

Yeah, right.

[00:16:30]

It's difficult. Stephen Hawking is not a great example, because I think you can do whatever the heck you want when you get to that level.

[00:16:40]

And I certainly see that with senior faculty, you know, at that point... That's one of the nice things about getting older: you stop giving a damn. But the community as a whole, they tend to roll their eyes very quickly at stuff that's outside the mainstream.

[00:16:57]

Well, let me put it this way. I mean, if you asked, you know, Ed Witten, let's say, who, you know, you might consider the leader of the string community, and thus, you know, very, very mainstream in a certain sense... he would have no hesitation in saying, you know, of course we're looking for a, you know, unified description of nature, of, you know, general relativity, of quantum mechanics, of all the fundamental interactions of nature, right? Now...

[00:17:26]

You know, whether people would call that a theory of everything, whether they would use that term, that might vary. You know, Lenny Susskind would definitely have no problem telling you that, you know, that's what we want. Right.

[00:17:38]

For me, who loves human beings and psychology, it's kind of ridiculous to say that a theory that unifies the laws of physics gets you to understand everything. I would say you're not even close to understanding everything.

[00:17:52]

Yeah, right. Well, yeah, I mean, the word "everything" is a little ambiguous here, right? Because, you know, then people will get into debates about, you know, reductionism versus emergentism and blah, blah, blah. And so, in not wanting to say "theory of everything," people might just be trying to short-circuit that debate and say, you know, look, yes, we want a fundamental theory of, you know, the particles and interactions of nature.

[00:18:18]

Let me bring up the next topic that people don't want to mention, although they're getting more comfortable with it: consciousness. You mentioned... you have a talk on consciousness that I watched five minutes of, but my Internet connection was really bad. Was this my talk about, you know, refuting the integrated information theory? Yes. This is a particular account of consciousness that, yeah, I think one can just show doesn't work. It's much harder to say what does work.

[00:18:42]

Yeah, yeah. Let me ask... maybe it would be nice if you could comment on... you also talk about, like, the semi-hard problem of consciousness, the almost-hard problem, the kind-of-hard problem... The pretty-hard problem. "Pretty hard," I think, is what I call it, maybe.

[00:18:54]

Can you talk about that? The idea of their approach to modeling consciousness, and why you don't find it convincing. What is it, first of all?

[00:19:06]

OK, well, so what I called the pretty-hard problem of consciousness... this is my term, although many other people have said something equivalent to this.

[00:19:14]

OK, but it's just, you know, the problem of giving an account of just which physical systems are conscious and which are not. Or, you know, if there are degrees of consciousness, then quantifying how conscious a given system is. Oh, so that's the "pretty hard"... Yeah, that's what I mean. That's it. I'm adopting it. I love it. It has a good ring to it. And so, you know, the infamous hard problem of consciousness is to explain how something like consciousness could arise at all, you know, in a material universe.

[00:19:49]

Right. Or, you know, why does it ever feel like anything to experience anything? Right.

[00:19:55]

And, you know, so I'm trying to distinguish from that problem. Right. And say, you know, no, OK, I would merely settle for an account that could say, you know, is a fetus conscious? You know, if so, at which trimester? You know, is a dog conscious? What about a frog? Right. Or even, as a precondition, suppose you take it as given that both of these things are conscious.

[00:20:17]

Tell me which is more conscious.

[00:20:19]

Yeah, for example. Yes, yeah. Yeah, I mean, if consciousness is some multidimensional vector, well, just tell me in which respects these things are conscious and in which respects they aren't. Right. And, you know, have some principled way to do it, where you're not, you know, carving out exceptions for things that you like or don't like, but could somehow take a description of an arbitrary physical system, and then, just based on the physical properties of that system, or the informational properties, or how it's connected, or something like that, just in principle calculate, you know, its degree of consciousness.

[00:20:56]

Right. I mean, this would be the kind of thing that we would need, you know, if we wanted to address questions like, you know, what does it take for a machine to be conscious? Right. Or when should we regard AIs as being conscious?

[00:21:13]

So now, this IIT, this integrated information theory, which has been put forward by Giulio Tononi and a bunch of his collaborators over the last decade or two... this is noteworthy, I guess, as a direct attempt to address the pretty-hard problem.

[00:21:38]

Right. And they give a criterion that's just based on how a system is connected. So it's up to you to sort of abstract a system, like a brain or a microchip, as a collection of components that are connected to each other by some pattern of connections, you know, and to specify how the components can influence each other, you know, like where the inputs go, how they affect the outputs. But then, once you've specified that, they give this quantity that they call phi, you know, the Greek letter Φ.

[00:22:12]

And the definition of phi has actually changed over time. It changes from one paper to another. But all of the variations involve something about what we in computer science would call graph expansion. So basically, what this means is that, in order to get a large value of phi, it should not be possible to take your system and partition it into two components that are only weakly connected to each other. OK, so whenever we take our system and sort of try to...

[00:22:45]

...cut it up into two, then there should be lots and lots of connections going between the two components. OK, well, I understand what that means. Do they formalize how to construct such a graph or data structure or whatever? Or is this one of the criticisms... I've heard you kind of say that a lot of the very interesting specifics are usually communicated through natural language, like through words, so the details aren't always there.
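The graph-expansion idea described above can be illustrated with a small, hedged sketch. This is not IIT's actual Φ (which is defined quite differently and is far more involved); it is just a brute-force toy measure of how well connected a system is: the minimum number of edges crossing any bipartition of the nodes. The example graphs are invented for illustration.

```python
from itertools import combinations

def min_cut_over_bipartitions(n, edges):
    """Brute-force the minimum number of edges crossing any split of
    nodes {0..n-1} into two nonempty sides. A high value means every
    bipartition is crossed by many edges (high 'expansion'); a value
    of 1 means the system splits into two weakly connected halves."""
    best = None
    # Enumerate one side of every bipartition (k = size of that side).
    for k in range(1, n // 2 + 1):
        for side in combinations(range(n), k):
            side = set(side)
            cut = sum(1 for u, v in edges if (u in side) != (v in side))
            best = cut if best is None else min(best, cut)
    return best

# A 4-cycle: every bipartition is crossed by at least 2 edges.
print(min_cut_over_bipartitions(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2

# Two triangles joined by one edge: weakly connected halves, cut = 1.
two_triangles = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(min_cut_over_bipartitions(6, two_triangles))  # 1
```

In this toy sense, a system only scores high if, no matter how you cut it in two, lots of connections run between the two parts, which is the intuition behind the expansion requirement Scott describes.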

[00:23:14]

Well, it's true. I mean, they have nothing even resembling a derivation of this phi.

[00:23:21]

OK, so what they do is they state a whole bunch of postulates, you know, axioms that they think consciousness should satisfy. And then there's some verbal discussion, and then at some point phi appears. Right, right. And this was the first thing that really made the hair stand up on my neck, to be honest. Because they are acting as if there is a derivation. They're acting as if, you know, you're supposed to think that this is a derivation, and there's nothing even remotely resembling a derivation.

[00:23:50]

They just pull the phi out of a hat completely.

[00:23:53]

Is one of the key criticisms, to you, that details are missing, or... That's not even the key criticism. That's just a side point, OK? The core of it is that, you know, they want to say that a system is more conscious the larger its value of phi. And I think that that is obvious nonsense, OK, as soon as you think about it for, like, a minute. As soon as you think about it in terms of: could I construct a system that had an enormous value of phi, like, you know, even larger than the brain has, but that is just implementing an error-correcting code, you know, doing nothing that we would associate with intelligence or consciousness or any of it?

[00:24:34]

The answer is yes, it is easy to do that. Right. And so I wrote a blog post just making this point: that, yeah, it's easy to do that. Now...

[00:24:42]

You know, Tononi's response to that was actually kind of incredible, right? I mean, I admired it in a way, because instead of disputing any of it, he just bit the bullet, in a sense. You know, it was one of the most audacious bullet-bitings I've ever seen in my career. OK, he said: OK, then, fine. You know, this system that just applies this error-correcting code, it's conscious. You know, and if it has a much larger value of phi than you or me, it's much more conscious.

[00:25:14]

You know, you just have to accept what the theory says, because, you know, science is not about confirming our intuitions, it's about challenging them. And, you know, this is what my theory predicts, that this thing is conscious, you know, or super-duper conscious, and how are you going to prove me wrong? So the way I would argue...

[00:25:33]

Yeah, I guess, against your blog post, I would say: yes, sure, you're right in general, but for naturally arising systems, developed through the process of evolution on Earth, this rule of larger phi being associated with more consciousness is correct. Yeah.

[00:25:50]

But that's not what he said at all. Right. Right. Because he wants this to be completely general. Right. So it can apply even to computers. Yeah. I mean, the whole interest of the theory is, you know, the hope that it could be completely general, apply to aliens, to computers, to animals, to coma patients, to any of it.

[00:26:10]

Right. And so he just said, well, you know, Scott is relying on his intuition, but, you know, I'm relying on this theory. And, you know, to me, it was almost like, you know, are we being serious here?

[00:26:24]

Like, you know... OK, yes.

[00:26:29]

In science, we try to learn highly non-intuitive things. But what we do is we first test the theory on cases where we already know the answer. Right. Like, if someone had a new theory of temperature, right, then, you know, maybe we could check that it says that boiling water is hotter than ice, and then if it says that the sun is hotter than anything you've ever experienced, then maybe we trust that extrapolation. Right. But this theory, like, if, you know, it's now saying that a gigantic, regular grid of exclusive-OR gates can be way more conscious than, you know, a person, or than any animal can be, you know, even if it is, you know, so uniform that it might as well just be a blank wall...

[00:27:20]

Right. And so now the point is, if this theory is getting wrong the question "is a blank wall more conscious than a person?", then I would say: what is there for it to get right?

[00:27:31]

So your sense is a blank wall is not more conscious than a human being.

[00:27:37]

Yeah, I mean, you could say that I am taking that as one of my axioms. I'm saying that if a theory of

[00:27:45]

consciousness is getting that wrong, then whatever it is talking about at that point, I'm not going to call it consciousness.

[00:27:55]

I'm going to use a different word. You have to use a different word. I mean, it's possible, just like with intelligence, that us humans conveniently define these very difficult-to-understand concepts in a very human-centric way. Like, the Turing test really seems to define intelligence as a thing that's human-like. Right.

[00:28:12]

But I would say that with any concept, you know, we first need to define it.

[00:28:21]

Right. And a definition is only a good definition if it matches what we thought we were talking about, you know, prior to having a definition. Right. Yeah. And I would say that, you know, phi as a definition of consciousness fails that test. That is my argument.

[00:28:38]

So, OK, so let's take further steps. You mentioned that the universe might be a Turing machine. So, like, it might be a computation. Or simulatable by one, anyway. Simulatable by one.

[00:28:49]

So what's your sense about consciousness? Do you think consciousness is computation? That we don't need to go to any place outside of the computable universe to, you know, understand consciousness, to build consciousness, to measure consciousness, all those kinds of things?

[00:29:09]

I don't know. These are what have been called the "vertiginous" questions. Right. They're the questions where, like, you know, you get a feeling of vertigo in thinking about them, right? I mean, I certainly feel like I am conscious in a way that is not reducible to computation. But why should you believe me? Right. I mean, and if you said the same to me, then why should I believe you? But as a computer scientist, yeah, I feel like a computer could achieve human-level intelligence. But that's actually a feeling and a hope.

[00:29:46]

That's not a scientific belief. It's just that we've built up enough intuition, the same kind of intuition you used in your blog, you know. That's what scientists do. I mean, some of it is the scientific method, but some of it is just damn good intuition. I don't have a good intuition about consciousness. Yeah.

[00:30:02]

I'm not sure that anyone does, or has in the, you know, twenty-five hundred years that these things have been discussed.

[00:30:09]

But do you think we will... Like, I got a chance to attend, and I can't wait to hear your opinion on this, the Neuralink event. And one of the dreams there is to, you know, basically push neuroscience forward. And the hope with neuroscience is that we can inspect the machinery from which all this fun stuff emerges, and see: are we going to notice something special, some special sauce, from which something like consciousness or cognition emerges?

[00:30:38]

Yeah, well, it's clear that we've learned an enormous amount about neuroscience. We've learned an enormous amount about computation, you know, about machine learning, about, you know, how to get it to work. We've learned an enormous amount about the underpinnings of the physical world. And, you know, from one point of view, that's like an enormous distance that we've traveled along the road to understanding consciousness. From another point of view, you know, the distance still to be traveled, you know, maybe seems no shorter than it was at the beginning.

[00:31:11]

Yeah. Right. So it's very hard to say. I mean, you know, with questions like these... like, instead of trying to have a theory of consciousness, there is sort of a problem where it feels like it's not just that we don't know how to make progress, it's that it's hard to specify what could even count as progress. Right. Because no matter what scientific theory someone proposed, someone else could come along and say, well, you've just talked about the mechanism.

[00:31:36]

You haven't said anything about what breathes fire into the mechanism, what really makes it the case that there's something that it's like to be it. Right. And that seems like an objection that you could always raise, yes, no matter how much someone elucidated the details of how the brain works. OK, let's talk about the Turing test, then. I have this intuition... call me crazy, but... that for a machine to pass the Turing test in full, whatever the spirit of it is, and we can talk about how to formulate the perfect Turing test, the machine has to be conscious.

[00:32:11]

Or we at least have to say it is. I have a very low bar for what consciousness is. I tend to think that the emulation of consciousness is as good as consciousness, so that consciousness is just a dance, a social shortcut, like a useful tool. But I tend to connect intelligence and consciousness together. So, by that... maybe just to ask: what role does consciousness play, do you think, in passing the Turing test?

[00:32:43]

Well, look, I mean, it's almost tautologically true that if we had a machine that passed the Turing test, then it would be emulating consciousness. Right. So if your position is that, you know, emulation of consciousness is consciousness, then, you know, by definition, any machine that passed the Turing test would be conscious. But, I mean, you could say that that is just a way to rephrase the original question, you know: is an emulation of consciousness necessarily conscious?

[00:33:12]

Right. And, you know, here I'm not saying anything new that hasn't been debated ad nauseam in the literature. OK? But, you know, you could imagine some very hard cases. Like, imagine a machine that passed the Turing test, but it did so just by an enormous, cosmological-sized lookup table that just cached every possible conversation that could be had. The old Chinese room. Well, yeah. Yeah, but... I mean, the Chinese room actually would be doing some computation, at least in Searle's version, right? Here...

[00:33:45]

I'm just talking about a table lookup. Now, it's true that for conversations of a reasonable length, this lookup table would be so enormous that it wouldn't even fit in the observable universe. OK, but supposing that you could build a big enough lookup table and then just pass the Turing test by looking up what the person said: are you going to regard that as conscious?
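As a toy illustration of the thought experiment (the table entries here are invented for illustration), a lookup-table "chatbot" would be nothing but an index into precomputed conversations, keyed by the entire history so far:

```python
# A toy sketch of Aaronson's lookup-table thought experiment: the
# "bot" does no reasoning at all, it only indexes a table keyed by
# the full conversation history. A real table covering every possible
# conversation would not fit in the observable universe.
TABLE = {
    ("Hello!",): "Hi there, how are you?",
    ("Hello!", "Hi there, how are you?",
     "Is Everest bigger than a shoebox?"): "Yes, vastly bigger.",
}

def lookup_bot(history):
    """Return the cached reply for this exact history, if any."""
    return TABLE.get(tuple(history), "I don't have that conversation cached.")

print(lookup_bot(["Hello!"]))  # Hi there, how are you?
```

The point of the example is that every reply is retrieved, never computed, which is what makes the case philosophically hard.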

[00:34:08]

OK, let me try to make this formal, and then you can shoot it down. I think that the emulation of something is that something if there exists in that system a black box that's full of mystery. Full of mystery to whom?

[00:34:27]

To human inspectors.

[00:34:29]

So does that mean that consciousness is relative to the observer? Could something be conscious for us, but not conscious for an alien that understood better what was happening inside the black box?

[00:34:39]

Yes. So if inside the black box is just a lookup table, the alien that saw that would say this is not conscious. To us, another way to phrase the black box is layers of abstraction, which make it very difficult to see the actual underlying functionality of the system. We observe just the abstraction, and so it looks like magic to us. But once we understand the machinery, it stops being magic. And so that's a prerequisite: you can't know how it works, at least some part of it, because then there has to be, in our human mind, an entry point for the magic.

[00:35:17]

Hmm.

[00:35:18]

So that's the formal definition of the system. Yeah. Well, look, I explored a view in this essay I wrote called The Ghost in the Quantum Turing Machine seven years ago that is related to that, except that I did not want to have consciousness be relative to the observer. Because I think that if consciousness means anything, it is something that is experienced by the entity that is conscious. Right.

[00:35:43]

You know, like, I don't need you to tell me that I'm conscious, and nor do you need me to tell you that you are. So basically what I explored there is: are there aspects of a system like a brain that just could not be predicted, even with arbitrarily advanced future technologies, because of chaos combined with quantum mechanical uncertainty, things like that?

[00:36:13]

I mean, that actually could be a property of the brain, and if true, it would distinguish it in a principled way, at least from any currently existing computer. Not from any possible computer, but from... Yeah, yeah.

[00:36:27]

This is a thought experiment. So if I gave you information that, for the entire history of your life, basically explained away free will with a lookup table, said that this was all predetermined, that everything you experience has already been predetermined, wouldn't that take away your consciousness? Wouldn't your experience of the world change for you in some fundamental way?

[00:36:53]

Well, let me put it this way. You could do it like in a Greek tragedy, where you would just write down a prediction for what I'm going to do, and maybe you put the prediction in a sealed box and open it later, and you show that you knew everything I was going to do. Or, of course, the even creepier version would be: you tell me the prediction, and then I try to falsify it, and my very effort to falsify it makes it come true.

[00:37:20]

All right, let's even forget that version, as convenient as it is for fiction writers. Let's just do the version where you put the prediction into a sealed envelope, OK? If you could reliably predict everything that I was going to do, I'm not sure that that would destroy my sense of being conscious, but I think it really would destroy my sense of having free will, much more than any philosophical conversation could possibly do.

[00:37:50]

Right.

[00:37:51]

And so I think it becomes extremely interesting to ask: could such predictions be done, even in principle? Is it consistent with the laws of physics to make such predictions? To get enough data about someone that you could actually generate such predictions, without having to kill them in the process, to slice their brain up into little slivers or something? I mean, it's theoretically possible, right?

[00:38:14]

Well, I don't know. It might be possible, but only at the cost of destroying the person. It depends on how low you have to go in the substrate. If there were a nice digital abstraction layer, if you could think of each neuron as a kind of transistor computing a digital function, then you could imagine some nanorobots that would go in, scan the state of each neuron, and then make a good enough copy.

[00:38:45]

Right. But if it was actually important to get down to the molecular or the atomic level, then eventually you would be up against quantum effects; you would be up against the unclonability of quantum states. So I think it's a question of how good the replica has to be before you're going to count it as actually a copy of you, or as being able to predict your actions. That's a totally open question, right?

[00:39:11]

Yeah. And especially once we say that, well, look, maybe there's no way to make a deterministic prediction, because we know that there is noise buffeting the brain around, presumably even quantum mechanical uncertainty affecting the sodium-ion channels, for example, whether they open or they close. There's no reason why, over a certain timescale, that shouldn't be amplified, just like we imagine happens with the weather or any other chaotic system.

[00:39:46]

So if that stuff is important, then we would say, well, you're never going to be able to make an accurate enough copy. But now the hard part is: what if someone can make a copy that no one else can tell apart from you? It says the same kinds of things that you would have said. Maybe not exactly the same things, because we agree that there is noise, but it says the same kinds of things.

[00:40:17]

And maybe you alone would say: no, I know that that's not me. It doesn't share my... I haven't felt my consciousness leap over to that other thing; I still feel it localized in this version. Right. Then why should anyone else believe you?

[00:40:32]

What are your thoughts, I'd be curious, since you're the person to ask, on Roger Penrose's work on consciousness? Saying that, you know, with axons and so on, there might be some biological places where quantum mechanics can come into play and through that create consciousness somehow.

[00:40:51]

Yeah, OK, well, I'm familiar with his work, and of course I read Penrose's books as a teenager; they had a huge impact on me. Five or six years ago I had the privilege to actually talk these things over with Penrose at some length, at a conference in Minnesota. And he is an amazing personality. I admire the fact that he was even raising such audacious questions at all. But to answer your question, I think the first thing we need to get clear on is that he is not merely saying that quantum mechanics is relevant to consciousness.

[00:41:27]

Right.

[00:41:28]

That would be tame compared to what he is saying. He is saying that even quantum mechanics is not good enough. Because supposing, for example, that the brain were a quantum computer, that's still a computer. In fact, a quantum computer can be simulated by an ordinary computer; it might merely need exponentially more time in order to do so. Right.

[00:41:52]

So that's simply not good enough for him.

[00:41:54]

OK, so what he wants is for the brain to be a quantum gravitational computer. He wants the brain to be exploiting as-yet-unknown laws of quantum gravity, which would be uncomputable. And that's the key point. Yes, that would be literally uncomputable. And I've asked him to clarify this: uncomputable even if you had an oracle for the halting problem, or as high up as you wants to go in the usual hierarchy of uncomputability. He wants to go beyond all of that.

[00:42:33]

OK, so just to keep count of how many speculations there are, there are probably at least five or six of them. There's, first of all, that there is some quantum gravity theory that would involve this kind of uncomputability. Most people who study quantum gravity would not agree with that. They would say that what little we know about quantum gravity, from the AdS/CFT correspondence, for example, has been very much consistent with the broad idea of nature being computable.

[00:43:06]

Right. But all right.

[00:43:09]

But supposing that he's right about that, what most physicists would say is that whatever new phenomena there are in quantum gravity, they might be relevant at the singularities of black holes, they might be relevant at the Big Bang, but they are plainly not relevant to something like the brain, which is operating at ordinary temperatures, with ordinary chemistry.

[00:43:43]

As for the fundamental physics underlying the brain, they would say that we've known it pretty much completely for four generations now, because quantum field theory lets us sort of parameterize our ignorance. Sean Carroll has made this case in great detail: whatever new effects are coming from quantum gravity are sort of screened off by quantum field theory.

[00:44:09]

Right. And this brings us to the whole idea of effective theories. In the Standard Model of elementary particles, we have a quantum field theory that seems totally adequate for all terrestrial phenomena. The only things that it doesn't explain are, well, first of all, the details of gravity, if you were to probe it at extremes of curvature, like incredibly small distances. It doesn't explain dark matter.

[00:44:44]

It doesn't explain black hole singularities. But these are all very exotic things, very far removed from our life on Earth. So for Penrose to be right, he needs these phenomena to somehow affect the brain.

[00:44:59]

He needs the brain to contain antennae that are sensitive to this as-yet-unknown physics. And then he needs a modification of quantum mechanics. So he needs quantum mechanics to actually be wrong, OK?

[00:45:14]

What he wants is what he calls an objective reduction mechanism, or an objective collapse. This is the idea that once quantum states get large enough, they somehow spontaneously collapse. This is an idea that lots of people have explored; there's something called the GRW proposal that tries to say something along those lines. And these are theories that actually make testable predictions.

[00:45:47]

Which is a nice feature that they have. But the very fact that they're testable may mean that in the coming decades we may well be able to test these theories and show that they're wrong. Right.

[00:46:00]

We may be able to test some of Penrose's ideas, if not his ideas about consciousness, then at least his ideas about an objective collapse of quantum states. And people like Dirk Bouwmeester have actually been working to try to do these experiments. They haven't been able to do it yet, to test Penrose's proposal. But Penrose would need more than just an objective collapse of quantum states, which would already be the biggest development in physics for a century, since quantum mechanics itself.

[00:46:31]

He would need for consciousness to somehow be able to influence the direction of the collapse, so that it wouldn't be completely random, but your dispositions would somehow influence the quantum state to collapse more likely this way or that way.

[00:46:48]

OK. Finally, Penrose says that all of this has to be true because of an argument that he makes based on Gödel's incompleteness theorem.

[00:46:57]

Now, like, I would say, the overwhelming majority of computer scientists and mathematicians who have thought about this, I don't think that Gödel's incompleteness theorem can do what he needs it to do. I don't think that argument is sound, OK? But that is sort of the tower that you have to ascend if you're going to go where Penrose goes. And the way he uses the incompleteness theorem is basically to say that there's important stuff that's not computable, is that right?

[00:47:28]

It's not just that, because everyone agrees that there are problems that are uncomputable. That's a mathematical theorem. But what Penrose wants to say is that, for example, given any formal system for doing math, there will be true statements of arithmetic that that formal system, if it's adequate for math at all, if it's consistent and so on, will not be able to prove. A famous example being the statement that that system itself is consistent.

[00:48:05]

Right. No good formal system can actually prove its own consistency. That can only be done from a stronger formal system, which then can't prove its own consistency, and so on forever. OK, that's Gödel's theorem. But now, why is that relevant to consciousness?

[00:48:25]

Well, the idea that it might have something to do with consciousness is an old one. Gödel himself apparently thought something like that, and John Lucas argued it, I think in the 60s.

[00:48:39]

And Penrose is really just, you know, sort of updating

[00:48:41]

what they and others had said. I mean, the idea that Gödel's theorem could have something to do with consciousness... already in 1950, when Alan Turing wrote his article about the Turing test, he was writing about that as an old and well-known idea, and as a wrong one that he wanted to dispense with.

[00:49:05]

OK, but the basic problem with this idea is that Penrose, and all of his predecessors, want to say that even though this given formal system cannot prove its own consistency, we as humans, sort of looking at it from the outside, can just somehow see its consistency. Right.

[00:49:28]

And the rejoinder to that, from the very beginning, has been: well, can we really? Yeah, I mean, maybe. Maybe.

[00:49:37]

Maybe Penrose can, but can the rest of us? And I notice that it is perfectly plausible to imagine a computer that would not be limited to working within a single formal system. It could say: I am now going to adopt the hypothesis that my formal system is consistent, and I'm now going to see what can be done from that stronger vantage point, and so on.

[00:50:07]

And it could keep adding new axioms to its system. Totally plausible. Gödel's theorem has absolutely nothing to say against an AI that could repeatedly add new axioms. All it says is that there is no absolute guarantee that when the AI adds new axioms, it will always be right. OK, and that's, of course, the point that Penrose pounces on. But the reply is obvious, and it's one that Alan Turing made seventy years ago.

[00:50:36]

Namely: we don't have an absolute guarantee that we're right when we add a new axiom. We never have, and plausibly we never will.

[00:50:44]

So, on Alan Turing: you took part in a Loebner Prize?

[00:50:48]

No, not really, I didn't. I mean, there was this kind of ridiculous claim that was made almost a decade ago about a chatbot called Eugene Goostman.

[00:51:01]

I guess you didn't participate as a judge in the Loebner Prize, but did you participate as a judge in that? I guess there was an exhibition event or something like that?

[00:51:09]

With Eugene Goostman? That was just me writing a blog post, because some journalist called me to ask about it.

[00:51:17]

Did you have a chat with him? I thought I did chat with Eugene Goostman. I mean, it was available on the web.

[00:51:22]

Oh, interesting. I did. So, yeah, all that happened was that a bunch of journalists started writing breathless articles about the first chatbot that passes the Turing test. And it was this thing called Eugene Goostman that was supposed to simulate a 13-year-old boy. And apparently someone had done some tests where people were, let's say, less than perfect at distinguishing it from a human.

[00:51:51]

And they said, well, if you look at Turing's paper and you look at the percentages that he talked about, then it seems like we're past that threshold. Right.

[00:52:01]

And I had a sort of different way to look at it, instead of the legalistic way: let's just try the actual thing out and see what it can do with questions like, is Mount Everest bigger than a shoebox? Just the most obvious questions. And the answer is, well, it just kind of parries you, because it doesn't know what you're talking about.

[00:52:26]

Right. So just to clarify exactly in which way they're obvious: they're obvious in the sense that you convert the sentences into the meaning of the objects they represent, and then do some basic, obvious, common-sense reasoning with the objects that the sentences represent. Right. Right.

[00:52:45]

It was not able to answer, or even intelligently respond to, basic common-sense questions. But let me say something stronger than that. There was a famous chatbot in the 60s called ELIZA that managed to actually fool a lot of people. People would pour their hearts out to ELIZA, because it simulated a therapist. And most of what it did was just throw back at you whatever you said.

[00:53:11]

Right. And this turned out to be incredibly effective. Maybe therapists know this; it's one of their tricks.

[00:53:20]

But it really had some people convinced. And this thing was, I think, literally just a few hundred lines of Lisp code. Not only was it not intelligent, it wasn't especially sophisticated; it was a simple little hobbyist program. And Eugene Goostman, from what I could see, was not a significant advance compared to ELIZA. And that was really the point I was making.
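For flavor, here is a minimal sketch in Python of ELIZA's reflection trick (the patterns and word list here are invented for illustration; Weizenbaum's 1966 original used a much larger script of ranked keywords):

```python
import re

# A toy sketch of ELIZA's core trick: pattern-match the input and
# reflect the user's own words back as a question. No understanding
# is involved, only string substitution.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza_reply(utterance):
    m = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i (.*)", utterance, re.IGNORECASE)
    if m:
        return f"Why do you say you {reflect(m.group(1))}?"
    return "Please tell me more."

print(eliza_reply("I feel lost in my work"))  # Why do you feel lost in your work?
```

The whole "therapist" is a handful of patterns plus a fallback, which is the sense in which it was a simple hobbyist-level program rather than anything intelligent.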

[00:53:52]

And in some sense you didn't need a computer science professor to say this; anyone who looked at it and had an ounce of sense could have said the same thing. But because these journalists were calling me, the first thing I said was, well, I'm a quantum computing person,

[00:54:16]

I'm not an AI person, you shouldn't ask me. But then they said, look, you can go here and you can try it out.

[00:54:22]

All right, all right. So I'll try it out.

[00:54:25]

But now this whole discussion has gotten a whole lot more interesting in just the last few months. Yeah, I'd love to hear your thoughts about it. Yeah. In the last few months, the world has now seen a chat engine, or a text engine, I should say, called GPT-3.

[00:54:46]

I think it still does not pass a Turing test, and there are no real claims that it passes the Turing test. This comes out of the group at OpenAI, and they've been relatively careful in what they've claimed about the system. But as clearly as Eugene Goostman was not an advance over ELIZA, it is equally clear that this is a major advance over ELIZA, or really over anything that the world has seen before.

[00:55:19]

This is a text engine that can come up with kind of on-topic, reasonable-sounding completions to just about anything that you ask. You can ask it to write a poem about topic X in the style of poet Y, and it will have a go at that.

[00:55:38]

And it will do not a great job, not an amazing job, but a passable job. Definitely, in many cases, I would say, better than I would have done. You can ask it to write an essay, like a student essay, about pretty much any topic, and it will produce something that I am pretty sure would get at least a B-minus in most high school or even college classes.

[00:56:07]

Right. And in some sense, the way that it achieves this... Scott Alexander, of the blog Slate Star Codex, had a wonderful way of putting it. He said that they basically just ground up the entire Internet into a slurry. OK.

[00:56:26]

Yeah. And to tell you the truth, I had wondered for a while why nobody had tried that. Like, why not write a chatbot by just doing deep learning over a corpus consisting of the entire web? And now they finally have done that.

[00:56:45]

And the results are very impressive. People can argue about whether this is truly a step toward general AI or not, but this is an amazing capability that we didn't have a few years ago. A few years ago, if you had told me that we would have it now, that would have surprised me. Yeah. And I think that anyone who denies that is just not engaging with what's there.

[00:57:13]

So their model takes a large part of the Internet and compresses it into a number of parameters that is small relative to the size of the Internet, and it is able to, without fine-tuning, do a basic kind of scoring mechanism, just like you describe. You specify a kind of poet and say you want a poem, and somehow it is able to do basically a lookup, of relevant things, on the Internet. I mean, that's what it seems like.

[00:57:41]

I mean, how else do you explain it? Well, OK. The training involved massive amounts of data from the Internet, and it actually took lots and lots of computing power, lots of electricity. There are some very prosaic reasons why this wasn't done earlier. But it cost some tens of millions of dollars, I think. You know, just approximately? Like a few million dollars?

[00:58:05]

OK. Really? OK, more like five. Oh, all right, thank you. I mean, as they scale it up it will cost more, but the hope is the cost comes down, all that kind of stuff.

[00:58:18]

But basically, it is a neural net, what's now called a deep net.

[00:58:24]

But they're basically the same thing. So it's a form of algorithm that people have known about for decades. And it is constantly trying to solve the problem: predict the next word. It's just trying to predict what comes next. It's not trying to decide what it should say, what ought to be true.

[00:58:49]

It's trying to predict what someone who had said all of the words up to the preceding one would say next. Although, to push back on that, that's how it's trained.
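The training objective Aaronson describes, predicting the next word from the words so far, can be illustrated with a toy bigram model (the tiny corpus here is invented; GPT-3 itself is a large transformer trained on hundreds of billions of tokens, but the "predict the next word" objective is the same idea):

```python
from collections import Counter, defaultdict

# Toy illustration of the next-word-prediction objective: a bigram
# model that just counts which word follows which in a tiny corpus.
# GPT-3 does the same job with a transformer over a web-scale corpus;
# the training principle, not the machinery, is what's shared.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it followed "the" twice, "mat" once
```

Nothing here decides what ought to be true; it only scores what tends to come next, which is the distinction being debated in the conversation.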

[00:58:59]

But that's right. No, it's arguable, arguable. Yeah. That our very cognition could be a mechanism as simple as that at its source.

[00:59:07]

Of course. I never said that it wasn't.

[00:59:09]

But yeah. I mean, in some sense, if there is a deep philosophical question that's raised by GPT-3, then that is it. Are we doing anything other than this predictive processing, just constantly trying to fill in a blank of what would come next after what we just said up to this point? Is that all I'm doing right now?

[00:59:33]

So the intuition that a lot of people have is: look, this thing is not going to be able to reason, the Mount Everest question. Do you think it's possible that GPT-5, 6, and 7 would be able to, with this exact same process, begin to do something that looks to us humans indistinguishable from reasoning? I mean, the truth is that we don't really know what the limits are.

[00:59:58]

Right, exactly. Because what we've seen so far is that GPT-3 was basically the same thing as GPT-2, but just with a much larger network, more training time, a bigger training corpus. And it was very noticeably better than its immediate predecessor. So we don't know where you hit the ceiling here. I mean, that's the amazing part.

[01:00:27]

And maybe also the scary part. Now, my guess would be that at some point there have to be diminishing returns. Like, it can't be that simple, can it? Right.

[01:00:39]

But I wish that I had more to base that guess on. Yeah. I mean, some people say that we're going to hit a limit on the amount of data that's on the Internet. Yes. Yeah.

[01:00:51]

So, sure, there's certainly that limit.

[01:00:54]

I mean, there's also, if you are looking for questions that will stump GPT-3,

[01:01:01]

you can come up with some without too much trouble. Like, even getting it to learn how to balance parentheses: it doesn't do such a great job.
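Balancing parentheses is, of course, trivial for a conventional program, which is part of what makes GPT-3's struggles with it ironic. A single counter suffices:

```python
def balanced(s):
    """Check whether the parentheses in s are balanced. One running
    counter is all this exact symbolic task needs, the kind of job
    classical programs ace and a purely statistical text predictor
    can stumble on."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a closer with no matching opener
                return False
    return depth == 0

print(balanced("((a)(b))"))  # True
print(balanced("(()"))       # False
```

A text predictor has to infer this rule statistically from examples, whereas the program states it exactly, which is the contrast being drawn in the conversation.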

[01:01:12]

And its failures are ironic. Like basic arithmetic. And you think: isn't that what computers are supposed to be best at? Isn't that where computers already had us beat a century ago? Yeah. And yet that's where GPT-3 struggles. But it's amazing; it's almost like a young child in that way.

[01:01:34]

Right. But somehow, because it is just trying to predict what comes next, it doesn't know when it should stop doing that and start doing something very different, like some more exact logical reasoning. And so one is naturally led to guess that our brain sort of has some element of predictive processing, but that it's coupled to other mechanisms. First of all, visual reasoning, which GPT-3 also doesn't have any of. Although there's a demonstration that there's a lot of promise there.

[01:02:14]

So, yeah, it can complete images. That's right. And using the exact same kind of transformer mechanism. So, to, like, watch videos on YouTube and use the same self-supervised mechanism, it'd be fascinating to think what kind of completions you could do.

[01:02:30]

Oh, yeah, no, absolutely. Although if we ask it a word problem that involves reasoning about the locations of things in space, I don't think it does such a great job on those, to take an example. And so the guess would be: well, humans have a lot of predictive processing, a lot of just filling in the blanks, but we also have these other mechanisms that we can couple to, or that we can sort of call as subroutines when we need to.

[01:02:55]

And maybe, to go further, one would want to integrate other forms of reasoning.

[01:03:02]

Let me go on to another topic that is amazing, which is complexity. And let me start with the most absurdly romantic question: what's the most beautiful idea in computer science, or theoretical computer science, to you? What, early on in your life or in general, has captivated you and just grabbed you? I think I'm going to have to go with the idea of universality. If you're really asking for the most beautiful... I mean, universality is the idea that you put together a few simple operations. In the case of Boolean logic, that might be the AND gate,

[01:03:43]

the OR gate, the NOT gate.

[01:03:45]

Right. And then your first guess is, OK, this is a good start. But obviously, as I want to do more complicated things, I'm going to need more complicated building blocks to express that.

[01:03:56]

Right. And that was actually my guess when I first learned what programming was, when I was an adolescent and someone showed me Apple BASIC and GW-BASIC, if anyone listening remembers that. I felt like this was a revelation. It's like finding out where babies come from.

[01:04:21]

It's like that level of, you know, why didn't anyone tell me this before? Right.

[01:04:25]

But I thought, OK, this is just the beginning. Now I know how to write a BASIC program. But to really write an interesting program, like a video game, which had always been my dream as a kid, to create my own Nintendo games, obviously I'm going to need to learn some way more complicated form of programming than that.

[01:04:46]

OK, but eventually I learned this incredible idea of universality, and that says that, no, you throw in a few rules and you already have enough to express everything.

[01:05:00]

So, for example, the AND, OR and NOT gates, in fact even just the AND and NOT gates, or even just the NAND gate, for example, are already enough to express any Boolean function on any number of bits.

[01:05:15]

You just have to string together enough of them. So you can build a universe with NAND gates? You can build the universe out of NAND gates, yeah. The simple instructions of BASIC are already enough, at least in principle, if we ignore details like how much memory can be accessed and stuff like that, to express what could be expressed by any programming language whatsoever. And the way to prove that is very simple.
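The NAND-universality claim can be checked directly. Here is a short Python sketch that builds NOT, AND, OR and XOR entirely out of a single `nand` function:

```python
# Every Boolean function can be wired from NAND alone. Here NOT, AND,
# OR and XOR are each defined purely in terms of nand(), which is the
# universality claim in miniature.
def nand(a, b):
    return not (a and b)

def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))
def xor(a, b):   return and_(or_(a, b), nand(a, b))

# Exhaustively check XOR against its truth table.
for a in (False, True):
    for b in (False, True):
        assert xor(a, b) == (a != b)
print("XOR built from NAND alone checks out")
```

Any larger circuit can be grown the same way: string together enough NANDs and every Boolean function on any number of bits is reachable.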

[01:05:40]

We simply need to show that in BASIC, or whatever, we could write an interpreter or a compiler for whatever other programming language we care about, like C or Java or whatever. And as soon as we had done that, then ipso facto anything that's expressible in C or Java is also expressible in BASIC. OK. And this idea of universality goes back at least to Alan Turing in the 1930s, when he wrote down this incredibly simple, pared-down model of a computer, the Turing machine.

[01:06:18]

Right. He pared the instruction set down to just: read a symbol, write a symbol, move to the left, move to the right, halt, change your internal state. Right. That's it.

[01:06:32]

OK, and Turing proved that this could simulate all kinds of other things. And so, in fact, today we would call the Turing machine a universal model of computation. That is, it has just the same expressive power that BASIC or Java or C++ or any of those other languages have, because anything in those other languages could be compiled down to a Turing machine. Now, Turing also proved a different, related thing, which is that there is a single Turing machine that can simulate any other Turing machine if you just describe that other machine on its tape.

[01:07:16]

Right. And likewise, there is a single Turing machine that will run any C program, you know, if you just put it on its tape. That's a second meaning of universality.

[01:07:28]

And that was in the 1930s. That's before computers, really. I mean, I wonder what that felt like. Yeah.

[01:07:40]

You know, learning that there's no Santa Claus or something. Because I don't know if that's empowering or paralyzing, because it's like you can't write a software engineering book and make that the first chapter and say we're done. Well, I mean.

[01:07:57]

I mean, right. In one sense, it was this enormous flattening of the universe. Yes. I had imagined that there was going to be some infinite hierarchy of more and more powerful programming languages, and then I kicked myself for having such a stupid idea. But apparently Gödel had had the same conjecture in the 30s. Then you're in good company. Yeah. And then Gödel read Turing's paper, and he kicked himself, and he said, yeah, I was completely wrong about that.

[01:08:26]

OK, but, you know, I had thought that maybe where I could contribute would be to invent a new, more powerful programming language that lets you express things that could never be expressed in BASIC. Yeah, right. And, you know, how would you do that? Obviously, you couldn't do it in BASIC itself. Right.

[01:08:44]

But there is this incredible flattening that happens once you learn what universality is.

[01:08:51]

But then it's also like an opportunity, because it means once you know these rules, then the sky is the limit. Right? Then you have kind of the same weapons at your disposal that the world's greatest programmer has. It's now all just a question of how you wield them. Right, exactly.

[01:09:11]

So every problem is solvable, but some problems are harder than others?

[01:09:16]

Well, yeah, there's the question of how hard it is to write a program. And then there's also the question of what resources the program needs: how much time, how much memory. Those are much more complicated questions, of course, ones that we're still struggling with today.

[01:09:33]

Exactly. So, I don't know if you created the Complexity Zoo or... I did create the Complexity Zoo.

[01:09:38]

What is it? What is complexity theory? Oh, all right. All right, all right.

[01:09:41]

Complexity theory is the study of sort of the inherent resources needed to solve computational problems. Right. It's easiest to give an example. Let's say we want to add two numbers. If the numbers are twice as long, then it will take me twice as long to add them, but only twice as long. Right.

[01:10:08]

It's no worse than that, whether for a computer or for a person using pencil and paper, for that matter, if you have a good algorithm.

[01:10:16]

Yeah, that's right. I mean, even if you just use the elementary school algorithm of carrying, then it takes time that is linear in the length of the numbers. Now, multiplication, if you use the elementary school algorithm, is harder, because you have to multiply each digit of the first number by each digit of the second one. Yeah, yeah. And then deal with all the carries. So that's what we call a quadratic-time algorithm.

[01:10:41]

Right. If the numbers become twice as long, now you need four times as much time.
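That quadratic count can be made concrete. Here is a toy sketch of the elementary school algorithm (the function name and digit convention, least-significant digit first, are mine) that also counts the single-digit multiplications it performs:

```python
def schoolbook_multiply(x_digits, y_digits):
    """Grade-school multiplication over digit lists (least-significant
    digit first). Returns (product, number_of_single_digit_multiplies)."""
    mults = 0
    total = 0
    for i, xd in enumerate(x_digits):
        for j, yd in enumerate(y_digits):
            total += xd * yd * 10 ** (i + j)  # shift by place value
            mults += 1                        # one digit-by-digit multiply
    return total, mults
```

For two n-digit numbers the counter comes out to exactly n squared, which is why doubling the length quadruples the work.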

[01:10:46]

OK. So now, as it turns out, people discovered much faster ways to multiply numbers using computers, and today we know how to multiply two numbers that are n digits long using a number of steps that's nearly linear in n. These are questions you can ask. But now let's think about a different problem, one people have encountered in elementary school: factoring a number. OK, take a number and find its prime factors.

[01:11:18]

Right. And here, if I give you a number with 10 digits and ask you for its prime factors, well, maybe it's even, so you know that two is a factor. Maybe it ends in zero, so you know that 10 is a factor. Right. But other than a few obvious things like that, if the prime factors are all very large, then it's not clear how you even get started.

[01:11:39]

Right. It seems like you have to do an exhaustive search among an enormous number of candidate factors.

[01:11:47]

And as many people might know, for better or worse, the security of most of the encryption that we currently use to protect the Internet is based on the belief, and this is not a theorem, it's a belief, that factoring is an inherently hard problem for our computers.

[01:12:09]

We do know algorithms that are better than trial division, than just trying all the possible divisors, but they are still basically exponential, and exponential is hard.

[01:12:21]

Yeah, exactly.

[01:12:22]

So the fastest algorithms that anyone has discovered, at least publicly discovered, you know, I'm assuming that the NSA doesn't know something better... Yeah, OK. They take time that basically grows exponentially with the cube root of the size of the number that you're factoring.

[01:12:39]

Right. So that cube root, that's the part that takes all the cleverness. But there's still an exponentiality there. What that means is that when people use thousand-bit keys for their cryptography, those can probably be broken using the resources of the NSA or the world's other intelligence agencies.

[01:12:59]

You know, people have done analyses that say, with a few hundred million dollars of computing power, they could totally do this. And if you look at the documents that Snowden released, it looks a lot like they are doing that, or something like that.

[01:13:14]

It would kind of be surprising if they weren't, OK? But if that's true, then in some ways it's reassuring, because if that's the best that they can do, then that would say that they can't break 2000-bit numbers, right? Exactly right.

[01:13:29]

Then 2000-bit numbers would be beyond whatever they could do. They haven't found an efficient algorithm. That's where all the worries and the concerns of quantum computing came in, that there was some kind of shortcut around that. Right.

[01:13:40]

So complexity theory is a huge part of, let's say, the theoretical core of computer science. It started in the 60s and 70s as sort of an autonomous field.

[01:13:54]

So it was already well developed even by the time that I was born. But in 2002, I made a website called the Complexity Zoo, to answer your question, where I just try to catalogue the different complexity classes, which are classes of problems that are solvable with different kinds of resources. OK, so you could think of complexity classes as being to theoretical computer science what the elements are to chemistry, right?

[01:14:31]

They're sort of our most basic objects in a certain way.

[01:14:36]

I feel like the elements have to have a characteristic to them where you can't just add an infinite number.

[01:14:44]

Well, you could, but beyond a certain point, they become unstable. Right. Right. So it's like, in theory you can have atoms with... and look, I mean, a neutron star is a nucleus with billions of neutrons in it, of hadrons.

[01:15:05]

OK, but for sort of normal atoms, probably you can't get much above atomic weight 150 or so... sorry, I mean beyond 150 or so protons, without it very quickly fissioning. And with complexity classes?

[01:15:24]

Well yeah.

[01:15:25]

You can have an infinity of complexity classes, but maybe there's only a finite number of them that are particularly interesting. Right. Just like with anything else, you care about some more than about others.

[01:15:38]

So what kind of interesting classes are there? Yeah, maybe say, if you take any kind of computer science class, what are the classes you learn? Good. Let me tell you sort of the biggest ones, the ones that you would learn first. So, first of all, there is P. That's what it's called.

[01:15:57]

It stands for polynomial time. And this is just the class of all of the problems that you could solve with a conventional computer, like your iPhone or your laptop, by a completely deterministic algorithm, using a number of steps that grows only like the size of the input raised to some fixed power.

[01:16:21]

OK, so if your algorithm is linear time, like for adding numbers, then that problem is in P. If you have an algorithm that's quadratic time, like the elementary school algorithm for multiplying two numbers, that's also in P. Even if it were the size of the input to the tenth power or to the fiftieth power, well, that wouldn't be very good in practice, but formally we would still count that.

[01:16:47]

That would still be in P. OK, but if your algorithm takes exponential time, meaning, like, if every time I add one more data point to your input, the time needed by the algorithm doubles, if you need time like two to the power of the amount of input data, then that is what we call an exponential-time algorithm. OK, and that is not polynomial. So P is all of the problems that have some polynomial-time algorithm.
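The gulf between those two growth rates is easy to see numerically. A purely illustrative sketch comparing a quadratic step count with an exponential one:

```python
def quadratic_steps(n):
    return n ** 2          # doubling n only quadruples this

def exponential_steps(n):
    return 2 ** n          # adding 1 to n doubles this

# Watch the second column stay tame while the third explodes.
for n in (10, 20, 40):
    print(n, quadratic_steps(n), exponential_steps(n))
```

At n = 40 the quadratic algorithm has done 1,600 steps while the exponential one needs over a trillion, which is why only the polynomial side counts as "efficient" here.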

[01:17:19]

OK, so that includes most of what we do with our computers on a day to day basis.

[01:17:24]

You know, all the sorting, basic arithmetic, whatever is going on in your email reader or in Angry Birds, OK, it's all in P.

[01:17:34]

Then the next super-important class is called NP. That stands for nondeterministic polynomial time. It does not stand for "not polynomial," which is a common confusion.

[01:17:47]

NP is basically all of the problems where, if there is a solution, then it is easy to check the solution if someone shows it to you. OK, so actually a perfect example of a problem in NP is factoring, the one I told you about before. Like, if I gave you a number with thousands of digits and asked you, does this number have at least three non-trivial divisors? Right. That might be a super hard problem to solve.

[01:18:21]

It might take you millions of years using any known algorithm, at least running on our existing computers.

[01:18:27]

OK, but if I simply showed you the divisors, if I said, here are three divisors of this number, then it would be very easy for you to ask your computer to just check each one and see if it works. Just divide it in and see if there's any remainder. Right. And if they all go in, then you've checked that, well, I guess there were. Right.
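Checking a claimed set of divisors really is that easy. A sketch of the polynomial-time NP-style verifier (the function name is mine, not from the conversation):

```python
def check_divisors(n, claimed_divisors):
    """NP-style verification: trust nothing, just test each claimed
    non-trivial divisor with a single division. This runs in time
    polynomial in the length of n, no matter how hard finding the
    divisors was."""
    return all(1 < d < n and n % d == 0 for d in claimed_divisors)
```

Finding the divisors of a thousand-digit number may be infeasible, but running this check on a claimed answer is essentially instant, and that asymmetry is what puts factoring in NP.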

[01:18:47]

So any problem where, whenever there's a solution, there is a short witness, like a polynomial-size witness that can be checked in polynomial time, that we call an NP problem. OK. And yeah, so every problem that's in P is also in NP. Right. Because you could always just ignore the witness and solve the problem yourself. Right.

[01:19:16]

OK, but now the central mystery of theoretical computer science is: is every NP problem in P? So if you can easily check the answer to a computational problem, does that mean that you can also easily find the answer, even though there are all these problems where it appears to be very difficult to find the answer?

[01:19:39]

It's still an open question whether such an algorithm exists.

[01:19:42]

So no one has proven that there's no way to do it. It's arguably the most... I don't know, the most famous, maybe the most interesting, maybe you disagree with that, problem in theoretical computer science. Oh, the most famous, for sure. That's P versus NP. Yeah. If you were to bet all your money, where would you put your money? That's an easy one. P is not equal to NP.

[01:20:02]

And so I like to say that if we were physicists, we would have just declared that to be a law of nature, you know, just like thermodynamics, and given ourselves Nobel Prizes for its discovery. Yeah, yeah. And look, if later it turned out that we were wrong, we'd just give ourselves more Nobel Prizes.

[01:20:20]

Yeah. I mean, it's really just because we are mathematicians, or descended from mathematicians, that we have to call things conjectures that other people would just call empirical facts or discoveries. Right. But one shouldn't read more into that difference in language about the underlying truth. So that's where you'd put your money. But then, I don't know...

[01:20:48]

Let me ask another way: is it possible at all, and what would that look like, if indeed P equals NP? Well, I do think that it's possible.

[01:20:59]

I mean, in fact, when people really pressed me on my blog for what odds I would put, I said, you know, two or three percent odds that P equals NP. Yeah. Well, because, I mean, you really have to think about it like this: if there were 50 mysteries like P versus NP, and I made a guess about every single one of them, would I expect to be right 50 times?

[01:21:23]

Right. And the truthful answer is no. OK, yeah. So that's what you really mean in saying that you have better than 98 percent odds for something. OK, but so, yeah.

[01:21:37]

You know, there could certainly be surprises. And look, if P equals NP, well, then there would be the further question of, is the algorithm actually efficient in practice? I mean, Don Knuth, who I know that you've interviewed as well, right, he likes to conjecture that P equals NP, but that the algorithm is so inefficient that it doesn't matter anyway. Right. Now, I don't know. I've listened to him say that.

[01:22:03]

I don't know whether he says that just because he has an actual reason for thinking it's true or just because it sounds cool. Yeah, OK.

[01:22:10]

But, you know, that's a logical possibility, right? The algorithm could take n-to-the-ten-thousandth time, or it could even just be n-squared time but with a leading constant of a googol, it could be a googol times n squared or something like that. In that case, the fact that P equals NP would ravage the whole theory of complexity; we would have to rebuild from the ground up.

[01:22:36]

But in practical terms, it might mean very little, if the algorithm was too inefficient to run. If the algorithm could actually be run in practice, like if it had small enough constants, or if you could improve it to where it had small enough constants that it was efficient in practice, then that would change the world. You'd think it would have...

[01:22:59]

Like what kind of impact? Well, OK. I mean, here's an example.

[01:23:01]

Well, OK, just for starters, you could break basically all of the encryption that people use to protect the Internet. You could break Bitcoin and every other cryptocurrency, or, you know, mine as much Bitcoin as you wanted. Right. You know, become a super-duper billionaire. Right. And then plot your next move.

[01:23:24]

Right. OK, just for starters. Right, right. Right.

[01:23:27]

Now, your next move might be something like, you know, you now have a theoretically optimal way to train any neural network, to find the optimal parameters for any neural network. Right. So you could now say, is there any small neural network that generates the entire content of Wikipedia? Right. Now the question is not, can you find it? The question has been reduced to, does that network exist or not? Yes. If it does exist, then the answer would be yes, you can find it.

[01:23:56]

If you had this algorithm in your hands, OK, you could ask your computer... I mean, P versus NP is one of these seven problems that carries a million-dollar prize from the Clay Math Institute if you solve it. You know, others are the Riemann hypothesis, the Poincaré conjecture, which was solved, although the solver turned down the prize, right, and four others. But what I like to say, the way that we can see that P versus NP is the biggest of all of these questions, is that if you had this fast algorithm, then you could solve all seven of them.

[01:24:31]

OK, you just ask your computer: is there a short proof of the Riemann hypothesis, in a language where a machine could verify it? And provided that such a proof exists, your computer finds it in a short amount of time, without having to do a brute-force search. So, I mean, those are the stakes of what we're talking about. But I hope that also helps to give your listeners some intuition of why I and most of my colleagues would put our money on P not equaling NP.

[01:25:01]

Is it possible, just a really dumb question, but is it possible that the proof will come out that P equals NP, but the algorithm that makes P equal to NP is impossible to find? Is that, like, crazy?

[01:25:18]

OK, well, if P equals NP, it would mean that such an algorithm exists. Now, in practice, normally the way that we would prove anything like that would be by finding the algorithm, us finding it or someone else finding it. But there is such a thing as a non-constructive proof that an algorithm exists.

[01:25:42]

You know, this has really only reared its head, I think, a few times in the history of our field.

[01:25:47]

Right. But it is theoretically possible that such a thing could happen. But even here, there are some amusing observations that one could make. So there is this famous observation of Leonid Levin, who is one of the original discoverers of NP-completeness. Right. And he said, well, consider the following algorithm that, I guarantee, will solve the NP problems efficiently, provided that P equals NP.

[01:26:17]

Here is what it does. It enumerates every possible algorithm in a gigantic infinite list, right, like in alphabetical order. Right. And many of them maybe won't even compile, so we just ignore those. OK, but now we run the first algorithm, then we run the second algorithm, we run the first one a little bit more, then we run the first three algorithms for a while, the first four for a while.

[01:26:44]

This is called dovetailing, by the way. It's a known trick in theoretical computer science.

[01:26:51]

OK, but we do it in such a way that, whatever is the algorithm out there in our list that solves the NP-complete problems efficiently, we'll eventually hit that one. Right. And now the key is that whenever we hit that one, by assumption, it has to solve the problem, that is, to find the solution. And once it claims to find a solution, then we can check that ourselves. Right. Because these are NP problems, we can check it.

[01:27:20]

Now, this is utterly impractical. All right, you'd have to do this enormous, exhaustive search among all the algorithms. But from a certain theoretical standpoint, that is merely a constant factor; it's merely a multiplier of your running time.
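The dovetailing loop being described can be sketched in a few lines. This is a toy version under loud assumptions: the two hand-written "algorithms" stand in for the infinite enumeration of all programs, the problem is finding a non-trivial divisor, and every name is illustrative rather than from Levin's paper.

```python
from itertools import count

def is_valid(n, divisor):
    # The cheap NP-style check that makes the whole scheme work.
    return divisor is not None and 1 < divisor < n and n % divisor == 0

def alg_useless(n):
    while True:          # an "algorithm" that never finds anything
        yield None

def alg_trial_division(n):
    d = 2
    while True:          # the one that eventually works
        yield d if n % d == 0 else None
        d += 1

def dovetail(algorithms, n):
    gens = [a(n) for a in algorithms]
    for budget in count(1):
        # run the first `budget` algorithms for `budget` more steps each
        for g in gens[:budget]:
            for _ in range(budget):
                candidate = next(g)
                if is_valid(n, candidate):
                    return candidate
```

The point of the construction: a correct algorithm sitting at position k in the list only costs a fixed (possibly astronomical) multiplicative overhead, which is why the search is "constructive" in this technical but wildly impractical sense.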

[01:27:36]

So there are tricks like that one can do to say that, in some sense, the algorithm would have to be constructive. But in the human sense, it is conceivable that one could prove such a thing via a non-constructive method. Is that likely? I don't think so. Not personally.

[01:27:57]

So that's P and NP, but the Complexity Zoo is full of wonderful creatures. Well, it's got about 500 of them. 500? Yeah. Yeah.

[01:28:08]

How do you get more? How do you... Yeah. Well, I mean, just for starters, there is everything that we could do with a conventional computer with a polynomial amount of memory, but possibly an exponential amount of time, because we get to reuse the same memory over and over again. OK, that is called PSPACE. And that's actually, we think, an even larger class than NP. P is contained in NP, which is contained in PSPACE.

[01:28:39]

And we think that those containments are strict. And the constraint there is on the memory; the memory has to grow polynomially with the size of the problem.

[01:28:48]

That's right. That's right.

[01:28:49]

But in PSPACE, we now have interesting things that were not in NP. Like, as a famous example, from a given position in chess, does white or black have the win, let's say, provided that the game lasts only for a reasonable number of moves? Or likewise for Go, OK.

[01:29:10]

And, you know, even for the generalisations of these games to arbitrary-size boards, because with an eight-by-eight board, you could say that's just a constant-size problem; in principle, you just solve it in O(1) time. Right.

[01:29:22]

But so we really mean the generalisations of games to arbitrary-size boards here. Or another thing in PSPACE would be, like, I give you some really hard constraint satisfaction problem, like traveling salesperson, or packing boxes into the trunk of your car, or something like that.

[01:29:46]

And I ask not just, is there a solution, which would be an NP problem, but I ask: how many solutions are there? OK, count the number of valid solutions. The problems that ask that lie in a complexity class called #P; it looks like a hashtag, like hash-P. Got it.

[01:30:07]

OK, which sits between NP and PSPACE. Then there's all the problems that you can do in exponential time.

[01:30:15]

OK, that's called EXP.

[01:30:17]

And by the way, it was proven in the 60s that EXP is larger than P. We know that.

[01:30:27]

We know that there are problems that are solvable in exponential time that are not solvable in polynomial time. In fact, we even know that there are problems that are solvable in n-cubed time that are not solvable in n-squared time. And those don't help us with the P versus NP controversy? Unfortunately, it seems not, or certainly not yet.

[01:30:48]

Right. The techniques that we use to establish those things are very closely related to how Turing proved the unsolvability of the halting problem. But they seem to break down when we're comparing two different resources, like time versus space, or, you know, P versus NP.

[01:31:06]

OK, but there's also what you can do with a randomized algorithm, right, one that can sometimes make a mistake with some small probability. That's called BPP: bounded-error probabilistic polynomial time.

[01:31:21]

And then, of course, there's one that's very close to my own heart: what you can efficiently do in polynomial time using a quantum computer. And that's called BQP. Right. And so, what's understood about that class?

[01:31:36]

P is contained in BPP, which is contained in BQP, which is contained in PSPACE. OK, and in fact, BQP is contained in something very similar to #P; it's basically contained in P with the magic power to solve #P problems.

[01:31:56]

Why is BQP contained in PSPACE? Oh, that's an excellent question. So one has to prove that, but you could think of the proof as using Richard Feynman's picture of quantum mechanics. We haven't really talked about quantum mechanics in this conversation. We did in our previous one, last time. Yeah, we did last time. OK. But basically, you can always think of a quantum computation as a branching tree of possibilities, where each possible path that you could take through the space has a complex number attached to it called an amplitude.

[01:32:45]

OK, and now the rule is, when you make a measurement at the end, you see a random answer. OK, but quantum mechanics is all about calculating the probability that you're going to see one potential answer versus another one. Right. And the rule for calculating the probability that you'll see some answer is that you have to add up the amplitudes for all of the paths that could have led to that answer.

[01:33:11]

And since that's a complex number, you know, how could that be a probability? You take the squared absolute value of the result, and that gives you a number between zero and one. OK. So you just summarised quantum mechanics. Yes, OK.

[01:33:28]

But now, what this already tells us is that anything I can do with a quantum computer, I could simulate with a classical computer if I only have exponentially more time. Why is that? Because if I have exponential time, I could just write down this entire branching tree and explicitly calculate each of these amplitudes. Right. That will be very inefficient, but it will work.
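That path-by-path simulation can be sketched directly for a single qubit (a toy under stated assumptions: one qubit only, gates given as 2x2 real matrices, names are mine). The amplitude of an output basis state is the sum, over every sequence of intermediate basis states, of the product of gate entries along that path; memory stays tiny while the number of paths grows exponentially with circuit depth.

```python
import math
from itertools import product

# The Hadamard gate as a 2x2 matrix of real amplitudes.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def amplitude(gates, start, end):
    """Feynman-style path sum: total amplitude to go from basis state
    `start` to `end` through the given sequence of single-qubit gates."""
    depth = len(gates)
    total = 0.0
    for path in product([0, 1], repeat=depth - 1):
        states = (start, *path, end)
        amp = 1.0
        for gate, (a, b) in zip(gates, zip(states, states[1:])):
            amp *= gate[b][a]     # matrix entry <b|gate|a> for this hop
        total += amp
    return total
```

Two Hadamards in a row send the qubit from 0 back to 0 with certainty: the two paths to output 1 carry amplitudes +1/2 and -1/2 and cancel, which is the interference the path-sum makes visible.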

[01:33:55]

It's enough to show that quantum computers could not solve the halting problem, or, you know, they could never do anything that is literally uncomputable in the Turing sense.

[01:34:06]

But now, as I said, there is an even stronger result, which says that BQP is contained in PSPACE.

[01:34:13]

The way that we prove that is to say: if all I want is to calculate the probability of some particular output happening, which is all I need to simulate a quantum computer, really, then I don't need to write down the entire quantum state, which is an exponentially large object. All I need to do is calculate the amplitude for that final state.

[01:34:37]

And to do that, I just have to sum up all the amplitudes that lead to that state.

[01:34:43]

OK, so that's an exponentially large sum, but I can calculate it just reusing the same memory over and over for each term in the sum. So that's how it fits in PSPACE.

[01:34:54]

Yeah. Yeah.

[01:34:55]

So out of that whole Complexity Zoo, what do you find is the class that captured your heart the most, the most beautiful class? Could it be BQP? Yeah, I used as my email address bqpqpoly@gmail.com, just because of BQP/qpoly. Well, you know, amazingly, no one had taken it. Amazing. But this is a class that I was involved...

[01:35:26]

...in sort of defining, proving the first theorems about, in 2003 or so, so it was kind of close to my heart.

[01:35:34]

But this is like, if we extended BQP, which is the class of everything we can do efficiently with a quantum computer, to allow quantum advice, which means: imagine that you had some special initial state that could somehow help you do computation. And maybe such a state would be exponentially hard to prepare, OK, but maybe somehow these states were formed in the Big Bang or something, and they've just been sitting around ever since. Right.

[01:36:04]

If you found one... and this state could be ultra-powerful; there are no limits on how powerful it could be, except that the state doesn't know in advance which input you've got. Right. It only knows the size of your input. And that's BQP/qpoly. So that's one that I just personally happen to love. OK, but if you're asking, there's a class that I think is way more beautiful and fundamental than a lot of people even within this field realize it is. That class is called SZK, or statistical zero-knowledge.

[01:36:45]

And there's a very, very easy way to define this class, which is to say: suppose that I have two algorithms that each sample from probability distributions. So each one just outputs random samples according to possibly different distributions, let's say distributions over strings of n bits, so over an exponentially large space.

[01:37:10]

Now, I ask: are these two distributions close or far, as probability distributions?
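The "close or far" measure here is total variation (statistical) distance. For explicitly given distributions it is one line of code (a sketch; in SZK the distributions would be specified only by sampling circuits over exponentially large spaces, which is exactly what makes the question hard):

```python
def total_variation(p, q):
    """Total variation distance between two distributions given as
    dictionaries mapping outcomes to probabilities: half the L1
    distance, ranging from 0.0 (identical) to 1.0 (disjoint)."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)
```

Deciding whether this distance is near 0 or near 1 for circuit-sampled distributions is the complete problem for the class being described.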

[01:37:17]

Any problem that can be reduced to that, that can be put into that form, is an SZK problem. And the way that this class was originally discovered was completely different from that, and was kind of more complicated. It was discovered as the class of all of the problems that have a certain kind of what's called a zero-knowledge proof. Zero-knowledge proofs are one of the central ideas in cryptography; Shafi Goldwasser and Silvio Micali won the Turing Award for inventing them.

[01:37:48]

And they're at the core of even some cryptocurrencies that people use nowadays.

[01:37:55]

But zero-knowledge proofs are ways of proving to someone that something is true, like that there is a solution to this optimization problem, or that these two graphs are isomorphic to each other, or something, but without revealing why it's true, without revealing anything about why it's true. OK, SZK is all of the problems for which there is such a proof that doesn't rely on any cryptography. OK, and if you wonder how such a thing could possibly exist...

[01:38:31]

Right. Well, imagine that I had two graphs, and I wanted to convince you that these two graphs are not isomorphic, meaning I cannot permute one of them so that it becomes the same as the other one. Right. That might be a very hard statement to prove. Right. You might have to do a very exhaustive enumeration of all the different permutations before you were convinced that it was true.

[01:38:55]

But what if there were some all-knowing wizard that said to you: look, I'll tell you what. Just pick one of the graphs randomly, then randomly permute it, then send it to me, and I will tell you which graph you started with.

[01:39:09]

And I will do that every single time, right? Every single time, OK. Yeah. And let's say that that wizard did that a hundred times and it was right every time. Yeah. Right. Now, if the graphs were isomorphic, then it would have been flipping a coin each time. Right. It would have had only a one-in-two-to-the-hundredth-power chance of guessing right.

[01:39:32]

Each time. But so if it's right every time, then now you're statistically convinced that these graphs are not isomorphic, even though you've learned nothing new about why.

So fascinating.

[01:39:43]

So, yeah, SZK is all of the problems that have protocols like that one, but it has this beautiful other characterization. It's shown up again and again in my own work and in a lot of other people's work. And I think that it really is one of the most fundamental classes. It's just that people didn't realize that when it was first discovered.
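The wizard protocol Scott describes can be sketched in a few lines of Python. This is a toy simulation, not anything cryptographic: brute force over all relabelings stands in for the all-knowing wizard, and all the function names here are made up for illustration.

```python
import itertools
import random

def permute(edges, relabel):
    """Apply a vertex relabeling (a dict) to a set of undirected edges."""
    return frozenset(frozenset(relabel[v] for v in e) for e in edges)

def isomorphic(g1, g2, n):
    """Check isomorphism by trying all n! vertex relabelings."""
    return any(permute(g1, dict(enumerate(p))) == g2
               for p in itertools.permutations(range(n)))

def wizard_guess(g0, g1, challenge, n):
    """The prover: say which of the two graphs the challenge came from."""
    return 0 if isomorphic(g0, challenge, n) else 1

def run_protocol(g0, g1, n, rounds=100):
    """Run the interactive protocol; return how many rounds the wizard wins."""
    correct = 0
    for _ in range(rounds):
        i = random.randrange(2)                       # verifier's secret coin flip
        relabel = dict(enumerate(random.sample(range(n), n)))
        challenge = permute((g0, g1)[i], relabel)     # random permuted copy
        if wizard_guess(g0, g1, challenge, n) == i:
            correct += 1
    return correct

# A 4-vertex path vs. a triangle plus an isolated vertex: not isomorphic.
path = frozenset(map(frozenset, [(0, 1), (1, 2), (2, 3)]))
tri = frozenset(map(frozenset, [(0, 1), (1, 2), (0, 2)]))
print(run_protocol(path, tri, 4))   # 100: the wizard never misses

# If the graphs WERE isomorphic, the wizard could only guess (about 50 of 100).
relabeled_path = permute(path, {0: 3, 1: 1, 2: 0, 3: 2})
print(run_protocol(path, relabeled_path, 4))
```

If all hundred answers come back correct, the verifier is fooled with probability only one in two to the hundredth power, exactly the statistical argument above, and the verifier learns nothing about why the graphs differ.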

[01:40:04]

So we're living in the middle of a pandemic currently. How has your life been changed, or, no, better to ask: how has your perspective of the world changed with this world-changing event of a pandemic overtaking the entire world?

[01:40:19]

Yeah, well, I mean, all of our lives have changed, like, I guess, as with no other event since I was born. You would have to go back to World War Two for something, I think, of this magnitude in its effect on the way that we live our lives.

[01:40:34]

As for how it has changed my worldview: I think that the failure of institutions, like the CDC, like other institutions that we sort of thought were trustworthy, like a lot of the media, was staggering, was absolutely breathtaking. It is something that I would not have predicted. I think I wrote on my blog that it's fascinating to watch the movie Contagion from a decade ago.

[01:41:09]

Right. That correctly foresaw so many aspects of what was going on. An airborne virus originates in China, spreads to much of the world, shuts everything down until a vaccine can be developed, everyone has to stay at home. It gets an enormous number of things right. OK, but the one thing that they could not imagine is that, like, in this movie, everyone from the government is hyper-competent, hyper-dedicated to the public

[01:41:43]

good. Right. They're the best of the best. And there are these conspiracy theorists who think this is all fake news, there's not really a pandemic, and those are some random people on the Internet whom the hyper-competent government people have to oppose. Right. In trying to envision the worst thing that could happen...

[01:42:08]

Like, there was a failure of imagination. The moviemakers did not imagine that the conspiracy theorists and the incompetents and the nut cases would have captured our institutions and be the ones actually running things.

[01:42:23]

So you had a certain... yeah. I love competence in all walks of life. I love it so much; it gives me so much energy when people do an amazing job. And like you, or maybe you can clarify, I had maybe not an intuition but a hope that government at its best could be ultra-competent. So two questions: first, how do you explain the lack of competence, and second, maybe on the positive side, how can we build a more competent government?

[01:42:52]

Well, there's an election in two months.

[01:42:55]

I mean, do you have faith that the election... It's not going to fix everything. But it's like, I feel like there is a ship that is sinking and you could at least stop the sinking. But I think that there are much, much deeper problems. I mean, it is plausible to me that a lot of the failures, with the CDC, with some of the other health agencies, even predate Trump, predate the right-wing populism that has sort of taken over much of the world now.

[01:43:30]

And, you know, I've actually been strongly in favor of rushing vaccines. I thought that we could have done human challenge trials, which were not done. We could have had volunteers actually get vaccines and get exposed to COVID.

So, innovative ways of accelerating what you've previously done over a long time.

[01:44:06]

I thought, you know, each month that a vaccine is closer is worth trillions of dollars to civilization and, of course, lives, hundreds of thousands of lives.

Are you surprised that it's taken this long? We still don't have a plan. There's still not a feeling like anyone is actually doing anything in terms of alleviating it, like any kind of plan. So there's a bunch of stuff: this vaccine...

[01:44:30]

But you could also do a testing infrastructure where everybody's tested non-stop, contact tracing, all that kind of thing.

Well, I mean, I'm as surprised as almost everyone else.

[01:44:40]

I mean, this is a historic failure. It is one of the biggest failures in the 240-year history of the United States. Right.

[01:44:48]

And we should be, you know, crystal clear about that.

[01:44:51]

And one thing that I think has been missing, even from the more competent side, is sort of the World War Two mentality.

[01:45:02]

Right? The mentality of: if we can, by breaking a whole bunch of rules, get a vaccine in even half the amount of time that we thought, then let's just do that, because we have to weigh all of the moral qualms that we have about doing that against the moral qualms of not doing it.

[01:45:29]

And one key little aspect that's deeply important to me, and we'll go to that topic next, is that the World War Two mentality wasn't just about breaking all the rules to get the job done. There was a togetherness to it. Yes. So, if I were president right now, it seems quite elementary to unite the country, because we're facing a crisis. It's easy to make the virus the enemy. And it's very surprising to me that the division has increased as opposed to decreased.

[01:46:02]

Yeah, well, that's heartbreaking.

[01:46:04]

Yeah, well, look, it's been said by others that this is the first time in the country's history that we have a president who does not even pretend to want to unite the country. Right? Yeah. I mean, Lincoln, who fought a civil war, said he wanted to unite the country. Right.

[01:46:23]

And I do worry enormously about what happens if the results of this election are contested. Will there be violence as a result of that? And will we have a clear path of succession? And look, we're going to find out the answers to this in two months. And if none of that happens, maybe I'll look foolish.

[01:46:46]

But I am willing to go on the record and say I am terrified about that.

[01:46:50]

You have been reading The Rise and Fall of the Third Reich. So, if I can, this is like one little voice to put out there: I think November will be a really critical month for people to breathe and put love out there.

[01:47:08]

Anger, in that context, no matter who wins, no matter what is said, may destroy our country, may destroy the world, because of the power of the country. So it's really important to be patient, loving, empathetic. Like, one of the things that troubles me is that even people on the left are unable to have love and respect for people who voted for Trump. They can't imagine that there are good people that could vote for the opposite side.

[01:47:38]

And that's... Oh, I know there are, because I know some of them. Right?

[01:47:41]

I mean, maybe it baffles me, but I know such people.

[01:47:47]

Let me ask you this. It's also heartbreaking to me, on the topic of cancel culture: in the machine learning community, I've seen a little bit of this aggressive attacking of people who are trying to have a nuanced conversation about things. And it's troubling, because it feels like a nuanced conversation is the only way to talk about difficult topics. And when there's a thought police and speech police on any new conversation, everybody has to, like in Animal Farm, chant that racism is bad and sexism is bad, which are things that everybody believes, and they can't possibly say anything nuanced.

[01:48:30]

It feels like it goes against any kind of progress, from my kind of shallow perspective. But you've written a little bit about cancel culture. Do you have thoughts there? Well, to say that I am opposed to this trend of cancellations, of shouting people down rather than engaging them, would be a massive understatement. Right. And I feel like I have put my money where my mouth is, you know, not as much as some people have.

[01:48:59]

But I've tried to do something. I mean, I have defended some unpopular people and unpopular ideas on my blog. I've tried to defend norms of open discourse, of reasoning with our opponents, even when I've been shouted down for that on social media, called a racist, called a sexist, all of those things.

[01:49:26]

And, by the way, I should say, I would be perfectly happy, if we had time, to state ten thousand times my hatred of racism, of sexism, of homophobia. Right.

[01:49:41]

But what I don't want to do is to cede to some particular political faction the right to define exactly what is meant by those terms, to say, well, then you have to agree with all of these other extremely contentious positions, or else you are a misogynist, or else you are a racist. Right? I say, well, no. Don't I, don't people like me, also get a say in the discussion about what is racism, about what is going to be the most effective way to combat racism?

[01:50:16]

Right. And this cancellation mentality, I think, is spectacularly ineffective at its own professed goal of combating racism and sexism.

[01:50:28]

What's a positive way out? I don't know if you see what I do on Twitter, but on Twitter, and in my whole life, I really focus on the positive, and I try to put love out there in the world. And still...

[01:50:48]

I get attacked. And I look at that and I wonder, like, I haven't actually said anything difficult and nuanced. You talk about somebody like Steven Pinker. Yeah. Who, I actually don't know the full range of things that he's attacked for, but he tries to be thoughtful about difficult topics. He does. And obviously he just gets slaughtered by... Well, I mean...

[01:51:16]

Yes, but it's also amazing how well Steve has withstood it. I mean, he just survived that attempt to cancel him just a couple of months ago. Right.

[01:51:26]

Psychologically, he survived it too, which worries me. I don't think I could. Yeah. I've gotten to know Steve a bit. He is incredibly unperturbed by this stuff. I admire that and I envy it. I wish that I could be like that. I mean, my impulse when I'm getting attacked is I just want to engage every single anonymous person on Twitter and Reddit who is saying mean stuff about me.

[01:51:50]

And I want to say, well, look, can we just talk this over for an hour? And then you'll see that I'm not that bad. Sometimes that even works. The problem is then there are the twenty thousand other ones. And that's not...

[01:52:04]

But psychologically, does that wear on you? It does. It does. But yeah, in terms of what is the solution, I wish I knew. Right. And, in a certain way, these problems are maybe harder than P versus NP.

[01:52:17]

Right. But I think that part of it has to be... I think that there is a lot of sort of silent support for what I'll call the open-discourse side, the reasonable Enlightenment side. And I think that that support has to become less silent. I think that a lot of people agree that a lot of these cancellations and attacks are ridiculous, but are just afraid to say so.

[01:52:48]

Right, or else they'll get shouted down as well. Right. That's just the standard witch-hunt dynamic, which, of course, this faction understands and exploits to its great advantage. But if more people said, you know, we're not going to stand for this. Right. And, guess what? We're against racism, too.

[01:53:12]

But what you're doing is ridiculous. Right? Now, the hard part is, it takes a lot of mental energy. It takes a lot of time. Even if you feel like you're not going to be canceled, or you're staying on the safe side, it takes a lot of time to phrase things in exactly the right way and to respond to everything people say. But I think that the more people speak up, from all political persuasions, from all walks of life, the easier it is to move forward.

[01:53:49]

Since we've been talking about love: last time I talked to you about the meaning of life a little bit. It's a weird question to ask a computer scientist, but has love for other human beings, for things, for the world around you, played an important role in your life? It's easy for a world-class computer scientist, you could say the same of a physicist, to be lost in the books and lose the connection to other humans.

[01:54:24]

Has love for humans played an important role? I love my kids. I love my wife. I love my parents. I'm probably not different from most people in loving their families and in that being very important in my life. Now, I should remind you that I am a theoretical computer scientist. If you're looking for deep insight about the nature of love, you're probably looking in the wrong place to ask me.

[01:54:59]

But sure, it's been important. But is there something from a computer science perspective to be said about love? Or is that beyond even the realm of consciousness? There was this great cartoon.

[01:55:15]

I think it was one of the classic xkcds, where it shows, like, a heart, and it's squaring the heart, taking the Fourier transform of the heart, integrating the heart, each thing. And then it says: my normal approach is useless here.

[01:55:34]

I'm so glad I asked this question. I think there's no better way to end this. I hope we get a chance to talk again. This was an amazing experiment, to do it outside. Yeah, I'm really glad you made it out. Well, I appreciate it a lot. It's been a pleasure. And I'm glad you were able to come out to Austin.

[01:55:49]

Thanks. Thanks for listening to this conversation with Scott Aaronson, and thank you to our sponsors: Eight Sleep, SimpliSafe, ExpressVPN, and BetterHelp. Please check out these sponsors in the description to get a discount and to support this podcast. If you enjoy this thing, subscribe on YouTube, review it on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter @lexfridman. And now, let me leave you with some words from Scott Aaronson that I also gave to you in the introduction: if you always win, then you're probably doing something wrong.

[01:56:31]

Thank you for listening, and for putting up with the intro and outro in this strange room in the middle of nowhere, in this very strange chaos of a time we're all in. And I very much hope to see you next time, in many more ways than one.