
Transcript

[00:00:00]

The following is a conversation with Eric Weinstein. He's a mathematician, economist, physicist, and the managing director of Thiel Capital. He coined the term, and you could say is the founder of, the Intellectual Dark Web, which is a loosely connected group of public intellectuals that includes Sam Harris, Jordan Peterson, Steven Pinker, Joe Rogan, Michael Shermer, and a few others. This conversation is part of the Artificial Intelligence podcast at MIT and beyond.

[00:00:30]

If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D. And now, here's my conversation with Eric Weinstein. Are you nervous about this? Scared shitless. OK, let's begin, because you mentioned Kung Fu Panda is one of your favorite movies.

[00:01:11]

It has the usual profound master-student dynamic going on. So who has been a teacher that significantly influenced the direction of your thinking and life's work? If you are the panda, who was your shifu? Oh, well, it's interesting, because I didn't see Shifu as being the teacher. Who was the teacher? Master Oogway, the turtle. Oh, the turtle. Right.

[00:01:34]

They only meet twice in the entire film, and the first conversation sort of doesn't count. So the magic of the film, in fact its point, yeah, is that the teaching that really matters is transferred during a single conversation, and it's very brief. So who played that role in my life? I would say either my grandfather, Harry Rubin, and his wife, Sophie Rubin, my grandmother, or Tom Lehrer.

[00:02:11]

Tom Lehrer, yeah. In which way? If you give a child Tom Lehrer records, what you do is you destroy their ability to be taken over by later malware. And it's so irreverent, so witty, so clever, so obscene that it destroys the ability to lead a normal life for many people. So if I meet somebody who's usually

[00:02:40]

really shifted from any kind of neurotypical presentation, I'll often ask them, are you a Tom Lehrer fan? And the odds that they will respond are quite high. Now, Tom Lehrer is "Poisoning Pigeons in the Park" Tom Lehrer? That's very interesting. There are a small number of Tom Lehrer songs that broke into the general population: "Poisoning Pigeons in the Park," "The Elements," perhaps "The Vatican Rag." Hmm. So when you meet somebody who knows those songs but doesn't know... you're judging me right now, aren't you? Harshly.

[00:03:12]

No. But you're Russian, so undoubtedly you know Nikolai Ivanovich Lobachevsky. Yeah. Yeah.

[00:03:18]

So that was a song about plagiarism that was, in fact, plagiarized, which most people don't know, from Danny Kaye, where Danny Kaye did a song called "Stanislavsky of the Moscow Arts."

[00:03:29]

And so Tom Lehrer did this brilliant job of plagiarizing a song and making it about plagiarism, and then making it about this mathematician who worked in non-Euclidean geometry. That was like giving heroin to a child. It was extremely addictive and eventually led me to a lot of different places, one of which may have been a Ph.D. in mathematics. And he was also a lecturer in mathematics, I believe, at Harvard, something like that. I just had dinner with him.

[00:04:00]

In fact, when my son turned 13, we didn't tell him.

[00:04:05]

But his bar mitzvah present was dinner with his hero, Tom Lehrer. And Tom Lehrer was 88 years old, sharp as a tack, irreverent and funny as hell, and just... you know, there are very few people in this world that you have to meet while they're still here, and that was definitely one for our family. So that wit is a reflection of intelligence in some kind of deep way, like that would be a good test of intelligence, whether you're a Tom Lehrer fan. So what do you think that is about wit, about that kind of humor, the ability to see the absurdity in existence?

[00:04:45]

Well, do you think that's connected to intelligence, or are we just two Jews on a mic that appreciate that kind of humor?

[00:04:51]

No, I think that it's absolutely connected to intelligence. So you can see it. There's a place where Tom Lehrer decides that he's going to lampoon Gilbert of Gilbert and Sullivan, and he's going to outdo Gilbert with clever, meaningless wordplay. And he, I forget the quality, he's doing "Clementine" as if Gilbert and Sullivan wrote it, and he says: "That I missed her depressed her young sister named Esther. This pestering sister, a festering blister, you're best to resist her, say I. The mister

[00:05:21]

resisted, the sister persisted, I kissed her, all loyalty slipped. When she said I could have her, her sister's cadaver must surely have turned in its crypt." It's a script that's so dense, it's so insane. Yeah. That's clearly intelligence, because it's hard to construct something like that.

[00:05:36]

If I look at my favorite Tom Lehrer lyric, you know, there's a perfectly absurd one, which is: "Once all the Germans were warlike and mean, but that couldn't happen again. We taught them a lesson in 1918, and they've hardly bothered us since then." Right. That is a different kind of intelligence.

[00:05:53]

You know, you're taking something that is so horrific and you're sort of making it palatable and funny, and demonstrating also just your humanity.

[00:06:04]

I mean, I think the thing that came through as Tom Lehrer wrote all of these terrible, horrible lines was just what a sensitive and beautiful soul he was, who was channeling pain through humor and through grace. I've seen throughout Europe, throughout Russia, that same kind of humor emerge from the generation of World War II. It seemed like that humor is required to somehow deal with the pain and the suffering that the war created.

[00:06:32]

Well, you do need the environment to create the broad Slavic soul. I don't think that many Americans really appreciate Russian humor: how you had to joke during the time of, let's say, Article 58 under Stalin. You had to be very, very careful. You know, the concept of a Russian satirical magazine like Krokodil doesn't make sense. So you have this cross-cultural problem that

[00:07:02]

there are certain areas of human experience that it would be better to know nothing about, and quite unfortunately, Eastern Europe knows a great deal about them, which makes, you know, the songs of Vladimir Vysotsky so potent, the prose of Pushkin, whatever it is. You have to appreciate the depth of the Eastern European experience. And I would think that perhaps Americans knew something like this around the time of the Civil War, or maybe, you know, under slavery and Jim Crow, or even the harsh tyranny of the coal and steel employers during the labor wars.

[00:07:45]

But in general, I would say it's hard for us to understand and imagine the collective culture unless we have the system of selective pressures that, for example, Russians were subjected to.

[00:07:59]

So if there is one good thing that comes out of war, it's literature, art, and humor. Music? Oh, I don't think so.

[00:08:09]

I think almost everything is good about war except for death and destruction. Right. Without the death it would bring.

[00:08:17]

And the romance of it, the whole thing is nice. Well, this is why we're always caught up in war.

[00:08:22]

And we have this very ambiguous relationship to it, in that it makes life real and pressing and meaningful,

[00:08:30]

at an unacceptable price, and the price has never been higher. So, jumping into A.I. a little bit: in one of the conversations you had, or one of the videos, you described that one of the things A.I. systems can't do, and biological systems can, is self-replicate in the physical world.

[00:08:51]

No, no, no, in the physical world. Well, yes, physical robots can self-replicate, but this is a very tricky point, which is that the only thing we've been able to create that's really complex, that has an analogue of our reproductive system, is software. But nevertheless, software replicates itself.

[00:09:18]

So if we're speaking strictly of replication in this kind of digital space, just to begin, let me ask a question: do you see a protective barrier or a gap between the physical world and the digital world?

[00:09:31]

Let's not call it digital. Let's call it the logical world versus the physical world. Why logical? Well, because even though we had, let's say, Einstein's brain preserved, it was meaningless to us as a physical object, because we couldn't do anything with what was stored in it at a logical level. And so the idea that something may be stored logically and that it may be stored physically are not necessarily... we don't always benefit from identifying the two. I'm not suggesting that there isn't a material basis to the logical world, but that it does warrant identification as a separate layer that need not invoke logic gates and zeros and ones.

[00:10:16]

And so connecting those two worlds, the logical world and the physical world, or maybe just connecting the logical world inside our brain to his brain: you mentioned the idea of

[00:10:29]

artificial out-telligence. Artificial out-telligence, yes. This is the only essay that John Brockman ever invited me to write that he refused to publish on Edge.

[00:10:42]

Why?

[00:10:43]

Well, maybe it wasn't well-written. I don't know.

[00:10:47]

The idea is quite compelling, quite unique and new, at least from my standpoint. Maybe you can explain it? Sure.

[00:10:56]

What I was thinking about is why it is that we're waiting to be terrified by artificial general intelligence, when in fact artificial life is terrifying in and of itself, and it's already here. In order to have a system of selective pressures, you need three distinct elements: you need variation within a population, you need heritability, and you need differential success. So what's really unique, and I've made this point, I think, elsewhere, about software is: if you think about what humans know how to build, that's impressive.

[00:11:36]

So I always take a car and I say, does it have an analogue of each of the physiological systems? Does it have a skeletal structure? That's its frame. Does it have a neurological structure? It has an onboard computer. Does it have a digestive system? The one thing it doesn't have is a reproductive system. But if you can call spawn on a process, effectively you do have a reproductive system. And that means that you can create something with variation, heritability, and differential success.

[00:12:10]

Now, the next step in the chain of thinking was where do we see inanimate?

[00:12:17]

not intelligent life, outwitting intelligent life? And I have two favorite systems, and I try to stay on them so that we don't get distracted. One of which is the Ophrys orchid, subspecies or clade, I don't know what to call it. A type of flower? Yeah, it's a type of flower that mimics the female of a pollinator species in order to dupe the males into engaging in what is called pseudocopulation with the fake female, which is usually represented by the lowest petal.

[00:12:50]

And there's also a pheromone component, fooling the males into thinking they have an opportunity. But the flower doesn't have to give up any energy in the form of nectar as a lure, because it's tricking the males. The other system is a particular species of mussel, Lampsilis, in the clear streams of Missouri, and it fools bass into biting a fleshy lip that contains its young.

[00:13:18]

And when the bass see this fleshy lip, which looks exactly like a species of fish that the bass like to eat, the young explode and clamp onto the gills and parasitize the bass, and also use the bass to redistribute them as they eventually release. In both of these systems,

[00:13:37]

you have a highly intelligent dupe being fooled by a lower life form. And what is sculpting these convincing lures? It's the intelligence of previously duped targets for these strategies. So when the target is smart enough to avoid the strategy, those weaker mimics fall off; their lines are terminated, and only the better ones survive. So it's an arms race between the target species that is being parasitized, getting smarter, and this other less intelligent or not intelligent object, getting as if smarter.

[00:14:25]

And so what you see is that artificial general intelligence is not needed to parasitize us. It's simply sufficient for us to outwit ourselves. So you could have a program, let's say, you know, one of these Nigerian scams that writes letters, and it uses whoever sends it Bitcoin to figure out which aspects of the program should be kept, which should be varied, and which thrown away. And you don't need it to be in any way intelligent in order to have a really nightmarish scenario of being parasitized by something that has no idea what it's doing.
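The recipe described here, keep what gets responses and vary the rest, is exactly the skeleton of an evolutionary algorithm. Below is a minimal toy sketch of that loop; everything in it is invented for illustration (the target string stands in for "whatever the environment rewards," and the fitness function, mutation rate, and population size are arbitrary choices, not anything drawn from real software):

```python
import random

TARGET = "send bitcoin"  # stand-in environment that scores variants


def fitness(s):
    # Differential success: variants closer to the target score higher.
    return sum(a == b for a, b in zip(s, TARGET))


def mutate(s, rate=0.1):
    # Variation: each character may be randomly replaced.
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    return "".join(random.choice(alphabet) if random.random() < rate else c
                   for c in s)


def evolve(pop_size=200, generations=300):
    # Start from a random population of strings the same length as TARGET.
    pop = ["".join(random.choice("abcdefghijklmnopqrstuvwxyz ")
                   for _ in TARGET) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 5]  # selection on differential success
        # Heritability: offspring are mutated copies of survivors.
        pop = [mutate(random.choice(survivors)) for _ in range(pop_size)]
        pop[: len(survivors)] = survivors  # elitism keeps the current best
        if fitness(pop[0]) == len(TARGET):
            break
    return max(pop, key=fitness)


random.seed(0)
print(evolve())
```

Because elitism keeps the best survivors each generation, fitness can only improve, and the loop converges on the target. Note that nothing in the loop understands anything: the three ingredients alone (variation via `mutate`, heritability via copying survivors, differential success via `fitness`) do all the work.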

[00:15:03]

So you've raised a few concepts really eloquently, so let me try to ask about a few directions this goes. So, the way we write software today, it's not common that we allow it to self-modify, but we do have that ability now.

[00:15:19]

We have the ability. It's just not common.

[00:15:22]

So your thought is that that is a serious worry, if self-modifying code becomes common? It's available now.

[00:15:35]

So there are different types of self modification. Right. There is personalization. You know, your email app, your Gmail is self modifying to you after you log in or whatever.

[00:15:48]

You can think of it that way, but ultimately all the information is centralized. You're thinking of systems where it's completely decentralized: a unique entity operating under selective pressures, and it changes.

[00:16:03]

Well, just think about the fact that our immune systems don't know what's coming at them next, but they have a small set of spanning components. And if it's a sufficiently expressive system, in that any shape or binding region can be approximated with the Lego that is present, then you can have confidence that you don't need to know what's coming at you, because the combinatorics are sufficient to reach any configuration needed. So that's a beautiful thing.
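The combinatorial point can be made concrete with rough arithmetic. The segment counts below are approximate textbook figures for human antibody gene segments; sources vary, and the numbers are illustrative only, before junctional diversity is even counted:

```python
# Rough arithmetic behind the "small spanning set" idea, using
# approximate textbook counts of human antibody gene segments.
# Exact counts vary by source; these are illustrative.

heavy = 65 * 27 * 6       # V x D x J segment choices for the heavy chain
light = 40 * 5            # V x J segment choices for a light chain
pairings = heavy * light  # independent heavy/light chain pairing

print(f"heavy-chain combinations: {heavy:,}")
print(f"repertoire from combinatorial pairing alone: {pairings:,}")
# Junctional diversity (random nucleotide insertions at the joints)
# multiplies this by several more orders of magnitude.
```

A few hundred inherited segments thus span millions of distinct binding regions, which is the sense in which a small component set can cover threats the system has never seen.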

[00:16:42]

Well, a terrifying thing to worry about, because it's so within our reach. Whenever I suggest these things, I do always have a concern as to whether or not I will bring them into being by talking about them. So there's this thing from OpenAI... next week I talk to the founder of OpenAI. Their next generation, the new stuff they have for generating text: they didn't want to release it because they're worried about the...

[00:17:14]

I'm delighted to hear that, but they're going to end up releasing it. Yes.

[00:17:17]

So that's the thing. I think, talking about it... well, at least from my end, I'm more a proponent of technology preventing the detrimental effects of technology, so further innovation preventing the detrimental effects of innovation. Well, we're sort of tumbling down a hill at accelerating speed, so whether or not we're proponents may not matter. But I do feel that there are people who have held things back and, you know, died poorer than they might have otherwise been.

[00:17:52]

And we don't even know their names.

[00:17:54]

I don't think that we should discount the idea that having the smartest people showing off how smart they are by what they've developed may be a terminal process. I'm very mindful in particular of a beautiful letter that Edward Teller, of all people, wrote to Leo Szilard, where Szilard was trying to figure out how to control the use of atomic weaponry at the end of World War II. And Teller, rather strangely, because many of us view him as a monster, showed some very advanced moral thinking, talking about the slim chance we have for survival and that the only hope is to make war unthinkable.

[00:18:36]

I do think that not enough of us feel in our gut what it is we are playing with when we are working on technical problems.

[00:18:43]

And I would recommend to anyone who hasn't seen it a movie called The Bridge on the River Kwai, about, I believe, captured British POWs who, just in a desire to build a bridge well, end up over-collaborating with their Japanese captors.

[00:18:59]

Well, now you're making me question the unrestricted open discussion of ideas. I'm not saying I know the answer. I'm just saying that I could make a decent case for either our need to talk about this and to become technologically focused on containing it, or our need to stop talking about this and hope that the relatively small number of highly adept individuals who are looking at these problems is small enough that we needn't, in fact, be talking about how to contain them.

[00:19:30]

Well, the way innovation happens, the way new ideas develop... Newton with calculus: whether, if he had been silent, the idea would have emerged elsewhere. Well, in the case of Newton, of course.

[00:19:44]

But, you know, in the case of A.I., how small is the set of individuals out of which such ideas would arise?

[00:19:54]

Well, you know, the researchers we know, and those that we don't know, who may live in countries that don't wish us to know what level they're currently at, are very disciplined in keeping these things to themselves.

[00:20:07]

Of course, I will point out that there was a religious school in Kerala that developed something very close to the calculus, certainly in terms of infinite series, expressed in, I guess, religious verse and prose.

[00:20:27]

So, you know, it's not that Newton had any ability to hold that back, and I don't really believe that we have the ability to hold it back either.

[00:20:34]

I do think that we could change the proportion of the time we spend worrying about the effects of what happens if we are successful, rather than simply trying to succeed and hoping that we'll be able to contain things later.

[00:20:46]

So, on the idea of artificial intelligence: what form, treading cautiously, as we've agreed, as we tumble down the hill...

[00:20:54]

What else can we...

[00:20:56]

We can't. Uh, what form do you see it taking? So, one example: Facebook, Google want to, I don't know a better word, you want to influence users to behave a certain way. So that's one kind of example of how intelligent systems perhaps modify the behavior of intelligent human beings in order to sell more product of different kinds.

[00:21:25]

But do you see other examples of this actually emerging? Just take any parasitic system.

[00:21:32]

You know, make sure that there's some way in which there's differential success, heritability, and variation. Those are the magic ingredients. And if you really wanted to build the nightmare machine, make sure that the system that expresses the variability has a spanning set, so that it can learn to arbitrary levels by making it sufficiently expressive.

[00:21:58]

That's the nightmare. So it's your nightmare, but it could also be... it's a really powerful mechanism by which to create, well, powerful systems.

[00:22:08]

So are you more worried about the negative direction that might go versus the positive? So you said parasitic, but that doesn't necessarily need to be what the system converges towards.

[00:22:21]

It could be... what does it matter?

[00:22:24]

It isn't. The dividing line between parasitism and symbiosis is not so clear.

[00:22:30]

That's what they tell me about marriage. I'm still single, so I wouldn't know.

[00:22:34]

Well, yeah, we could go into that too. But no, I think we have to appreciate, you know: are you infected by your own mitochondria, right? Right. Yeah.

[00:22:52]

So, you know, in marriage you fear the loss of independence. But even though the American therapeutic community may be very concerned about codependence, what's to say that codependence isn't what's necessary to have a stable relationship in which to raise children who are maximally K-selected and require incredible amounts of care? Because you have to wait 13 years before there's any reproductive power, and most of us don't want our 13-year-olds having kids. That's a very tricky situation to analyze.

[00:23:21]

I would say that.

[00:23:24]

Predators and parasites drive much of our evolution, and I don't know whether to be angry at them or thank them. Well, ultimately, I mean, nobody knows the meaning of life or what even happiness is... They don't. That's why all the poetry and books are about it. ...but there are some metrics under which you can kind of measure how good it is that these systems are roaming about.

[00:23:51]

So you're more nervous about software than you are optimistic about these ideas of self-replicating, learning systems?

[00:24:01]

I don't think we've really felt where we are.

[00:24:07]

You know, occasionally we get a wake-up call. 9/11 was so anomalous compared to everything else we've experienced on American soil that it came to us as a complete shock that that was even a possibility. What it really was, was a highly creative and determined R&D team deep in the bowels of Afghanistan showing us that we had certain exploits that we were open to that nobody had chosen to express. I can think of several of these things that I don't talk about publicly that just seem to have to do with how

[00:24:45]

relatively unimaginative those who wish to cause havoc and destruction have been up until now. But the great mystery of our time, of this particular little era, is how remarkably stable we've been since 1945, when we demonstrated the ability to use nuclear weapons in anger. And we don't know why things like that haven't happened since then. We've had several close calls, we've had mistakes, we've had brinksmanship. And what's now happened is that we've settled into a sense that, oh, it'll always be nothing.

[00:25:27]

It's been so long since something was at that level.

[00:25:33]

of danger that we've got a wrong idea in our heads. And that's why, when I went on the Ben Shapiro show, I talked about the need to resume above-ground testing of nuclear devices, because we have people whose developmental experience suggests that when, let's say, Donald Trump and North Korea engage on Twitter: oh, it's nothing, it's just posturing, everybody's just in it for money. There's a sense that people are in a video game mode, which has been the right call since 1945.

[00:26:06]

We've been mostly in video game mode. It's amazing.

[00:26:09]

So you're worried about a generation which has not seen any existential threat?

[00:26:13]

We've lived under it. You see, you're younger. I don't know if...

[00:26:18]

And again, you came from Moscow. Yeah. There was a TV show called The Day After. It had a huge effect on a generation growing up in the U.S., and it talked about what life would be like after a nuclear exchange. We have not gone through an embodied experience collectively where we've thought about this. And I think it's one of the most irresponsible things that the elders among us have done, which is to provide this beautiful garden in which

[00:26:56]

the thorns are cut off of the rose bushes and all of the edges are rounded and sanded. And so people have developed this totally unreal idea, which is: everything's going to be just fine.

[00:27:10]

And do I think that my leading concern is AGI, or a thermonuclear exchange, or gene drives, or any one of these things?

[00:27:20]

I don't know. But I know that our time here, in this very long experiment, is finite, because the toys that we've built are so impressive and the wisdom to accompany them has not materialized. And I think we actually got a wisdom uptick since 1945. We had a lot of dangerous, skilled players on the world stage who, nevertheless, no matter how bad they were, managed to not embroil us in something that we couldn't come back from. The Cold War. Yeah, and the distance from the Cold War.

[00:27:59]

You know, I'm very mindful of... there was a Russian tradition, actually, of, on your wedding day, going to visit a memorial to those who gave their lives. Can you imagine this? Where, on the happiest day of your life, you go and you pay homage to the people who fought and died in the Battle of Stalingrad?

[00:28:25]

I'm not a huge fan of communism, I've got to say, but there were a couple of things that the Russians did that were really positive in the Soviet era. And I think trying to let people know how serious life actually is, the Russian model of seriousness, is better than the American model.

[00:28:45]

And maybe, like you mentioned, there was a small echo of that after 9/11, but we wouldn't let it last. We talk about 9/11, but it's 9/12 that really moved the needle, when we were all just there and nobody wanted to speak. We witnessed something super serious, and we didn't want to run to our computers and blast out our deep thoughts and our feelings. And it was profound, because we woke up briefly. You know, I talk about the gated institutional narrative that sort of programs our lives; I've seen it break three times in my life, one of which was the election of Donald Trump.

[00:29:31]

Another time was the fall of Lehman Brothers, when everybody who knew that Bear Stearns wasn't that important knew that Lehman Brothers meant AIG was next. And the other one was 9/11. So if I'm 53 years old and I only remember three times that the global narrative was really interrupted, that tells you how much we've been on top of developing events. You know, I mean, we had the Murrah Federal Building explosion, but it didn't cause the narrative to break. It wasn't profound enough.

[00:30:05]

Around 9/12, we started to wake up out of our slumber, and the powers that be did not want a coming-together. You know, the admonition was: go shopping. And the powers that be, what is that force? As opposed to blaming individuals. We don't know. So whatever that force is, there's a component of it that's emergent and there's a component of it that's deliberate. So give yourself a portfolio with two components.

[00:30:35]

Some amount of it is emergent, but some amount of it is also an understanding that if people come together, they become an incredible force. And what you're seeing right now, I think, is that there are forces that are trying to come together and there are forces that are trying to push things apart. And, you know, one of them is the globalist narrative versus the national narrative, where to the globalist perspective, the nations are bad things, in essence: they're temporary, they're nationalistic, they're jingoistic.

[00:31:10]

It's all negative. To people more in the national idiom, they're saying: look, this is where I pay my taxes. This is where I do my army service. This is where I have a vote. This is where I have a passport. Who the hell are you to tell me that, because you've moved into some place where you can make money globally, you've chosen to abandon other people to whom you have a special and elevated duty?

[00:31:33]

And I think that these competing narratives have been pushing towards the global perspective from the elite, while a larger and larger number of disenfranchised people are saying, hey, I actually live in a place, and I have laws, and I speak a language, and I have a culture. Who are you to tell me that, because you can profit in some faraway land, my obligations to my fellow countrymen are so much diminished? So these tensions between nations and so on: ultimately, do you see being proud of your country, and so on, which potentially creates the kind of thing that led to wars,

[00:32:10]

as ultimately human nature? And is it good for us, as wake-up calls of different kinds?

[00:32:15]

Well, I think that these are tensions, and my point is: I mean, nationalism run amok is a nightmare, and internationalism run amok is a nightmare. And the problem is, we're trying to push these pendulums to some place where they're somewhat balanced, where we have a higher duty of care to those who share our laws and our citizenship, but we don't forget our duties of care to the global system. I would think this is elementary, but the problem that we're facing

[00:32:52]

concerns the ability for some to profit by abandoning their obligations to others within their system. And that's what we've had for decades.

[00:33:05]

You mentioned nuclear weapons. I was hoping to get answers from you, since one of the many things you've done is economics; maybe you can understand human behavior, why the heck we haven't blown each other up yet. But OK, so we don't know the answer? Yes. It's really important to say that we really don't know.

[00:33:24]

And a mild uptick in wisdom, a mild uptick in wisdom.

[00:33:27]

Steven Pinker, who I've talked with, has a lot of really good ideas about why, but I don't trust his optimism.

[00:33:38]

Listen, I'm Russian, so I never trust a guy who is that optimistic. No, no, no, it's just that you're talking about a guy who's looking at a system in which more and more of the kinetic energy, like war, has been turned into potential energy, like unused nuclear weapons. Beautifully put. And, you know, now I'm looking at that system and I'm saying, OK, well, if you don't have a potential energy term, then everything's just getting better and better.

[00:34:02]

Yeah. Wow, that's beautiful.

[00:34:05]

But only if it's OK.

[00:34:07]

Uh, not a physicist.

[00:34:09]

Well, is that a dirty word? No, no.

[00:34:12]

I wish I were a physicist. Uh, me too. My dad's a physicist. I'm trying to live up to that, probably for the rest of my life. He's probably going to listen to this, too. So he did. Yeah.

[00:34:24]

So your friend Sam Harris worries a lot about the existential threat of A.I., not in the way that you've described, but in the more... well, he hangs out with Elon, I don't know.

[00:34:37]

So are you worried about that kind of, you know, about either robotic systems or, you know, traditionally defined A.I. systems essentially becoming superintelligent, much more intelligent than human beings, and taking over? They already are.

[00:34:57]

And they're not... When seen as a collective, you mean? Well, I mean, I can mean all sorts of things, but certainly many of the things that we thought were peculiar to general intelligence do not require general intelligence. So that's been one of the big awakenings: that you can write a pretty convincing sports story from stats alone, without needing to have watched the game. So, you know, is it possible to write lively prose about politics?

[00:35:30]

Yeah, not yet. So we're sort of all over the map. One of the things about chess: there's a question I once asked on Quora that didn't get a lot of response, which was, what is the greatest brilliancy ever produced by a computer in a chess game? Which is different from the question of what is the greatest game ever played. So if you think about brilliancies, this is what really animates many of us to think of chess as an art form.

[00:35:57]

Mm hmm. Those moves and combinations that just show such flair, panache, and soul. Computers weren't really great at that. They were great positional monsters. And, you know, recently we've started seeing brilliancies. Grandmasters have identified moves by AlphaZero that were quite brilliant. Yeah. So that's an example of something we don't think is AGI, but in a very restricted set of rules like chess, you're starting to see poetry of a high order.

[00:36:32]

And so I don't like the idea that we're waiting for AGI. AGI is sort of slowly infiltrating our lives, in the same way that I don't think a worm, you know, C. elegans, should be treated as non-conscious just because it only has 300 neurons. Maybe it just has a very low level of consciousness, because we don't understand what these things mean as they scale up. So am I worried about this general phenomenon? Sure. But I think that one of the things that's happening is that a lot of us are fretting about this in part because of human needs.

[00:37:10]

We've always been worried about the golem, right? Well, the golem is the artificially created life, you know, it's like Frankenstein. A Jewish version of Frankenstein. Frankenstein, yeah, that makes sense. All right. So we've always been worried about creating something like this, and it's getting closer and closer. And there are ways in which we have to realize that the whole thing that we've experienced, the context of our lives, is almost certainly coming to an end.

[00:37:49]

And I don't mean to suggest that we won't survive. I don't know. And I don't mean to suggest that it's coming tomorrow could be 300, 500 years.

[00:38:00]

But there's no plan that I'm aware of. We have three rocks that we could possibly inhabit that are sensible within current technological dreams: the Earth, the Moon and Mars. And we have a very competitive civilization that is still forced into violence to sort out disputes that cannot be arbitrated. It is not clear to me that we have a long-term future until we get to the next stage, which is to figure out whether or not the Einsteinian speed limit can be broken.

[00:38:34]

And that requires our source code. Our source code, the stuff in our brains? To figure out... what do you mean by our source code? The source code of the context: whatever it is that produces the quarks, the electrons, the neutrinos. Oh, our source code. I got it.

[00:38:50]

So we're the stuff that's written in a higher-level language. Yeah, that's right. And you're talking about the low-level bits. Right.

[00:38:58]

That's what is currently keeping us here. We can't even imagine... You know, we have harebrained schemes for staying within the Einsteinian speed limit. You know, maybe if we could just drug ourselves and go into a suspended state, or we could have multiple generations... I think all that stuff is pretty silly. But I think it's also pretty silly to imagine that our wisdom is going to increase to the point that we can have the toys we have and we're not going to use them for 500 years.

[00:39:30]

Speaking of Einstein, I had a profound breakthrough when I realized I'm just one letter away from the guy. Yeah, but I'm also one letter away from Feinstein. Well, you get to pick. OK, so unified theory. You've worked... you enjoy the beauty of geometry. Well, I don't actually know if you enjoy it. You certainly are quite good at it. Tremble before it. If you're religious, that is... you don't have to be religious.

[00:39:58]

It's just so beautiful. You will tremble.

[00:40:00]

Anyway, I just read Einstein's biography, and one of the things you've done is try to explore a unified theory.

[00:40:11]

You're talking about a 14-dimensional observerse that has the four-dimensional space-time continuum embedded in it.

[00:40:19]

I'm just curious how you think, philosophically, at a high level, about something more than four dimensions. What does it make you feel, talking in the mathematical world about dimensions that are greater than the ones we can perceive? Is there something that you take away that's more than just the math?

[00:40:43]

Well, first of all, stick out your tongue at me. OK. Now, on the front of that, yeah, there was a sweet receptor, and next to that were salt receptors on two different sides. A little bit farther back there were sour receptors, and you wouldn't show me the back of your tongue, where your bitter receptor was, hiding the good side always. OK, but that was four dimensions of taste receptors. But you also had pain receptors on that tongue, and probably heat receptors on that tongue.

[00:41:17]

So let's get one of each. That would be six dimensions. So when you eat something, you eat a slice of pizza and it's got some hot pepper on it, maybe some pineapple, you're having a six-dimensional experience. Do you think we overemphasize the value of time as one of the dimensions, or space?
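The receptor count he walks through can be written down literally as a coordinate system. A toy sketch, with the six axes taken from the conversation and the intensity values made up:

```python
# Six axes from the conversation: four tastes plus pain and heat receptors.
DIMENSIONS = ["sweet", "salt", "sour", "bitter", "pain", "heat"]

def bite(**intensities):
    """A mouthful as a point in six-dimensional receptor space (0.0-1.0 per axis)."""
    return [float(intensities.get(d, 0.0)) for d in DIMENSIONS]

# A slice of pizza with hot pepper and pineapple touches most axes at once.
pizza = bite(sweet=0.4, salt=0.7, sour=0.2, bitter=0.1, pain=0.6, heat=0.8)
print(pizza)  # → [0.4, 0.7, 0.2, 0.1, 0.6, 0.8]
```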

[00:41:41]

Well, we certainly overemphasize the value of time, because we like things to start and we really don't like things to end, but they seem to. Well, what if you flipped one of the spatial dimensions into being a temporal dimension? And you and I were to meet in New York City and say, well, where and when should we meet? And I say, how about I'll meet you on 36th and Lexington at 2:00 in the afternoon and 11 o'clock in the morning?

[00:42:09]

That would be very confusing. Well, so it's convenient for us to think about time... Well, we happen to be in a delicious situation in which we have three dimensions of space and one of time, and they're woven together in this sort of strange fabric where we can trade off a little space for a little time, but we still only have one dimension that is picked out relative to the other three. It's very much Gladys Knight and the Pips.
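The weave he describes, trading a little space for a little time with one dimension still picked out, is the standard Minkowski line element of special relativity (textbook material, not anything specific to Weinstein's own program):

```latex
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
```

The signature $(-,+,+,+)$ is exactly the "one picked out relative to the other three": time enters with the opposite sign, which is what allows the space-for-time trade-off while keeping the time direction distinguishable.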

[00:42:32]

So which came first? Did we develop for these dimensions, or were they always there? Would you imagine that there isn't a place where there are four temporal dimensions, or two of space and two of time, or three of time and one of space? And then would time not be playing the role of space? Why do you imagine that the sector that you're in is all that there is? I certainly do not, but I can't imagine otherwise.

[00:42:57]

I mean, I haven't done ayahuasca or any of those drugs. I hope to one day.

[00:43:02]

But instead of doing ayahuasca, you could just head over to Building 2. That's where the mathematicians are. That's where they hang, just to look at some geometry.

[00:43:10]

Well, just to ask about the sort of geometry that you're interested in.

[00:43:14]

OK, or you could talk to a shaman and end up in Peru and then spend extra money, but you won't be able to do any calculations if that's how you choose to go about it. Well, a different kind of calculation.

[00:43:25]

So, yeah, one of my favorite people, Edward Frenkel, Berkeley professor, author of Love and Math, great title for a book, said that you're quite a remarkable intellect to come up with such beautiful, original ideas.

[00:43:40]

In terms of the unified theory and so on. But you are working outside academia. So one question is: in developing ideas that are truly original, truly interesting, what's the difference between inside academia and outside academia? Oh, it's a terrible choice. Terrible choice. If you do it inside of academics, you are forced to constantly show great loyalty to the consensus, and you distinguish yourself with small, almost microscopic heresies to make your reputation, in general.

[00:44:23]

And you have very competent people, brilliant people, who are working together, who form very deep social networks and have a very high level of behavior, at least within mathematics, and at least technically within physics, theoretical physics. When you go outside, you meet lunatics and crazy people, madmen. And these are people who do not usually subscribe to the consensus position and almost always lose their way. And the key question is: will progress likely come from someone who has miraculously managed to stay within the system and is able to take on a larger amount of heresy?

[00:45:15]

That is sort of unthinkable, in which case that will be fascinating. Or is it more likely that somebody will maintain a level of discipline from outside of academics, and be able to make use of the freedom that comes from not having to constantly affirm your loyalty to the consensus of your field?

[00:45:38]

So you've characterized academia, in this particular sense, as declining.

[00:45:45]

You posted the plot.

[00:45:46]

The older population of the faculty is getting larger, the younger is getting smaller and so on.

[00:45:53]

So which direction of the two are you more hopeful about?

[00:45:57]

Well, the baby boomers can't hang on forever. Well, first of all, in general, true. And second of all, in academia. But that's really what I think this time is about: the baby boomers.

[00:46:07]

We're used to financial bubbles that last a few years in length and then pop. The baby boomer bubble is this really long-lived thing.

[00:46:18]

And all of the ideology, all of the behavior patterns, the norms, for example, string theory is an almost entirely baby boomer phenomenon. It was something that baby boomers were able to do because it required a very high level of mathematical ability.

[00:46:36]

So you don't think of string theory as an original idea? Oh, I mean, it was original to Veneziano, who probably is older than the baby boomers. And there are people who are younger than the baby boomers who are still doing string theory. And I'm not saying that anything discovered within the larger string-theoretic complex is wrong. Quite the contrary. A lot of brilliant mathematics and a lot of the structure of physics was elucidated by string theorists.

[00:47:03]

What do I think of the deliverable nature of this product that will not ship, called string theory? I think that it is largely an affirmative action program for highly mathematically and geometrically talented baby boomer physicists, so that they can say that they're working on something within the constraints of what they will say is quantum gravity. Now, there are other schemes. You know, there's asymptotic safety; there are other things that you could imagine doing. I don't think much of any of the major programs.

[00:47:37]

But to have inflicted this level of loyalty through a shibboleth: well, surely you don't question X? Why, I question almost everything in the string program, and that's why I got out of physics. When you called me a physicist, it was a great honor. But the reason I didn't become a physicist wasn't that I fell in love with mathematics. As I said, well, in 1983, 1984, I saw the field going mad, and I saw that mathematics, which has all sorts of problems, was not going insane.

[00:48:09]

And so instead of studying things within physics, I thought it was much safer to study the same objects within mathematics. And there's a huge price to pay for that: you lose physical intuition. But the point is that it wasn't a North Korean reeducation camp either.

[00:48:24]

Are you hopeful about cracking open the Einstein unified theory, really understanding how to unite everything together with quantum theory and so on?

[00:48:38]

I mean, I'm trying to play this role myself, to do it, well, to the extent of handing it over to the more responsible, more professional, more competent community. So I think that they're wrong about a great number of their belief structures. But I do believe... I mean, I have a really profound love-hate relationship with this group of people. On the physics side? Oh, yeah.

[00:49:05]

Because the mathematicians actually seem to be much more open-minded. Well, they are.

[00:49:10]

And they are open-minded about anything that looks like great math. Right. They'll study something that isn't very important physics, but if it's beautiful mathematics, then they have great intuition about these things. As good as the mathematicians are...

[00:49:24]

And might even, at some level of intellectual horsepower, give them the edge.

[00:49:29]

The theoretical physics community is, bar none, the most profound intellectual community that we have ever created. It is the number one. There is nobody in second place, as far as I'm concerned. Look, in their spare time, in their spare time, they invented molecular biology. What was the origin of molecular biology? You're saying somebody like Francis Crick? I mean, a lot of the early molecular biologists were physicists.

[00:49:56]

Yeah. I mean, you know, Schrodinger wrote What is life?

[00:49:59]

That was highly inspirational. I mean, you have to appreciate that.

[00:50:05]

There is no community like the basic research community in theoretical physics. And it's not... I'm highly critical of these guys. I think that they've just wasted decades of time with a near-religious devotion to their conceptualization of where the problems were in physics. But this has been the greatest intellectual collapse ever witnessed within academics. You see it as a collapse or just a lull? Oh, I'm terrified that we're about to lose the vitality.

[00:50:42]

We can't afford to pay these people. We can't afford to give them an accelerator just to play with in case they find something at the next energy level. These people created our economy. They gave us the rad lab and radar. They gave us two atomic devices to end World War Two. They created the semiconductor and the transistor to power our economy through Moore's law. As a positive externality of particle accelerators, they created the World Wide Web.

[00:51:13]

And we have the insolence to say, why should we fund you with our taxpayer dollars? Now, the question is: are you enjoying your physics dollars? Right. These guys signed the world's worst licensing agreement.

[00:51:28]

Right.

[00:51:29]

And if they simply charged for every time you used a transistor or a URL, or enjoyed the peace that they have provided during this period of time through the terrible weapons that they developed, or your communications devices... All of the things that power our economy, I really think, came out of physics, even to the extent that chemistry came out of physics and molecular biology came out of physics.

[00:51:54]

So first of all, you have to know that I'm very critical of this community. Second of all, it is our most important community. We have neglected it, we've abused it, we don't take it seriously, we don't even care to get them to rehab after a couple of generations of failure. Number one, I think the youngest person to have really contributed to the Standard Model at a theoretical level was born in 1951.

[00:52:21]

That's right, Frank Wilczek. And almost nothing has happened in theoretical physics after 1973, '74 that has sent somebody to Stockholm for a theoretical development that predicted experiment. So we have to understand that we are doing this to ourselves. Now, with that said, these guys have behaved abysmally, in my opinion, because they haven't owned up to where they actually are, what problems they're really facing, how definite they can actually be. They haven't shared some of their most brilliant discoveries, which are desperately needed in other fields, like gauge theory, which at least the mathematicians can share, and which is an upgrade of the differential calculus of Newton and Leibniz.

[00:53:05]

And they haven't shared the importance of renormalization theory, even though this should be standard operating procedure for people across the sciences, dealing with different layers and different levels of phenomena.

[00:53:17]

And by share, do you mean communicate it in such a way that it disseminates throughout the different sciences?

[00:53:23]

These guys, both theoretical physicists and mathematicians, are sitting on top of a giant stockpile of intellectual gold. All right. They have so many things that have not been manifested anywhere. I was just on Twitter, I think I mentioned the Hoberman switch pitch that shows the self-duality of the tetrahedron, realized as a linkage mechanism. Now, this is like a triviality, and it makes an amazing toy that's made, hopefully, a fortune for Chuck Hoberman.

[00:53:55]

Well, you have no idea how much great stuff these priests have in their monastery. So it's truly a love-and-hate relationship for you.

[00:54:04]

Yeah, well, it sounds like it's more on the love... This building that we're in right here? This is the building in which I really put together the conspiracy between the National Academy of Sciences and the National Science Foundation, through the Government-University-Industry Research Roundtable, to destroy the bargaining power of American academics using foreign labor. With online graffiti in the basement? Oh, yeah, that was done here in this building. That's weird. So I'm truly speaking with a revolutionary and a radical.

[00:54:34]

No, no, no, no. At an intellectual level, I am absolutely garden variety. I'm just straight down the middle. The system that we are in, this university, is functionally insane. Yeah. Harvard is functionally insane. And we don't understand that when we get these things wrong. The financial crisis made this very clear. There was a long period where every grown-up, everybody with a tie who spoke in baritone tones with the right degree at the end of their name, was talking about how they'd banished volatility.

[00:55:16]

We're in the Great Moderation. OK? They were all crazy. And who was right? It was like Nassim Taleb, Nouriel Roubini. Now, what happens is that they claimed the market went crazy. But the market didn't go crazy. The market had been crazy, and what happened is that it suddenly went sane. Well, that's where we are with academics. Academics right now is mad as a hatter, and it's absolutely evident.

[00:55:42]

I can show you graph after graph, I can show you the internal discussions, I can show you the conspiracies. Harvard's dealing with one right now over its admissions policies for people of color who happen to come from Asia. All of this madness is necessary to keep the game going. What we're talking about, just while we're on the topic of revolutionaries, is the danger of an outbreak of sanity. Yeah, you're the guy pointing out the elephant in the room, and the elephant has no clothes.

[00:56:14]

So how does that go? I was going to talk a little bit to Joe Rogan about this; we ran out of time.

[00:56:24]

Well, just listening to you, you could probably speak really eloquently about academia, on the difference between the different fields.

[00:56:33]

So do you think there's a difference between science, engineering and the humanities in academia, in terms of the radical ideas they're willing to tolerate? Because from my perspective, I thought...

[00:56:46]

Computer science and maybe engineering are more tolerant of radical ideas. But that's perhaps naive of me, because all the battles going on now are a little bit more on the humanities side, in gender studies and so on.

[00:56:59]

Have you seen the American Mathematical Society's publication of an essay called Get Out the Way?

[00:57:06]

I have not. What's the idea? The idea is that white men who hold positions within universities in mathematics should vacate their positions so that young black women can take over, or something like this.

[00:57:20]

That's in terms of diversity, which I also want to ask you about.

[00:57:22]

But in terms of diversity of strictly ideas. Sure.

[00:57:27]

Do you think... because you're basically saying physics as a community has become, to some degree, intolerant of new radical ideas. Or at least, you said that's changed a little bit recently, in that even string theory is now admitting, OK, we don't look very promising in the short term.

[00:57:48]

Right.

[00:57:49]

So the question is, what compiles, if you want to take the computer science metaphor? What will get you into a journal? Will you spend your life trying to push some paper into a journal, or will it be accepted easily? What do we know about the characteristics of the submitter, and what gets taken up and what does not? All of these fields are experiencing pressure, because no field is performing so brilliantly well that it's revolutionizing our way of speaking and thinking in the ways to which we've become accustomed. But don't you think, even in theoretical physics, a lot of times, even with theories like string theory, you could speak to this...

[00:58:37]

It does eventually lead to what are the ways that this theory would be testable?

[00:58:43]

And so, ultimately... Look, there's this thing about Popper and the scientific method that's a cancer and a disease in the minds of very smart people. That's not really how most of the stuff gets worked out; it's how it gets checked.

[00:58:59]

And there is a dialogue between theory and experiment. But everybody should read Paul Dirac's 1963 Scientific American article where, you know, it's very interesting, he talks about it as if it were about the Schrodinger equation and Schrodinger's failure to advance his own work because of his failure to account for some phenomena. The key point is that if your theory is a slight bit off, it won't agree with experiment, but it doesn't mean that the theory is actually wrong. But Dirac could as easily have been talking about his own equation, in which he predicted that the electron should have an antiparticle.

[00:59:38]

And since the only positively charged particle that was known at the time was the proton, Heisenberg pointed out, Well, shouldn't your antiparticle, the proton have the same mass as the electron? And doesn't that invalidate your theory?

[00:59:50]

So I think that Dirac was actually being potentially quite sneaky, and talking about the fact that he had been pushed off of his own theory to some extent by Heisenberg. But look, we fetishize the scientific method and Popper and falsification because it protects us from crazy ideas entering the field. So, you know, it's a question of balancing type one and type two error, and we're pretty maxed out in one direction. The opposite of that...
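The type one / type two balance he mentions can be illustrated with a toy gatekeeping model. The two populations and thresholds below are invented purely to show the trade-off, not to model academia:

```python
# Toy illustration of balancing type I vs. type II error in gatekeeping:
# moving the admission threshold trades one kind of error for the other.
import random

random.seed(0)
cranks = [random.gauss(0.0, 1.0) for _ in range(10_000)]    # merely crazy
geniuses = [random.gauss(2.0, 1.0) for _ in range(10_000)]  # crazy but correct

def error_rates(threshold):
    type_1 = sum(x > threshold for x in cranks) / len(cranks)       # crank admitted
    type_2 = sum(x <= threshold for x in geniuses) / len(geniuses)  # genius rejected
    return type_1, type_2

strict = error_rates(3.0)   # "maxed out" gatekeeping: almost no cranks, many geniuses lost
lenient = error_rates(0.0)  # open gates: the reverse
print(strict, lenient)
```

Raising the bar pushes type I error toward zero while type II error climbs; no threshold eliminates both, which is the trouble with a purely rules-based selection mechanism.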

[01:00:19]

Let me say what comforts me, sort of, is biology or engineering: at the end of the day, does the thing work? Yeah, you can test the crazy away. Well, see, now you're saying...

[01:00:33]

But some ideas are actually crazy and some are actually correct. So there's "correct but currently crazy." Yeah, right.

[01:00:42]

And so you don't want to get rid of everybody who's correct but currently crazy. The problem is that we don't have standards, in general, for trying to determine who has to be put to the sword in terms of their career, and who has to be protected as some sort of giant time-suck pain in the ass who may change everything. Do you think that's possible, creating a mechanism for that selection?

[01:01:08]

Well, you're not going to like the answer, but here it comes. Boy, it has to do with very human elements.

[01:01:16]

We're trying to do this at the level of like rules and fairness.

[01:01:19]

It's not going to work, because the only thing that really understands this... Have you read The Double Helix? It's a book. Oh, you have to read this book. Not only did Jim Watson half-discover this three-dimensional structure of DNA, he's also one hell of a writer.

[01:01:40]

Before he became an ass. Now, he's tried to destroy his own reputation.

[01:01:46]

I knew about the ass; I didn't know about the good writer. Jim Watson is one of the most important people now living.

[01:01:52]

And as I've said before, Jim Watson is too important a legacy to be left to Jim Watson.

[01:02:01]

That book tells you more about what actually moves the dial. And there's another story about him which I don't agree with, which is that he stole everything from Rosalind Franklin. I mean, the problems that he had with Rosalind Franklin are real, but we should actually honor that tension in our history by delving into it rather than having a simple solution. Jim Watson talks about Francis Crick being a pain in the ass that everybody secretly knew was super brilliant.

[01:02:29]

And there's an encounter between Chargaff, who came up with the equimolar relations between the nucleotides, who should have gotten the structure of DNA, and Watson and Crick.

[01:02:43]

You know, he talks about missing a shiver in the heartbeat of biology, and stuff so gorgeous it just makes you tremble even thinking about it.

[01:02:52]

Look, we know very often who is to be feared and we need to fund the people that we fear. The people who are wasting our time need to be excluded from the conversation, you see. And, you know, maybe we'll make some errors in both directions, but we have known our own people, we know the pains in the asses that might work out, and we know the people who are really just blowhards who really have very little to contribute.

[01:03:22]

Most of the time it's not 100 percent. But you're not going to get there with rules, right?

[01:03:27]

It's using some kind of instinct. I mean, to be honest, I'm going to make you roll your eyes for a second, but the first time I heard that there is a large community of people who believe the Earth is flat...

[01:03:40]

It actually made me pause and ask myself the question: why would there be such a community? Yeah. Is it possible the Earth is flat? So I had to go, like, wait a minute.

[01:03:49]

I mean, then you go through the process that I think is really healthy.

[01:03:54]

It ultimately ends up being a geometry thing. I think it's an interesting thought experiment, at the very least. But I do a different version of it.

[01:04:03]

I say, why is this community stable?

[01:04:05]

Yeah, that's a good way to analyze it. Interesting that whatever we've done has not erased the community. So, you know, they're taking a long-shot bet that won't pan out. You know, maybe we just haven't thought enough about the rationality of the square root of two, and somebody brilliant will figure it out. Maybe we will eventually land one day on the surface of Jupiter and explore it.

[01:04:27]

Right. These are crazy things that will never happen. So, social media operates by algorithms, you talked about this a little bit, recommending the content you see.

[01:04:38]

So on this idea of radical thought, how much should social media show you things you disagree with, on Twitter and so on, in the Twitterverse, in the internet public squares?

[01:04:51]

Yeah. Yeah. Because you don't know the answer. No, no, no. Look, they've pushed out this cognitive frame to us that will just lead to madness. "It's good to be challenged with things that you disagree with." The answer is no. It's good to be challenged with interesting things with which you currently disagree, but that might be true.

[01:05:13]

So I don't really care about whether or not I disagree with something. I need to know why that particular disagreeable thing is being pushed out. Is it because it's likely to be true? Is there some reason? Because I can write a computer program to come up with an infinite number of disagreeable statements that nobody needs to look at.

[01:05:34]

So please, before you push things at me that are disagreeable, tell me why. There is an aspect in which that question is quite dumb, especially because it is being used almost generically by these different networks to say, well, we're trying to work this out.

[01:05:52]

But, you know, basically, how much do you see the value of seeing things you don't like, not just things you disagree with?

[01:06:01]

Because it's very difficult to know exactly what you articulated, which is: the stuff that's important for you to consider, that you disagree with. That's really hard to figure out. The bottom line is, the stuff you don't like: if you're a Hillary Clinton supporter, it may not make you feel good to see anything about Donald Trump. That's the only thing the algorithms can really optimize for currently. And they know they can do better. This is what we're working on.

[01:06:28]

So now we're engaged in some moronic back-and-forth, where I have no idea why people who are capable of building Google, Facebook, Twitter are having us in these incredibly low-level discussions. Do they not know any smart people? Do they not have the phone numbers of people who can elevate these discussions? They do, but this is for a different thing. And they are pushing those people out of those rooms. They're optimizing for things we can't see.

[01:07:03]

And yes, profit is there; nobody's questioning that. But they're also optimizing for things like political control, or the fact that they're doing business in Pakistan, and so they don't want to talk about all the things that they're going to be bending to in Pakistan.

[01:07:19]

So we're involved in a fake discussion. You think so? You think these conversations at that depth are happening inside Google? You don't think they have some basic metrics of user engagement?

[01:07:32]

You're having a fake conversation with us, guys. We know you're having a fake conversation. I do not wish to be part of your fake conversation. You know how to do these things. You know high availability like nobody's business; my Gmail never goes down, almost. So you think just because they can do incredible work on the software side, with infrastructure, they can also deal with some of these difficult questions about human behavior, human understanding?

[01:08:03]

You're not hearing me.

[01:08:04]

I've seen the developers' screens that people take shots of inside of Google. Yeah. And I've heard stories inside of Facebook and Apple. We're engaged... they're engaging us in the wrong conversations. We are not at this low level. Here's one of my favorite questions: why is every piece of hardware that I purchase in tech space equipped as a listening device? Where's my physical shutter to cover my lens? We had this in the 1970s: cameras that had lens caps.

[01:08:41]

You know how much it would cost to have a secure model? I'd pay five extra bucks. Why is my indicator light software-controlled? Why, when my camera is on, do I not see that the light is on, by something that cannot be bypassed? Why have you set up all of my devices, at some difficulty to yourselves, as listening devices? And we don't even talk about this. This thing is total fucking bullshit.

[01:09:08]

Well, I hope these discussions are happening about privacy. This is more difficult than you're giving it credit for. It's not just privacy.

[01:09:14]

Yeah. It's about social control. We're talking about social control. Why do I not have controls over my own levers?

[01:09:23]

Why not have a really cute UI where I can flip a switch, dial things, or at least see what the algorithms are? But you think that there are some deliberate choices being made? There's emergence and there is intention. There are two dimensions, and the vector does not collapse onto either axis. But the idea that anybody who suggests that intention is completely absent... is a child. That's really beautifully put.

[01:09:52]

And like many things you said, it's going to make me think. Can I turn this around slightly? Yeah. I sit down with you and you say that you're obsessed with my feed.

[01:10:01]

Uh huh. I don't even know what my feed is. What are you seeing that I'm not?

[01:10:06]

I was obsessively looking through your feed on Twitter, because it was really enjoyable, because of the Tom Lehrer element, the humor in it. By the way, that feed is Eric R. Weinstein on Twitter. And that answers why?

[01:10:23]

Why did I find it enjoyable or what was I seeing?

[01:10:26]

What are you looking for? Why are we doing this? What is this podcast about? I know you've got all these interesting people. I'm just some guy who is sort of a podcast guest.

[01:10:36]

It's sort of like this. You're not even wearing a tie.

[01:10:39]

I mean, we're not even doing a serious interview. Searching for meaning, for happiness, for a dopamine rush, both short term and long term.

[01:10:51]

And how are you finding your way to me? What... I don't honestly know what I'm doing that's reaching you. You're representing ideas which seem like common sense to me and that few people are speaking about.

[01:11:04]

So it's kind of like the intellectual dark web folks, right?

[01:11:09]

There are these folks, from Sam Harris to Jordan Peterson to yourself, saying things where it's like, look, there's an elephant, and he's not wearing any clothes. And I say, yeah, yeah, let's have more of that conversation. That's how I'm finding it. I'm desperate to try to change the conversation we're having. I'm very worried we've got an election in 2020. I don't think we can afford four more years of a misinterpreted message, which is what Donald Trump was.

[01:11:42]

And I don't want the destruction of our institutions. They all seem hell bent on destroying themselves. So I'm trying to save theoretical physics, trying to save The New York Times, trying to save.

[01:11:53]

our various processes. And it feels delusional to me that this is falling to a tiny group of people who are willing to speak out without getting so freaked out that everything they say will be misinterpreted and that their lives will be ruined through the process.

[01:12:09]

I mean, I think we're in an absolutely bananas period of time, and I don't believe it should fall to such a tiny number of shoulders to shoulder this weight.

[01:12:19]

So I have to ask you, on the capitalism side: you mentioned that technology is killing capitalism, or has effects that are, well, not unintended, but not what economists would predict or speak of capitalism creating. I just want to talk to you about, in general, the effect of artificial intelligence, or technology and automation, taking away jobs, these kinds of things.

[01:12:44]

And what you think is the way to alleviate that. Is it what Andrew Yang, the presidential candidate, proposes with universal basic income, UBI? What are your thoughts there? How do we fight off the negative effects of technology? You're a software guy, right?

[01:13:00]

Yep. "A human being is a worker" is an old idea. "A human being has a worker" is a different object, right? So think about object-oriented programming as a paradigm.

[01:13:14]

A human being has a worker and a human being has a soul.

[01:13:18]

We're talking about the fact that for a period of time, the worker that a human being has was in a position to feed the soul that a human being has. However, we have two separate claims on the value in society: one as a worker and the other as a soul. And the soul needs sustenance, it needs dignity, it needs meaning, it needs purpose. As long as your means of support is not highly repetitive, I think you have a while to go before you need to start worrying.
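The "is a" versus "has a" distinction being drawn here is exactly inheritance versus composition in object-oriented programming. A minimal Python sketch (the class names are illustrative, not anything from the conversation):

```python
class Worker:
    """One claim a person can make on society's value."""
    def produce(self):
        return "output"


class Soul:
    """The other claim: it needs feeding, not employment."""
    def needs(self):
        return ["sustenance", "dignity", "meaning", "purpose"]


# Old framing, "is a": the human is modeled as nothing but a worker.
class HumanAsWorker(Worker):
    pass


# New framing, "has a": the human is composed of both roles, so the
# worker can be automated away without deleting the soul.
class Human:
    def __init__(self):
        self.worker = Worker()
        self.soul = Soul()
```

In the composed model, `self.worker` can be removed or replaced (by automation, say) while `self.soul` and its needs remain; in the inheritance model, nothing is left of the human once the worker goes.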

[01:13:53]

But if what you do is highly repetitive and it's not terribly generative, you are in the crosshairs of for loops and while loops. And that's what computers excel at: repetitive behavior. And when I say repetitive, I may mean things that have never happened before, but that arise through combinatorial possibilities. As long as it has a looped characteristic to it, you're in trouble.
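The point about repetition covering "things that have never happened" can be shown in a few lines: a fixed loop can emit combinations nobody has ever written down, yet each one is produced by the same looped procedure (the task names are made up for the example):

```python
from itertools import product

actions = ["draft", "review", "file"]
documents = ["invoice", "contract", "memo"]

# Every pairing is generated by one fixed loop: the outputs may be novel,
# but they have the looped characteristic that makes them automatable.
tasks = [f"{action} the {doc}" for action, doc in product(actions, documents)]
```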

[01:14:15]

We are seeing a massive push towards socialism because capitalists are slow to address the fact that a worker may not be able to make claims. A relatively undistinguished median member of our society still has needs: to reproduce, to have dignity. And when capitalism abandons the median individual, or, you know, the bottom tenth or whatever it's going to do, it's flirting with revolution. And what concerns me is that the capitalists aren't sufficiently capitalistic to understand this.

[01:14:56]

Do you really want to court authoritarian control in our society because you can't see that people may not be able to defend themselves in the marketplace, because the marginal product of their labor is too low to feed their dignity as a soul? So my great concern is that our free society has to do with the fact that we are self-organized.

[01:15:19]

I remember looking down from my office in Manhattan when Lehman Brothers collapsed and thinking, who's going to tell all these people that they need to show up at work when they don't have a financial system to incentivize them to show up at work?

[01:15:34]

So my complaint is, first of all, not with the socialists, but with the capitalists, which is: you guys are being idiots. You're courting revolution by continuing to harp on the same old ideas of, well, you know, try harder, bootstrap yourself. Yeah, to an extent that works, to an extent. But we are clearly headed to a place where there's nothing that ties together our need to contribute and our need to consume, and that may not be provided by capitalism, because it may have been a temporary phenomenon.

[01:16:06]

So check out my article on Anthropic Capitalism and the New Gimmick Economy.

[01:16:12]

I think people are late getting the wake-up call, and we would be doing a better job saving capitalism from itself, because I don't want this done under authoritarian control. And the more we insist that everybody who's not thriving in our society during their reproductive years, in order to have a family, is failing at a personal level... I mean, what a disgusting thing that we're saying. What a horrible message. Who the hell have we become, that we've so bought into the Chicago model that we can't see the humanity that we're destroying in that process?

[01:16:44]

I hate the thought of communism. I really do. My family flirted with it in decades past. It's a wrong, bad idea. But we are going to need to figure out how to make sure that those souls are nourished and respected, and capitalism had better have an answer. And I'm betting on capitalism, but I've got to tell you, I'm pretty disappointed with my team.

[01:17:06]

So you're still on the capitalism team, just radical capitalism, hyper capitalism.

[01:17:14]

I think hyper capitalism is going to have to be coupled to hyper socialism. You need to allow the most productive people to create wonders, and you've got to stop bogging them down with all of these extra nice requirements. You know, nice is dead. Good has a future; nice doesn't have a future, because nice ends up with gulags. Damn, that's a good line. OK, last question. You tweeted today a simple, quite insightful equation, saying: imagine that for every unit F of fame you picked up S stalkers and H haters.

[01:17:52]

So I imagine S and H are dependent on your path to fame, perhaps a little bit.

[01:17:56]

Well, it's not as simple as that. People always take these things literally when you have, like, two hundred and eighty characters to explain yourself.

[01:18:04]

That's not a mathematical law?

[01:18:06]

No, there's no law. OK, all right. I just put the word "imagine" because I still have a mathematician's desire for precision.

[01:18:12]

Imagine if this were true. But it was a beautiful way to imagine that there is a law that has those variables in it. And you've become quite famous these days.

[01:18:23]

So how do you yourself optimize that equation, with the peculiar kind of fame that you have gathered along the way? I want to be kinder. I want to be kinder to myself. I want to be kinder to others. I want to be able to have heart and compassion. These things are really important. And I have a pretty spectrum-y kind of approach to analysis; I'm quite literal.

[01:18:47]

I can go full Rain Man on you at any given moment. No, I can. I can.

[01:18:51]

It's facultative autism, if you like, and people are going to get angry, because they want autism to be respected. But when you see me coding or you see me doing mathematics, you know, I speak with speech apnea. We have to try to integrate ourselves, and those tensions between, you know, it's sort of back to us as a worker and us as a soul: many of us are optimizing one at the expense of the other.

[01:19:22]

And I struggle with social media, and I struggle with people making threats against our families, and I struggle with just how much pain people are in. And if there's one message I would like to push out there, it's to everybody, all of us, myself included, who is struggling: struggle, struggle mightily, because it's nobody else's job to do your struggle for you. Now, with that said, if you're struggling and you're trying, and you're trying to figure out how to better yourself and where you've failed, where you've let down your family, your friends, your workers, all this kind of stuff, give yourself a break.

[01:20:00]

You know, if it's not working out, I have a lifelong relationship with failure and success. There's been no period of my life where both haven't been present in one form or another.

[01:20:12]

And I do wish to say that a lot of times people think this is glamorous. I'm about to go, you know, do a show with Sam Harris. People are going to listen in on two guys having a conversation on stage. It's completely crazy. And I'm always trying to figure out how to make sure that those people get maximum value. And that's why I'm doing this podcast. You know, just give yourself a break. You owe us your struggle.

[01:20:37]

You don't owe your family or your co-workers or your lovers success. As long as you're in there and you're picking yourself up, recognize that this new situation, with an economy that doesn't have the juice to sustain our institutions, has caused the people who have risen to the top of those institutions to get quite brutal and cruel. Everybody is lying at the moment; nobody's really a truth teller. Try to keep your humanity about you. Try to recognize that if you're failing, if things aren't where you want them to be, and you're struggling and trying to figure out what you're doing wrong, what you could do, it's not necessarily all your fault.

[01:21:18]

We are in a global situation. I have not met the people who are honest, kind, good, and successful. Nobody that I've met is checking all the boxes. Nobody's getting all tens. So I just think that's an important message that doesn't get pushed out enough. Either people want to hold society responsible for their failures, which is not reasonable (you have to struggle, you have to try), or they want to say you're 100 percent responsible for your failures, which is total nonsense.

[01:21:48]

Beautifully put, Eric, thank you so much for talking today. Thanks for having me, buddy.