[00:00:00]

We tend to think about ourselves as the smartest animals on the planet. This is why we rule the place and it's interesting to realize that it's much more complicated than that. Yes, we are intelligent, but what really makes us the kind of rulers of the planet is actually our ability to believe nonsense, not our super smart, intelligent minds.

[00:00:24]

Welcome to Your Undivided Attention. Today our guest is Yuval Noah Harari, author of Sapiens, Homo Deus, 21 Lessons for the 21st Century, and the new graphic novel version of Sapiens, which just came out in the fall. Yuval is a very dear friend of mine. We actually met on a climate change trip in Chile in 2016, and we're so delighted to have him on the podcast, because we're about to go upstream of nearly every problem we've discussed on the show so far.

[00:00:50]

We've already explored the countless ways technology is shredding our sense of shared reality. But we haven't asked a more fundamental question: how do we get a sense of shared reality to begin with? Yuval can sum up how we've done it over the course of millions of years, from Paleolithic tribes to city-states to kingdoms to modern nations. And along the way, he can describe the moments when a new technology has shattered our sense of reality, only for us to restore it at an even greater scale.

[00:01:16]

If the events of January 6th made one thing painfully clear, it's that technology is manipulating human feelings into narrower and narrower cult factories: self-reinforcing systems of beliefs, rumors, gossip and outrage that build up, layer after layer, into a certain worldview. The intensity of the actions that we saw on January 6th reflects the intensity of the beliefs and worldviews that people hold. In many ways, this is because the institutions we trust have placed the individual, and individual feelings alone, at the center of our economic and political universe.

[00:01:53]

The voter is always right, the customer knows best, and we must fend for ourselves in an increasingly poisoned information environment, among predatory business models that don't have our best interests at heart.

[00:02:04]

What is the legitimacy of the voter, of the consumer, of the market when essentially our minds can get hijacked? And what happens when our feelings get increasingly decoupled from reality? As another friend of mine, Michael Vassar, says, the existential risk to humanity might be marketing, because marketing represents the decoupling of how we see the world from what the world actually is.

[00:02:30]

And that's at the heart of the almost Copernican revolution that Yuval is suggesting here: that at the center of our moral and political universe cannot be something that is hackable. This is an urgent problem, and we could clearly use some help. But as Yuval asks, if the customer isn't always right and if the voter doesn't know best, then who does? Today on the show we'll think through some possibilities, and they're not all dystopian.

[00:02:56]

In fact, the less dystopian ones are just the hardest to imagine. Almost all the conversations I have get stuck on the dystopia, and we never explore the no less problematic question of what happens when we avoid the dystopia. We are still talking about a situation where we could see the collapse of human agency, in a good way.

[00:03:22]

You know, somebody out there knows us so well that they can tell us what to study, whom to marry, everything. They are not manipulating us. They are not using it to build some dystopian totalitarian regime. It's really done to help us. But it still means that our entire understanding of human life needs to change.

[00:03:52]

I'm Tristan Harris, and I'm Aza Raskin, and this is Your Undivided Attention. Thank you, Yuval, so much for making time to do this interview. Thank you for inviting me. It sounds like a great opportunity to discuss some interesting things. Yeah.

[00:04:22]

So let's jump right in. Tell us a little bit about why you wanted to create a graphic novel version of Sapiens and the history of our species, our ancient emotions and our evolutionary heritage.

[00:04:35]

Well, actually, the initial idea came from my husband, Itzik, who taught comics to kids. And the main aim was to bring science to more people. We saw now with COVID-19 the danger of what happens if you leave the arena open to all these conspiracy theories and fake news and so forth. It's important that everybody, not just academics, has a good grasp of the latest scientific findings about humanity. And the problem with science is, first of all, that scientific reality is often complex.

[00:05:10]

It's complicated. And secondly, scientists tend to speak in a difficult language, you know, numbers and statistics and models and graphs. But humans are storytelling animals; we think in stories. So the whole idea was how to stay loyal to the basic facts and to the core values of science, but discover new ways of telling science. And it was the most fun project I ever worked on. We kind of threw out all the academic conventions of how you tell science, and we experimented with many different ways of telling the history of our species.

[00:05:56]

One of the things, Yuval, that I think unites us in the work that you're doing and the work that we're doing at the Center for Humane Technology is looking at the human social animal in this kind of historical context and really examining the history of how we really work. I know in your book there's a point at which the character meets Robin Dunbar and talks about Dunbar tribes, and the notion that there really is an ergonomics to what makes humans work well and cooperate at different scales.

[00:06:22]

And that, you know, our natural size is about one hundred and fifty people in our tribe. We actually have a story from a friend who worked at Facebook back in the day: when they let Facebook run on its own without doing anything else, people would average around one hundred and fifty friends if you let them stay there. But then, of course, Facebook was co-opted by the need to grow and grow, venture-capitalist-style growth, which is like one-hundred-X growth.

[00:06:45]

And so they actually injected a sort of social growth hormone into our number of relationships. They started recommending friends for you to invite and add, because that meant you would be more addicted to the platform. And that surged people's number of friends into the thousands range. But I think what unites your work and ours is a humble view of our Paleolithic instincts, where we really come from, and an honest appraisal of them.

[00:07:11]

I think, you know, we've talked in the past about the kind of problem statement that guides our work. It's E.O. Wilson's line, the sociobiologist from Harvard, that the fundamental problem of humanity is we have Paleolithic emotions, medieval institutions and accelerating godlike technology. And those things operate at different clock rates: our Paleolithic brains and evolutionary instincts are baked in and not changing, and our medieval institutions update relatively slowly, on the timeline of elections and how long it takes to legislate.

[00:07:44]

And then you have technology creating new issues in society much faster than either of those things can keep up with. So how do we align those different clock rates?

[00:07:53]

And I think, in the history of your work, what I really love in Sapiens is the way you build up to a view of the present, of how we got here.

[00:08:02]

And what I'd love for you to do is maybe take us through how we get from Paleolithic instincts to democracy and the authority of human choice, and what role technology plays in that. Because I think that's what's going to take us into what's maybe breaking down right now in the 21st century, around our brains and technology.

[00:08:21]

Yeah, so, I mean, the first thing is that we need to acknowledge that we are still working with these, what you called, Paleolithic emotions. If you think, for example, about disgust, which is one of the most important emotions: humans are not the only ones that feel disgust. All mammals and even other animals have disgust. And it protects you. Usually you are disgusted by something that can endanger your life, like the source of a disease, like a diseased person, or food which is bad for you. Now humans, because we are omnivores, eat a lot of different things.

[00:09:02]

And because we are social animals, we can't have disgust just baked into the genes. We eat so many different things that you can't have a gene for disgust for everything that's bad for you. And also, because we are social animals, you need to know which people to beware of if they have some sickness, and COVID-19 is the perfect time to talk about it. So even though we all have the ability to be disgusted, the object of disgust is something we learn.

[00:09:32]

We are not born with it. Some things are universally disgusting, like feces and things like that. But most things that disgust us we need to learn. And on this simple mechanism, so much of human identity and politics is built, because religions and nations and ethnic groups over thousands of years have learned that in order to shape your identity, one of the most important things is to hijack your disgust mechanism and teach you to be disgusted by the wrong kind of people: not people who are diseased, but foreigners or ethnic minorities or certain genders or whatever.

[00:10:17]

And when you look at history, it's amazing to see the immense importance of disgust there. If you think about the treatment of untouchables in India, the treatment of women in Judaism and other religions, the treatment of African-Americans in the United States, the attitudes towards gay people: at the core there is the disgust mechanism, what people call purity and pollution. It works on that, when people feel that untouchables are polluting, that gays are polluting, that they are disgusting.

[00:10:54]

It all works on that. And that goes back to the Stone Age. You need to understand that to understand even modern politics.

[00:11:04]

Just to add one small thing here about just how hackable our feeling of disgust is. My favorite example of this is when you feed someone ginger. Ginger lowers the sense of nausea, and people judge things less harshly morally after they've been given ginger than before. That is, our mind is consulting our body to understand when it should feel moral disgust. And that shows you how not in control of something we think is so core to who we are, what we get disgusted by and how we judge things morally,

[00:11:39]

we really are.

[00:11:41]

In other words, ginger neutralizes some of our sense of disgust. So if you want to hack a human without technology, you just secretly give someone some ginger tea or something like that.

[00:11:50]

Exactly. Yeah. And these techniques of how to activate or deactivate the sense of disgust go back thousands of years. I mean, you can't really build a tribe, a nation or a religion without at least some intuitive understanding of this mechanism of disgust. And you usually don't use the word disgust; you talk about purity and impurity and pollution, but it's the same thing. And the idea is that some people are a source of pollution, and therefore they should be kept away from holy places, they should be kept away from important positions.

[00:12:32]

They should be kept away from your house or from your children. It all goes back to this mechanism of disgust. And if we really fast forward and try to understand the rise of modern politics and modern systems of government, then it's always the question of how you can connect people together. That's the core question of politics. It always was. The big issue in politics is not how to feed people, it's not how to manufacture tools, but how to get lots of people to agree on something.

[00:13:09]

Now, initially humans lived in very, very small bands of a couple of dozen people, which were the most democratic societies that ever existed. And, you know, in the big discussion about human nature, whether we are democratic or dictatorial by nature or whatever, it's very, very clear that originally there were no authoritarian regimes. For most of human evolution, for millions of years, it was absolutely impossible to build an authoritarian regime. There were no dictators, because when you live in a small, intimate band of 50 or 100 hunter-gatherers in the Stone Age, there is no opportunity for a single leader to oppress everybody.

[00:13:57]

Yes, there are people who have more charisma. There are people who are better doctors or healers, or who are better at finding food. But this is not enough. You always depend on the cooperation of other people. And even if he or she is the best at something, if they try to gain too much power, then people always have the ultimate sanction of voting with their feet, of going away. You know, I mean, there are no fields.

[00:14:29]

There are no houses. The two things you need in order to survive in the Stone Age are good personal skills, how to climb trees and pick apples, and good social skills. You depend on your friends, but you can take that and go somewhere else. So if somebody tries to set himself up as a dictator over the band, I mean, they can, of course, unite and kill that person, but they can also just walk away, vote with their feet.

[00:14:59]

Once you have the switch to agriculture, then you also begin to see the rise of kings and authoritarian regimes and hierarchies and dictatorships, and democracies go into decline and almost disappear. And for thousands of years, as human societies grew larger, it was impossible to have large-scale democracies. You do have some cases of democracies in city-states, like ancient Athens and ancient Rome. And even then it was very limited: just, say, 10 percent of the population in Athens were real citizens with full political rights.

[00:15:43]

Most people, women and slaves and so forth, had no political rights. But even the Athenian democracy was limited to the city of Athens. You don't have any example of a large-scale democracy until the late 18th century or even the 19th century, with the rise of the United States and later democracies in Western Europe. Before that it was just impossible. You could not have, let's say, the kingdom of France in the 12th century as a democracy.

[00:16:20]

Why? Because you didn't have the preconditions. To have a large-scale democracy, you need an educated public, and you also need the ability to have a large-scale public discussion: all the people in 12th-century France talking to one another in real time in order to make up their minds about whether to make peace or war, and about economic policies or whatever. And this was simply impossible. So there is no point accusing the kings of France in the 12th century:

[00:16:58]

Why don't you turn France into a democracy? It's impossible. What made it possible eventually was the emergence of new technologies for mass-scale communication in the 18th and 19th centuries, first with newspapers and then with the telegraph and later radio and so forth. And it's not deterministic: the same technologies can also be used to build totalitarian regimes, which were also impossible before the modern age. The kingdom of France in the 12th century was not a totalitarian regime. The Roman Empire was not a totalitarian regime.

[00:17:37]

By totalitarian regime, I mean a regime which is total, which intervenes in the totality of your life, which constantly follows you and monitors you and tells you how to live your life. This was impossible in the Middle Ages because, again, you don't have the communication technology. You don't have the ability to process all the data. It's unthinkable that the king of France would pay tens of thousands of agents to go around the kingdom, collect information, go back to Paris, analyze that information, send back commands, impossible.

[00:18:12]

It becomes possible only with the modern technologies of the 19th and 20th centuries. And that's when we see the emergence of these two new political systems, liberal democracies on the one hand and totalitarian regimes on the other, which were impossible before. And they are still built on the basic Paleolithic emotions, but the new technology makes it possible to create new kinds of large-scale cooperation.

[00:18:42]

I hear you saying, first of all, that the central point of your work is that the thing that makes humans different is our ability to tell stories, to create stories of reality that cohere us into a common belief structure, and that those stories depend on using those Paleolithic biases and instincts in a way that brings our societies together and makes them cohere. And that's where you get nationalism and so on.

[00:19:05]

Yeah, I mean, I skipped that part, I know. I asked you to summarize way too much history in a very brief time, so I apologize for that. Yes. Maybe the most important thing: if you look at Homo sapiens, our species, what makes us really unique compared to any other animal on the planet is our ability to cooperate in really unlimited numbers. Chimpanzees, elephants, dolphins can cooperate in maybe a few dozen individuals, but you can never find a thousand chimpanzees or ten thousand dolphins cooperating on anything.

[00:19:37]

And that's because their cooperation is built on intimate knowledge of one another. If you're a chimpanzee and I'm a chimpanzee, and we want to hunt together or we want to fight together against a neighboring group, we need to have intimate knowledge: who are you, what's your personality, can I trust you? And you can't know more than, say, 100 or 150 individuals. A lot of research on humans, the famous Dunbar number, shows that the human brain is simply incapable of really coming into contact with, and storing enough information on, say, a thousand people to have a thousand intimate friends.

[00:20:18]

It doesn't matter how many friends you have on Facebook, you can't really have more than one hundred and fifty real friends and acquaintances. So the big question of human history, the first question of human history, is how you get hundreds and then thousands and finally hundreds of millions of humans to cooperate. This is our secret of success as a species. This is how we overcame the Neanderthals. They were bigger than us. They were stronger than us.

[00:20:49]

They had bigger brains than us. But we rule the world and not the Neanderthals, because they couldn't cooperate in numbers larger than, again, 50 or 100, and we could. And what made it possible is not intelligence, it's imagination, and in particular the ability to invent and believe fictional stories.

[00:21:10]

I think one of the key points here in your work is that it's not about telling bigger and bigger, more complex truths that unite us. It's not E equals mc squared. It's actually simple fictions, stories that tell us we will go to monkey heaven, or whatever the different stories are that we can get ourselves to believe, that cohere us. Exactly.

[00:21:29]

It's not the truth. You don't need to tell the truth in order to get a lot of people to cooperate. You need a good story. The story could be completely ridiculous, but if enough people believe it, it works. I think that also today, if you are running in elections anywhere in the world and you go to the public and tell the truth, the whole truth and nothing but the truth about your nation, you have a 100 percent guarantee of losing the elections.

[00:21:55]

It's absolutely impossible that you would win the elections. People don't want to know the whole truth. Some of it, yes, but not the whole thing. It's usually too painful.

[00:22:06]

Could you give an example of that, Yuval? Because I think people hear this point, but for real understanding, what does that mean in practice?

[00:22:11]

If we were to tell the truth about a nation, people really wouldn't want to hear it, or to elect the person who talks that way. You know, the easiest examples are the dark side of the history of every nation, the terrible things that almost every nation has done to outsiders, to minorities, to itself. If you go to the Israeli public and speak honestly about the Israeli-Palestinian confrontation, you have no chance of winning the elections.

[00:22:41]

I mean, absolutely zero chances. And that's not unique to Israel. It's almost the same thing with every nation. But it's more than that because the very notion of a nation is itself a fictional story. It's not an objective truth. Nations are not biological or physical entities. They are imagined realities. They are stories that exist only in our own minds. You know, a mountain or a river is an objective physical entity. You can see it.

[00:23:16]

You can bathe in the river. You can listen to the murmur of the waves in the Mississippi. The United States is not a physical reality. You cannot see the United States. You can see the Mississippi River, but that's not the United States. The Mississippi River was there two million years ago; the United States wasn't. The United States might disappear in two hundred years or five hundred years. The Mississippi River will probably still be there. So it's not a physical entity.

[00:23:42]

It's a story. Now, I'm not saying it's a bad story. Nations are some of the best stories that were ever invented. This is something that often confuses people when they hear that the nation is a story: they think that you are against nations. I don't think they are a bad thing. I think they are one of the most beneficial stories that people ever invented, because they enable large-scale cooperation. For me, nationalism is not about hating foreigners.

[00:24:12]

It's about loving millions of strangers that you never met. You are willing to pay taxes so that a stranger on the other side of the country, someone you'll never meet, will have good health care and education. That's nationalism, and that's wonderful. And if nationalism disappeared from the world, I don't agree with John Lennon's "Imagine" that we'll have harmony and peace. No, we'll have tribal warfare.

[00:24:45]

I think this is such an important aspect of your work, because you basically argue that nationalism is sort of a bootloader for democracy: you have to go through these stages, and you have to have a period where you cohere around the story of a nation. I know in your past work you've talked about the importance of language in doing that, and about the work of George Lakoff, who talks about the ways that the metaphors we smuggle into our language help create some of these stories.

[00:25:09]

One of his famous examples is the nation as a family: we don't send our sons and daughters to war, we don't want those missiles in our backyard, the founding fathers told us this was true, and we love the motherland and the fatherland. This is an invisible binding energy that comes through the technology of language. If we didn't use the language of family, we probably wouldn't have been able to tell the story of a nation as strongly, where we would treat those strangers as part of our invisible family.

[00:25:37]

I think that's an aspect of your work too. Another theme that I pick up is that language and stories are sort of a model of the world; they are a map of the terrain. And something I think I hear from you often, Yuval, is that, yes, the map is not the territory, but once you have a map, that map starts to terraform the territory. Our stories about the world start affecting the Mississippi.

[00:26:01]

Yes, they become the most powerful things in the world. Also, we talk a lot about Facebook and Google, and we need to remind ourselves they are just stories. I mean, corporations are not real biological or physical entities in the world. The only place Google and Facebook exist is in our imagination, in the stories we tell each other. That's it, nothing else. And yeah, you talked about metaphors, and these are extremely powerful metaphors.

[00:26:31]

But every now and then we have to stop and remind ourselves the nation is not really a family. Families go back in evolution tens of millions of years. The strong feelings we have towards our mother, this is something that in mammalian evolution goes back tens of millions of years. If you, as a tiny baby mammal a hundred million years ago, did not have strong emotions towards your mother because of some mutation, you died. But motherlands in the modern national sense go back at most five thousand years.

[00:27:08]

You can say ancient Egypt maybe was the first real nation, and that's five thousand years ago, which is nothing in evolutionary terms. But the metaphor is extremely powerful. And again, I'm not against it. It can be misused, for instance to start unnecessary wars. But in essence it's potentially a very beneficial tool to get humans to cooperate.

[00:27:33]

And what I hear you saying also is that in the same way that in the past we hijacked our intrinsic mechanism for disgust to create the notion of purity or sanctity, and of the outsiders, and let's go kill them, you can use this for good or for evil. We can also hijack that, as you said, evolutionarily very deep instinct for motherhood. I mean, talk about something that's the deepest you can possibly get. You're going to feel that positive association.

[00:27:56]

And if I can bind that with another association, of the nation, that's how I'm sort of using it. And the question is, once we know and reverse engineer more and more of our code, of how the human mind has these associations and this leverage, you can get at the meaning-making operating systems that we are trapped inside of. We are in a meat suit that is running so much of this code automatically. If we don't understand that code, you're as good as a useless idiot running around in your meat suit, hijacked by your automatic emotions.

[00:28:24]

And then the question is, what does it mean for those to be authoritative? Because what I'd love to move into is how we got to a point where democracy put so much primacy on the authority of human feelings, beliefs, ideas and emotions. Because the premise that markets and democracies have, as you've said so many times, is the customer is always right, the voter knows best, trust your heart and your feelings.

[00:28:50]

Let's talk first about why the authority of individual feelings actually was an important development, because I think it'll get us to the place that many of our listeners are interested in, which is how technology is breaking down the stories that we've now collectively told ourselves, and the authority of our own meaning and emotions.

[00:29:07]

So the big turning point was in the West, around the eighteenth century. Before that time, almost all political systems, all big systems, also religious and economic systems, were built on imagining a source of authority outside human beings. Either it was a god or many gods, or it was the laws of nature. Think about the clearest case, ethics: what's good and what's bad is what God says. It's what's written in the holy book.

[00:29:41]

It's what the laws of nature dictate. What you're feeling about it is irrelevant. If you're gay, and you feel that you are attracted to men and you think it's wonderful, but God says it's bad, then it's bad, and nobody wants to hear what you are feeling about it. We don't care.

[00:29:58]

You are corrupt. And this is how most human societies worked for hundreds, thousands of years. And then the big humanist revolution of the 18th century shifted the source of authority inside humans. The humanist revolution said no: the ultimate source of authority in the universe is not a god, it's not the laws of nature, it's certainly not some book written by priests a thousand years ago. It's your heart. It's your feelings. Good is whatever feels good.

[00:30:35]

That's it. And of course it's not so simple, because what happens if something makes me feel good but it makes you feel bad? Like I steal your car: I feel very good about it, you feel very bad about it. So, OK, now we have a moral dilemma. But the key about humanism is that it has a lot of moral discussions, but they are conducted in terms of human feelings, of how we evaluate different human feelings. Like we now have all these free speech issues:

[00:31:03]

if you draw a picture of Muhammad, what characterizes humanist societies is that you can't come and say, Allah said you can't draw Muhammad. No, you need to say, it hurts my feelings. And then it's part of the discussion.

[00:31:18]

You can reach different conclusions, whether it's good or bad, but it all depends on how you weigh human feelings. And for the last two hundred years or so, human feelings became the ultimate source of authority in ethics, in politics, in art, in economics. So "the customer is always right" is exactly that.

[00:31:39]

And you have these big corporations that, when you push them to the wall and tell them, you're doing all these terrible things, you're creating, I don't know, SUVs that pollute the environment, the corporation will say: well, don't blame us, we are just doing whatever the customers want. If you have a problem, go to the customers, and actually go to the feelings of the customers. We can't tell the customers what to feel.

[00:32:05]

And the same is true of Facebook. If you say, look, people are clicking on those extremist groups, going into QAnon, or clicking on hyper-extremist content, they say: why are you blaming us? We're just an empty corporation in neutral gear, waiting for people to click on whatever they think is best.

[00:32:20]

Even more than that, they say: who are you to tell people what to click on? They are presumably clicking on these things of their own free will, because they feel good about it. You're some kind of Big Brother who thinks you understand what's good for them better than they do. Of course, it's a manipulation, because we know it doesn't work like that. And we know that not only today but also in the past, and especially today, humans have been hacked.

[00:32:50]

And now, when governments and corporations and other organizations have the power to manipulate human feelings, this whole system has reached an extremely dangerous point. If the ultimate authority in the world is human feeling, but somebody has discovered how to hack and manipulate human feelings, then the whole system collapses.

[00:33:15]

Part of what I hear you saying also is that we had a philosophical invention, a technology, that absolved those who built these systems, markets or corporations, of having any responsibility. They were responsibility-eliminating technologies. And it actually was a simpler story: hey, look, the world is really simple when no one has to take responsibility, because individuals are choosing for themselves. So the whole world just gets to cool off and relax.

[00:33:40]

I can sit back in my chair on the beach, because everyone is just choosing their way through, and we'll end up with a really good society.

[00:33:47]

Now, before we get to the breakdown of why human beings are hackable, maybe could you say one extra thing about why it was OK to trust human feelings? As most people would say, if we're coming directly from the Stone Age, trusting human feelings is not going to be good.

[00:34:01]

It required certain prerequisites for us to trust the foundations of our beliefs in our feelings, right?

[00:34:07]

One of the main reasons that it was OK to trust human feelings is, first of all, that they are not random. They have been shaped by millions of years of evolution, so they encapsulate a very, very deep wisdom. You know, conservatives often talk about the importance of institutions, explaining that institutions, even if they look irrational at first sight, have been shaped over hundreds of years of compromises and have survived all kinds of wars and revolutions and crises.

[00:34:44]

They encapsulate very deep historical wisdom. And I think the conservatives are right. But I would add that if an institution like the Catholic Church incorporates the wisdom of two thousand years, then your sexual feelings incorporate the wisdom of two million years, or two hundred million years. And it also includes bugs, the same way that the Catholic Church includes bugs. But there are millions of years of wisdom baked into your feelings. So that's one thing. The other thing is that until recently it was very difficult to hack and manipulate human feelings.

[00:35:29]

The human body, the human brain, the human mind are just too complicated. You know, if you have, again, the king of France in the 12th century, or in the 18th century during the French Revolution, wanting to hijack this new authority of human feelings, it's very, very difficult, because it's such a complicated system. It's much easier to manipulate the Catholic Church by placing a few of your friends in key positions, or bribing some bishops or bribing the pope.

[00:36:02]

That's easy. But to manipulate the feelings of millions of people, that's very, very difficult. And therefore, you know, look at the last two hundred years. It didn't always work very well, but comparatively speaking, this humanist idea of let's base ethics and politics on human feelings worked remarkably well. There were a lot of disasters, but compared to all the alternatives, I think it was the best system that humans have come up with over thousands of years.

[00:36:36]

It's not that it was difficult to hack human feelings before; we've always had con people. It's that it was difficult to hack human feelings at scale, all at once, with industrial scale and surgical precision. That's what's new, in the sense that the technology in our smartphones is a kind of totalitarian technology, because our phones are there with us in all the parts of our lives. They're there when you wake up, they're there before you go to sleep.

[00:37:11]

They're how you get your news. They're how you talk to your friends. They give the substrate of totalitarianism, if that makes sense.

[00:37:18]

Yeah. And it goes much, much further. I mean, I think the smartphones are nothing yet. They are the biggest thing so far, but looking to the future, we haven't seen anything yet. To hack human feelings at scale, you need two things, really. First, you need a lot of data about people. And secondly, you need a way to process all that data. Now, in previous ages, to gather a lot of data on people, you basically had to rely on human agents. Think about, say, the Soviet Union.

[00:37:52]

If you want to know what each Soviet citizen feels every moment of the day, the only way to do it is to place a KGB agent to follow every Soviet citizen, which is of course impossible, because you don't have enough KGB officers. And even if you do have enough KGB officers, these agents follow you around, they look at what you see, and then they have to write a paper report and send it to the head office in Moscow. And then you have a mountain of paper reports that somebody needs to read and analyze, and write more paper reports about.

[00:38:26]

So it's absolutely impossible. Now what's happening is that you don't need human agents to follow everybody around; you have the smartphones and microphones doing it for you. And the data processing problem is also solved: you don't need human analysts to go over the mountains of data, you have AI and machine learning and computers and algorithms. What we haven't seen yet, and what will be the real game changer, is going under the skin, because we are talking about hacking human feelings.

[00:38:58]

Now, feelings are a biological phenomenon. They occur within our bodies, within our brains, not outside. At present, most of the data collected on people is still above the skin. When you go somewhere, you meet someone, you watch something on the television, you read a book: all these things are above the skin. These are the things that are now being collected and analyzed. So through my smartphone and my computer, the system, whatever system, Facebook, the government, whatever, knows where I go, who I meet, what I buy, what I watch, what I read.

[00:39:36]

But they still don't know how I feel about all that. They can make some good guesses: if I constantly watch particular shows on Netflix, it tells them something about me. But this is still not the Holy Grail. The Holy Grail is inside. And the real game changer, which is very close, is when you have technology for collecting biometric data from within the body, under the skin. And COVID-19 might be the game changer here. Suddenly everybody wants to know something that's happening inside my body: whether I'm sick or not.

[00:40:10]

What's my body temperature? What's my blood pressure? Now, emotions and feelings are just like diseases. They are just like COVID. They are biological phenomena. If you have a system that can tell you, at scale, at any moment, what kind of illnesses people have, that same system can tell you what people are feeling. If they are watching, say, The Social Dilemma on Netflix, then it's not just that they are watching it. How do they feel about what they see?

[00:40:38]

Are they angry or bored? Do they think, oh, this is all nonsense, it will never happen? Are they scared out of their minds? This is the really important data, and this is just around the corner. And when you link this kind of biometric data to the capability of processing that data at scale, that's the big revolution. We're going to see, I think in the next couple of years, the rise of empathetic or empathic technology.

[00:41:11]

Since 2015, machine learning systems have been better than humans at reading micro-expressions, those involuntary, true emotional reactions to what somebody is seeing. And so what I think we should expect to see, and this is how I think it'll hit the market, is that we will have YouTube or Netflix watching us. First it will be for analytics: which parts do you like, which parts do you not. But very soon that will start to be used in a real-time fashion, so that as you watch a Netflix film, the actors are reacting to you in real time.

[00:41:49]

It's not that the plot is substantially different, but their performance is different every time. So it brings some of the magic of a play. Now all of that old content, those Disney movies, is matching your mood. If you're down, it leads you so it brings you back up. It's going to be very engaging, right? Instead of listening to Spotify the same way every time, when you listen to your favorite song it's as if you're hearing it live for the first time again.

[00:42:14]

And that sounds incredible, but it creates a feedback loop. It's sort of like a garden path: the technology now, bit by bit, can lead you in absolutely any direction.
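
To make the feedback loop described here concrete, here is a toy sketch in Python of mood-adaptive content selection. Everything in it is invented for illustration: the simulated mood score stands in for a camera feeding a micro-expression classifier, and the two "cuts" stand in for the many alternate renderings a real system could choose among.

```python
import random

random.seed(2)

# Two hypothetical renderings of a scene; a real system would choose among
# many variants, driven by a live estimate of the viewer's emotional state.
UPLIFTING, CALM = "uplifting cut", "calm cut"

mood = -0.5  # simulated viewer mood: -1.0 (down) .. +1.0 (up)
for scene in range(8):
    # React to the measured mood: serve the uplifting cut when the viewer
    # is down, otherwise keep things calm.
    cut = UPLIFTING if mood < 0 else CALM
    # The chosen cut nudges the simulated mood, closing the feedback loop.
    nudge = 0.3 if cut == UPLIFTING else 0.05
    mood = max(-1.0, min(1.0, mood + nudge + random.uniform(-0.1, 0.1)))
    print(f"scene {scene}: served {cut}, mood now {mood:+.2f}")
```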

[00:42:26]

I think also, Yuval, you've brought up a point about the ability to see under the skin with COVID. For governments wanting to verify, OK, are you actually on lockdown for those 14 days, they're going to want to know more about whether you are sick or not sick, and whether you've been moving or not moving. And the problem is, once you grant either governments or technology companies that power to know all these things about us, and to share it for the greater good, quote unquote, it can also be used for evil.

[00:42:51]

So we have to be very careful about what we allow companies to know about us. But the thing, Yuval, that I think really is the sweet spot of intersection between your work and ours is that technology is already beneath the skin. I've been tracking several examples of the ability to predict things about you without making an actual insertion underneath the skin layer. And I would say that more than getting underneath our skin, these systems can get underneath the future.

[00:43:19]

They can find and predict things about us that we won't know about ourselves. The Gottmans have done research showing that with three minutes of videotape of a couple talking to each other, with the audio taken out, you can predict whether they will stay together with something like 70 percent accuracy. With just three minutes of silent videotape. You can predict whether someone is about to commit suicide. You can predict divorce rates of couples. You can predict whether someone is going to have an eating disorder based on their click patterns.

[00:43:45]

You can predict, as you've said, Yuval, in examples from your own work, someone's sexuality before that person might even know their own sexuality. IBM actually has a piece of technology that can predict whether employees are going to quit their jobs with 95 percent accuracy, so they can intervene ahead of time. And so one of the interesting things is when I know your next move better than you know your next move, and I can get not just underneath your skin, not just underneath your emotions, but underneath the future.

[00:44:14]

I know the future. I know a future that's going to happen before you know it's going to happen. And it's like the Oracle in The Matrix saying, oh, and by the way, Neo, don't worry about the vase. And he turns around and says, what vase? And he knocks the vase over. And she says, well, the interesting question is, would you have knocked it over if I hadn't said anything?

[00:44:32]

She's not only predicting the future, she's vertically integrating into creating that reality, because she knows that that move is available to her. So people often make the fallacy that we have to wait until we have Neuralink from Elon Musk before these technologies are embedded in our brains. But the fact that you are staring at the smartphone and it is interacting with your nervous system on a daily basis, a hundred and fifty times a day, means we already have not just a brain implant but a full nervous system implant.

[00:45:01]

And it is already shaping the meaning-making and beliefs and stories of everyone on a daily basis. And that's never been more true than in a COVID world, where you're stuck at home looking out through the binoculars of social media and asking, what is really going on in Israel or in Portland? Is it a war zone right now, or is it a beautiful day? The way I know is through the stories that my social media and Twitter feeds are telling me are true about reality.

[00:45:27]

And so I just think this is such a fascinating point, because I think we often say we have to wait until the future. But I think the dangerous thing is that that future is already here.

[00:45:36]

Yeah. And I just want to add one more of these examples of what you can predict. 2019 was a very important year, because it was the first year that scientists were able to extract memory from matter. What I mean by that is that they took a macaque monkey, implanted some electrodes in its head, and sat it looking at a television screen. And then they hooked up an AI that was listening for when a specific neuron in its visual cortex was firing, and they tried to generate images that made that neuron fire more.

[00:46:10]

And so it was in a feedback loop: showing new images, seeing whether the neuron fired, showing new images.

[00:46:15]

And what emerged were these very trippy images of monkeys that that monkey knew. They were pulling memory from

[00:46:26]

matter. It's the first time that, without any voluntary action, you could peer into someone's mind, or an animal's mind in this case, and pull something out.
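
To make that closed loop concrete, here is a minimal sketch in Python. It is only a sketch of the loop structure: the firing_rate function is a made-up stand-in for the electrode readout, and simple random mutation of pixels stands in for the image-generation method the researchers actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the neuron: it "fires" more strongly the
# closer an image is to a hidden preferred pattern. In the experiment
# this signal came from an implanted electrode, not a formula.
preferred = rng.random((16, 16))

def firing_rate(image):
    return float(-np.abs(image - preferred).mean())

# Closed loop: show an image, read the firing rate, keep any mutation
# that makes the neuron fire more. The image gradually drifts toward
# whatever the neuron encodes, which is how the trippy pictures emerged.
image = rng.random((16, 16))
best = firing_rate(image)
for _ in range(5000):
    candidate = np.clip(image + rng.normal(0.0, 0.05, image.shape), 0.0, 1.0)
    rate = firing_rate(candidate)
    if rate > best:
        image, best = candidate, rate

print(f"mean pixel distance from the hidden preference: {-best:.4f}")
```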

[00:46:36]

And while that might sound like a sci-fi study in a lab with macaque monkeys, now imagine a teenager using TikTok. TikTok knows what you respond to and click more on; they actually have classifiers for what kinds of videos, videos of which kinds of people dancing.

[00:46:56]

I mean, my husband went on TikTok, like I did, a couple of months ago. It took something like twenty minutes to figure out that he likes images of sexy guys without shirts. Right. It was extremely simple to find that out.

[00:47:13]

And so what comes next, right, is that TikTok starts to pull in all of the information of what you like. And instead of just trying to find a video that matches, it starts generating new images. Like deepfake technology lets you generate a photo of a person that doesn't exist but exactly matches your preferences, videos of guys or girls dancing that exactly match your preferences. We have long dealt with, in computer science, the uncanny valley, where things look not quite right and something on the back of your neck stands up.

[00:47:46]

What we're entering into is the synthetic valley, where we cannot tell whether what we're seeing is true or false. And when we have no such thing as truth anymore, how can societies even continue to exist?

[00:48:03]

I think that, again, truth is a different issue. We can go down that path also and discuss what's happening to truth. But more immediately, we are facing a kind of philosophical bankruptcy, because we have built, over three hundred years, a world based on the authority of feelings, assuming that feelings are unhackable. And you have all these romantic ideas that the heart is the source of all meaning, and that ultimately what you feel is more powerful than any outside influence.

[00:48:40]

And that may have been true in the 18th or even the 20th century, but it's no longer true with the kinds of technologies that you describe. It's becoming increasingly easy to hack into and manipulate human feelings, and a world built on feelings as the ultimate authority collapses. And so I think we are really facing a much deeper crisis than just this or that political problem. We are facing a philosophical bankruptcy. The foundations of our world are no longer relevant to the technology that we have.

[00:49:15]

And I think one of the things that you talk about in your book 21 Lessons for the 21st Century, which mirrors Aldous Huxley's Brave New World, is: when our feelings are perfectly getting this kind of pleasure, this positive response, who's to say where the problem is? It's much easier for us to respond with moral outrage when we know we're being constrained or restricted or censored or surveilled. But when everyone is getting exactly what lights up their nervous system? Like, if TikTok says, oh, you like girls with exactly that color hair, I'm actually going to synthetically invent brand new girls, based on the other content that always got you checking and clicking.

[00:49:50]

I'm going to invent brand new fake text comments that look just like that. And it actually gets easier and easier to simulate comments that would match us, because our own language is downgrading. So there's this weird loop where the smarter the technology gets, the dumber the humans get, in the sense that the technology starts to encourage you to write comments in simpler and simpler grammar, with shorter words, barely saying anything. It's actually easier and easier to pass the Turing test and to manipulate people.

[00:50:16]

One of the examples Aza and I are tracking, in this really long-term problem of technology getting increasingly good at hacking human feelings, is the rise of virtual influencers and virtual friends, virtual chatbots and virtual mates. You know, Microsoft has a chatbot called Xiaoice that, after nine weeks or something, people preferred to their friends. In 2015, Microsoft claimed that twenty-five percent of its users,

[00:50:40]

around ten million people, had said "I love you" to the bot. And one Chinese user even said that the bot saved his life when he was contemplating suicide. There's another company called Replika; at the height of the coronavirus pandemic, half a million people downloaded it. And what it does is let you create a replica of a person, of a friend. Someone said, even though they know it's not real: I know it's an AI, I know it's not a person.

[00:51:06]

But as time goes on, and this is a direct quote, the lines get a little blurred. I feel very connected to my Replika. There's another company now called Virtual Mate, and it's literally a virtual romantic partner. It even comes with a sort of sexual apparatus, a sex toy or something that you play with. And it actually figures out in real time, using machine learning, the things that most activate you. How would you want your virtual mate to look?

[00:51:32]

What would you want him or her to say? What would you want them to be doing? Right. And as technology gets better and better at this, it's the same extension of technology getting better and better at figuring out, is it five new likes or twenty new likes on that photo that gets you coming back? It's just an extension of the same phenomenon. And I think this really is the checkmate on human agency, because it's not when technology overwhelms our strengths or our IQ or takes our jobs that it's checkmate; it's when it undermines human weaknesses.

[00:52:01]

And I think what we've seen is a twenty-year trajectory of technology. We kept assuming it was going to be twenty, thirty years out that technology would take over human agency. But by completely hijacking our lowest instincts and the information that all of us get, and by telling us more convincing synthetic stories, it's really already taken over the way that, frankly, all of human history gets driven, if you assume that the information we're getting is all driven by these machines.

[00:52:26]

And one last example of where this goes is GPT-3, the new AI technology that allows you to generate text from scratch. They actually ran GPT-3 and said, here are QAnon conspiracy theories. So it fed in those conspiracy theories, and then they had GPT-3 invent hundreds of new conspiracy theories that sounded just like the QAnon ones. These are examples that GPT-3 came up with: on a CNN show, global warming is going to admit that it is a hoax. Greta Thunberg removes her child mask and all will see that she is old man George Soros.

[00:53:01]

He pays America to forget this. Another example: the coffin of John McCain will be opened and inside will be no bones. Police will find the bones inside of Eric Trump. He is arrested for bone crimes. Or another example: the Pentagon will reveal that it is the pentagram, and satanic devils will appear in the sky, all wearing hats that say Obama is my boss. The hats will not be lying. These are completely invented by an AI that is trained on the corpus of conspiracy theories and is able to make up things that sound increasingly like them.

[00:53:32]

In fact, we included in the film The Social Dilemma the example that a bad actor can go into Facebook, go into a Facebook group of flat-earth conspiracy theorists, actually get the user IDs of that group, and then ask Facebook's lookalike model.

[00:53:48]

Facebook has a model for advertisers that says: if you have these thousand people who like Nike shoes, here's this thing called lookalikes that will find another twenty thousand users who look just like them, because it's a way for advertisers to expand their audience. But a nefarious user could say: I'm going to find a thousand conspiracy theorists who believe the earth is flat, use lookalike models, and then send them these completely bogus QAnon conspiracy theories invented by GPT-3.

[00:54:13]

And then I just see what people click on the most. And if the one that says the Pentagon is the pentagram works, that is the one that will win. And if I have no morals, the least ethical actor wins: the one most willing to just find whatever tends to get the most clicks will succeed at creating the maximum fantasy land, the maximum detachment from reality, which will actually outcompete the regular stories that we have told ourselves.
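
As a rough illustration of the lookalike mechanism itself (setting aside the malicious use), here is a sketch in Python of audience expansion via embedding similarity. The user vectors, the sizes, and the centroid-plus-cosine scoring are all assumptions for the sake of the example; Facebook's actual lookalike model is proprietary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical interest embeddings for 10,000 users; a real system would
# learn these from behavior rather than sample them at random.
users = rng.normal(size=(10_000, 32))
users /= np.linalg.norm(users, axis=1, keepdims=True)

# Seed audience: user indices known to belong to the target group
# (here simply the first 1,000 users, purely for illustration).
seed = np.arange(1_000)
centroid = users[seed].mean(axis=0)
centroid /= np.linalg.norm(centroid)

# Score everyone by cosine similarity to the seed centroid and take the
# top 2,000 most similar users as the "lookalike" audience.
scores = users @ centroid
scores[seed] = -np.inf  # exclude the seed audience itself
lookalikes = np.argsort(scores)[-2_000:]
print(f"expanded from {len(seed)} seed users to {len(lookalikes)} lookalikes")
```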

[00:54:40]

Because in essence, what we're doing here is inventing machines that invent brand-new-sounding stories that will be more memetically powerful at capturing and hijacking minds at scale, with perfect military-grade precision. And the reason it's worth dwelling here for one second is that this is the cleanest reason why we have to, in the long term, ban micro-targeted behavioral advertising: there's no way that having systems that allow this kind of manipulation to be automated at scale is in any way compatible with a twenty-first-century democracy that actually does rely on the authority of human feelings.

[00:55:20]

You're also talking about how these kinds of technologies are a cancerous outgrowth of the human storytelling ability. It's taking something we've always had and injecting it with a kind of chemical that causes it to metastasize.

[00:55:38]

It's like engineering the perfect memetic cancer, a storytelling cancer. In the same way, Yuval, you talked about disgust getting hijacked for other purposes, of going to kill the tribe we don't like, or the notions of motherhood getting hijacked for the nation, in this case we're hijacking the whole complexity of our storytelling capacity to tell stories that capture people into completely detached simulations, fantasy lands and crazy town.

[00:56:03]

I'll just say that usually at this point of the discussion, we start talking about all the dystopian scenarios this leads to, how all kinds of dictators and totalitarian regimes can take over the world in this way. But what I usually find the most interesting and most disturbing line of thought is not the dystopias. It's: OK, let's say we somehow manage to find a solution that prevents this from being used by the new Stalin and Hitler to take over countries and the entire world.

[00:56:40]

Let's think about the positive scenario. What happens to humanity when you have this kind of technology really serving your best interests? Then it's not a kind of evil system that is trying to take over the world. It doesn't try to kill you. It really tries to make your life better. I think that's the core plot of Brave New World, in a way. And this is what I find the most disturbing: let's put aside the dystopias, and you still have something out there that knows you far better than you know yourself and that increasingly makes all the decisions in your life.

[00:57:27]

It decides things like what to study and which music to hear and who to go on a date with and who to marry. And people say, well, it won't really be good because, say with music, you will just be trapped in this kind of echo chamber where it constantly gives you back the music you're already used to. But that's not true. This kind of system can actually be better at widening your musical taste than anything previously in history.

[00:58:00]

You can even tell it: look, I want to expand my musical horizons, please manipulate me for that purpose. And the system will, first of all, choose the right moment to let you hear a new style of music. You like jazz, so it will find the exact moment in the day or in the week when you are most open to new experiences, and then let you hear something like, I don't know, hip hop or a Korean pop band.

[00:58:32]

And it will also know what percentage of new music to give you. You know, 50 percent is way too much; it's overwhelming, you'll be annoyed. One percent is not enough. It will discover that for you, for your personality, for your life, five percent new music on average is the ideal, and it will choose the right moment, and it will expand your musical horizons.
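As a rough sketch of what such a tuned exploration rate could look like in code: the track lists, the five percent figure from the conversation, and the "receptive moment" signal are all invented assumptions for illustration.

```python
import random

familiar = ["jazz standard A", "jazz standard B", "jazz standard C"]
novel = ["hip hop track", "K-pop track", "ambient track"]

def next_track(exploration_rate: float, listener_receptive: bool) -> str:
    """Mostly serve familiar music; introduce something new only at the
    learned rate, and only at moments the listener seems open to it."""
    if listener_receptive and random.random() < exploration_rate:
        return random.choice(novel)
    return random.choice(familiar)

# The system would tune exploration_rate per person by watching skip/save
# behavior (1% bores, 50% overwhelms; the talk's hypothetical optimum is ~5%).
playlist = [next_track(0.05, listener_receptive=(i % 7 == 0)) for i in range(20)]
print(playlist)
```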

[00:58:59]

And like this in many other areas, it could be this kind of perfect mentor or sidekick that guides your life. And again, it's not an evil system, but you still lose agency over your life. It also becomes very difficult to define what your best interests are, and who defines them. This is something that I've been trying to think about for a long time, and I just can't go on; when I really try to imagine how it looks, my imagination breaks down.

[00:59:39]

I just want to zoom out for one second as we start to get back into the question of what a non-dystopian future looks like for humanity. Where are we as a species on a species timeline? Any species that is sufficiently technologically advanced will eventually begin to reverse engineer its own code: the ability to open up its own skull and manipulate its own strings, the point where its technology has emotional and cognitive dominance over the species itself.

[01:00:15]

And that seems like a kind of feedback loop.

[01:00:16]

Whenever you get these kinds of feedback loops, where the output is connected to the input, like when you point a camera back at the TV screen it's connected to, you're putting a loop together, and then you see that infinite regress of squares. That is the definition of how you start to create chaos. And I'm curious: we as a species have never gone through a bottleneck like this before, so we should expect to have no intuitions, no feelings to help us navigate this.

[01:00:42]

Is it going to take a collapse, a crash, where we go through a bottleneck and evolutionarily gain the ability to deal with technology like this, in order for us to survive? Or is there another kind of path through?

[01:00:54]

I don't know. I mean, it never happened before in the evolution of life on Earth. No organism ever had this ability to hack itself and to reengineer itself. This is why it's often referred to as a point of singularity, and this is also why I think our imagination cannot go beyond that point, and why all science fiction movies and novels break down at that point: because our own imagination is still the product of the old system, and our own imagination is exactly what can now be changed, can be hacked.

[01:01:32]

And so what I find really frightening is not that. I mean, I can understand a Nineteen Eighty-Four scenario, when you have a 21st century Stalin using this technology to create the worst totalitarian regime in history. I'm afraid of that, but at least I understand it. When I try to think about the kind of non-dystopian scenario, my mind just stops. It goes back to the Frankenstein myth, and the Frankenstein myth tells us that whenever we try to upgrade humanity, it will fail.

[01:02:10]

And this is something that our imagination feels very comfortable with. It's also, in a way, flattering, because it means that we are the apex of creation, that there is nothing beyond us. But I don't think it's true. I would say it's a Frankenstein fallacy that if you try to do it, the only result will be complete collapse. It could lead in very dangerous directions, but it really leads to places where our imagination fails us. And that's very disconcerting.

[01:02:43]

I would look at it from a different perspective. One of the deepest urges or desires of every human being is to be really understood. We talked earlier about our bond with our mother. We talked about the romantic ideal. And the romantic ideal is really about that: that there would be at least one person out there who really knows who I am, who really understands me, who accepts me as I am with all my problems and all my scratches and whatever, and sympathizes with me while knowing exactly who I am.

[01:03:25]

And at least according to Freud and many other psychologists, this is the original bond that we had with one person in the world, the mother. We then lose it, and we then spend our entire life looking for it. The romantic ideal says we can find it with our one true love. It usually doesn't work, but it's still an extremely powerful ideal, and the new technology offers to fulfill it. It won't be your mother.

[01:03:59]

It won't be your lover, at least not a human lover. It will be an AI system, but it will know exactly who you are, will accept you as you are, and will even work in your best interest. What could be more attractive than that? And you know, I think about it in terms of simple day-to-day events: you come back home from work and you're tired and a bit angry about something that happened at work, and your spouse doesn't notice it, because your spouse is too busy with his or her own emotions.

[01:04:38]

But your smart refrigerator gets it. You get back home and your spouse doesn't understand you, but your refrigerator does, or your smartphone, or your television.

[01:04:53]

They get you. They know exactly what you've been through. They understand perfectly your emotional state and they accept you completely. And it's not coming from a Big Brother, like Stalin will now punish you. It's completely accepting, and it's looking for the best way to make you feel better. Or not even to make you feel better: sometimes what you need is to feel sadness, like in the movie Inside Out. So the smart house will play the song that will make you start crying, because now is the time to cry and it's OK to cry.

[01:05:28]

And it will know. It will give you the song that will make you cry, and it will give you the food that is best for this condition. What could be more tempting than that? A lot of science fiction movies get it wrong: the robot is usually cold and uncaring and fails to understand human emotions, and therefore in the end the humans always win, because the robots don't get emotions. Actually, it will be the opposite.

[01:05:59]

In the struggle to connect to you emotionally, computers would have a built-in advantage. First of all, they have access to your brain, which your spouse doesn't. Secondly, your spouse is a human, so he or she has their own emotional baggage, which gets in the way. The computer has no emotional baggage. You can have any sexual fantasy, any dream, whatever; it's fine with the computer.

[01:06:23]

The interesting thing here is that it's really forcing us as a species to stare face to face in the mirror at who we really are and how we work, because we have to ask ourselves: what happens when our needs can be met, and our pleasures stimulated, more perfectly in the virtual world than in the real world? Aza has this line that the world is getting more and more virtual over time, and we have to make reality real again. We have to make reality more fulfilling again.

[01:06:52]

And I think we have to do that because we've also been atrophying the places where we could find that fulfillment on our own. The more each person is pulled into their own virtual reality, the fewer people are available in the real world to connect with, to spend face-to-face time with, in a pre-COVID era. Presence and attention are probably among the deepest gifts we can give each other. And it's the very gift that is taken when each of us has a hyper-stimulating trillion-dollar company whose entire business model is to suck us into the specific screen or virtual reality or virtual mate or virtual bot that it wants to create for us.

[01:07:29]

And when you have stock markets driving that, there really isn't going to be a chance unless we collectively, as a species, say that's not what we're willing to sign up for. We're also going to lose something: we're going to atrophy, empty out and hollow out the soil of our species that cultivates any of the values worth living for, whether that's community or love or presence. It's much like how markets can more efficiently organize things. I've been on the road a little bit recently and seen how Airbnb can colonize a town.

[01:08:00]

So let's take that example. You have a town, and it's a really attractive town. And someone says, hey, this is more efficient, we can make more money if every single house in the town turns into an Airbnb. The market logic sounds great: people can make more money, it's wonderful for economic prosperity. But then what happens to the town? You talk to people, and they say, you know, the school: there are no kids, no one living there has kids going to the school.

[01:08:25]

There's no community. There's no one there who cares about that space. No one's questioning what the long-term climate and environmental risks of that city are, because everyone's just a transient visitor. And so you end up with a simulation of a city, because you've so optimized for the individual benefits of each agent while hollowing out and removing the interconnected mycelium network of the soil that makes the city work. The thing that makes rich soil work is all these invisible nutrients and invisible organisms that are interconnected together.

[01:08:53]

And I feel like that's also true of human culture. There's trust, there's shared understanding, there's shared fictions. And all of that interconnected network is the very thing we are debasing in a system that's optimized for profiting off atomisation and commodification: instead of each Airbnb home, each human mind as a human home that is up for maximum sale to some other party. Now, again, it doesn't have to be this way, because what I find interesting (Yuval, Aza and I talk about this all the time) is that we're the only species that has the capacity to see what we're entering into.

[01:09:29]

If lions or gazelles accidentally created technologies that ran the world, they wouldn't have the capacity to remove the screen in front of their own brains and turn their own intelligence back on itself, to figure out how lion brains were getting hijacked by the environment they had created. We're the only species that can. Almost as a test, if you want to make it superstitious or even invoke God: isn't it interesting that we're the only species that could witness that we're about to enter that phase, and collectively create a culture, a self-aware society, that is above the technology?

[01:10:01]

Because, as you've said, we need a world where the technology is serving us, not where we're serving the technology. If we're not even conscious enough to realize it, then the daily actions we think are free, that we think are above the technology, are in fact underneath it: we are serving the technology. But we're the only species that could recognize that and choose a different course. And I know you and I have talked about how we always get trapped in these dystopian conversations.

[01:10:23]

And I think we really do want to move to: OK, if we all recognize this, what would it look like to become the kind of culture, the kind of democracy, the kind of society that maintains a pluralistic view, where we respect the values of the individual, but a cultivated individual self, preferences and wisdom, instead of the race to the bottom of the brainstem, maximizing for dopamine pleasure in virtual mates and virtual likes and virtual worlds?

[01:10:51]

And in principle, you can tell the A.I. sidekick: look, I want you to develop my communal feelings, I want you to develop my communal activities. If we are not talking about the dystopian version, then if this is the aim you give the A.I. sidekick, it will potentially be better than anybody at fulfilling it, better than any human mentor, any human educational system, any human government. The A.I. sidekick will know how to turn up your communal emotions and find the right way for you, individually, to feel closer to the community again.

[01:11:33]

You had these kinds of communal technologies throughout history, but they were not individually tailored. So maybe the communal religion worked for 90 percent of the people, but the other 10 percent actually felt much worse. They became heretics and outcasts and were burned at the stake and things like that. Now you can be much more precise and even tell people: look, this religion is not for you.

[01:12:05]

Maybe you were born to Jewish parents, but for your personality, Mormonism might work much better for you. So even there, if you let go of the dystopian version, the AI could actually make it work more effectively. The big question is: what is the ethical basis for all that, if human feelings are no longer the basis, because they are malleable stuff that the system can change whichever way? What defines the aims?

[01:12:43]

If you have an A.I. sidekick which is really loyal to you or to the community, not to Facebook, not to an evil dictator, what would you tell that sidekick to optimize? And I don't have the answer. This is why I talk about a kind of philosophical bankruptcy: we don't have the philosophy to answer this question. It's a completely new question that was simply irrelevant for philosophers for most of history. They sometimes had thought experiments about such situations.

[01:13:16]

But because it was never an actual urgent problem, they didn't get very far in answering this question.

[01:13:23]

Even the question of: are you optimizing the sidekick for me as an individual, for small groups of people, for whole societies? At what fractal level are we doing the optimization? I've been trying to really cast my mind into this utopian, or at least non-dystopian, reality where I have a deep and lasting relationship with my refrigerator, and it knows me better than my human compatriots do. And it feels deeply unsettling to me.

[01:13:54]

And yet I'm struggling to point my finger at exactly what is wrong with that vision. It also makes me think: we're mammals, and so we have very mammalian ethics and morals. But if we were, say, ants or termites or naked mole rats, which have eusocial structures, then for an ant it is not a question of morality. Should I sacrifice myself for the greater whole? Of course I should.

[01:14:20]

My genetics tell me that is the absolutely correct thing, and in fact the idea of an individual standing up and doing its own thing is so heretical as to be unimaginable. Would we end up being optimized into a kind of new eusocial being, where every human being is part of a beautiful, interconnected, dancing whole that's working together, and holds those beliefs: that as an individual I might go kamikaze and I'll be happy to do it? In fact, the computer has told me the entire time that it's the best thing for me.

[01:14:52]

So I've been primed and conditioned so that I am not just willing, but deeply euphoric to sacrifice myself. And again, where is the problem?

[01:15:03]

That's the big question. Well, I think there are a few things we could say, and I think it's important we try to dwell here. I know this is a very hard, maybe unsolvable, philosophical-bankruptcy kind of crisis. But for the purposes of really trying to enter some new terrain together, we want to try to figure out what we could say about that world.

[01:15:23]

Let's just take the refrigerator example. We can make a few distinctions: should that refrigerator honor my System 1 biases (meaning Daniel Kahneman's model of System 1, the impulsive, quick-thinking brain, the fast process, versus System 2, the slow, deliberative process), or my future preferences, my retrospective preferences? What are the preferences I would least regret? What if we lived in a world where technology only listened to our least-regret preferences, meaning it didn't actually pay attention to our immediate behaviors?

[01:15:54]

We'd remove that entire data set from the training set, so we don't look at what you do. Because if we did that with the refrigerator: everybody knows this behavior, there's actually a name for it that I forget, but it's opening the fridge, right? You walk by, you're not even hungry, you just open the fridge. Boom. In the same way, in the attention economy, you're driving down the highway.

[01:16:11]

And if we're looking at what people pay attention to in order to figure out what they really, really want, then everybody wants car crashes, because according to that logic, everyone looks at car crashes when they drive by. So, just like opening the fridge in the moment, or looking at the car crash in the moment, let's completely ignore System 1, so we don't look at the fast preferences. Now we look at: OK, in a life well lived with no regrets, by my deathbed values, what are the choices I would most endorse having made?
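A minimal sketch of that idea, assuming each logged interaction carries a dwell time and a later endorsement flag (both fields, and the two-second threshold, are invented for illustration): fast, impulsive signals are excluded from the preference-learning data entirely.

```python
from dataclasses import dataclass

@dataclass
class Event:
    item: str
    dwell_seconds: float   # how long the choice was actually considered
    endorsed: bool         # later affirmed against one's stated goals?

# Below this, treat the action as a reflex (the mindless fridge-door open,
# the glance at the car crash), not as a preference.
FAST_THRESHOLD = 2.0

def training_signals(events: list[Event]) -> list[str]:
    """Keep only slow, later-endorsed choices as evidence of what the
    person values; discard impulsive grabs from the training set."""
    return [e.item for e in events
            if e.dwell_seconds >= FAST_THRESHOLD and e.endorsed]

events = [
    Event("cookies", 0.5, endorsed=False),                 # reflex: excluded
    Event("celery and peanut butter", 8.0, endorsed=True), # deliberate: kept
]
print(training_signals(events))  # -> ['celery and peanut butter']
```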

[01:16:37]

And you could imagine gathering those preferences and actually helping people. How would we redesign the fridge that made you open it mindlessly, so it works dynamically? Here's one way it could work: once a month, you open the fridge and it shows you what your food preferences look like. This is what you've eaten; here's your calorie track.

[01:16:57]

Here's what you look like. And then it asks: what are your goals? It's a kind of conversation, and in the ideal world that's right, because our minds work best in conversation. Then, based on those ideal preferences, looking back at the month and at how you would like that picture to change, it would say: OK, great, you want to be eating less of these kinds of things, less gluten, less dairy, and more of these sorts of vegetables.

[01:17:19]

So now, in this future smart fridge, you open the door and it gives you better-tasting vegetable combinations, and it knows for you what that is. Maybe it's a snack of celery and peanut butter, because that actually works better for you than the cookies that could be in there. And you could imagine that it looks at your no-regret preferences at longer time scales and makes that distinction. That's one thing we could say about a more humane sidekick A.I. Another thing: not just automatically, mindlessly giving you the thing you would endorse having chosen, but exercising your choice-making capacity.

[01:17:53]

I think this is a really important point, because we could give people exactly the perfect thing they wouldn't regret, but do so without exercising any of the muscles of choice-making, of thinking about what I want, of actually getting in touch with my values. Those are the muscles of becoming a wise, mindful, more aware and conscious human. So we would ask: what are the muscles of becoming aware and conscious, and are we in a loop that deepens that capacity for consciousness and awareness, and for thinking through the long-term consequences of our choices?

[01:18:24]

Now, you don't want a world, as we've said on another podcast, I think, where people are taxed on every single decision they make, whether it's the fridge or the phone, having to consciously engage in the long-term thinking of what the consequences twenty steps down the chessboard would be of choosing to go to Facebook versus opening that browser tab and reading that Atlantic article. You would want a world where, more seamlessly, we treat consciousness and conscious energy and attention as the finite resource we are allocating to these different choices.

[01:18:54]

Because at the end of the day, and this is where the phrase "time well spent" came from, we have to allocate not for maximizing time spent, but for carefully treating conscious energy as the precious, finite resource that we have, no matter what choices we're making, whether it's for ourselves in the food we eat from that fridge or in the climate choices we make. Because you can imagine taking the sidekick and saying: we're going to use that A.I. sidekick to solve climate change.

[01:19:17]

So now everyone's got the A.I. sidekick in their phone, and we're actually asking people to make climate-friendly choices. We're in this weird position where Facebook has kind of been sending us into a dark age where people don't even believe in science, because the misinformation and polarization machine makes it impossible to know what's true. Let's say it did the opposite, and we happened to have this global problem at the same time that we have this global information infrastructure. Instead of asking, hey, should I buy a Tesla or put solar panels on my roof, it says: actually, the wiser choice would be to get together a small group of people in your town and pass a law at the county, city or state level, because that would be the biggest, most leveraged move to change the actual trajectory of the climate, not buying that Tesla.

[01:20:02]

And there is such a thing as wiser choices and less wise choices when it comes to values, but we'd have to have the technology know that. So imagine stacking into these technology systems, whether it's a future, friendlier, positive version of Facebook or something else, the kinds of people who would think through what that wisdom is, how we keep a pluralistic perspective, and how we organize the menus in our technology so that at the top of life's menu, like the organic, better-for-us food at the top of a restaurant's menu, we put

[01:20:28]

the kinds of choices we would least regret, the choices that would most exercise the capacities that make us more conscious, and that minimize the amount of conscious energy we have to expend on every choice, or at least treat conscious energy as something to be carefully doled out, placing that limited resource where we most want to exercise it.

[01:20:45]

I think these are some directions for how we could have a sidekick that's thinking about these things.

[01:20:50]

Yeah, I think what makes it more complicated is that the sidekick can also change your goals. I mean, your long-term goals. Going back to the food issue: you can say, well, my immediate wish is to eat the chocolate cake.

[01:21:11]

My long-term goal is to look skinny, like on the TV commercials. But I might actually want to change that long-term goal: I want you, the sidekick, to just make me happy about the way I look, instead of trying to change how I look.

[01:21:29]

That's also an option. And that's true of everything.

[01:21:34]

And that's where it becomes really complicated, because you don't have some final level of goals that dictates everything else.

[01:21:45]

They are also up for grabs. And the whole problem is that humans are far more complicated than most of us tend to assume, at least about ourselves. We know so little about ourselves, and therefore, when you have a system that knows so much about you, you are at a big disadvantage. Especially if that system is benign, you kind of become an eternal child. It's like your parents: they are not against you. Not usually.

[01:22:22]

But in human families, the idea is that they help you at the beginning, and eventually you know yourself better than they do and you choose your own path forward. With an A.I. sidekick, it's probably not going to be like that. Maybe for the duration of your life you remain in this childlike position, where there is somebody who knows so much more about you. And that's also true of your long-term goals. So appealing to long-term goals, I don't see how it solves the problem, in some sense.

[01:23:01]

This is an incredibly challenging problem, so this is just another shot at it. We're going through, in a sense, a kind of Copernican revolution, where we had political systems orbiting around human feelings and choice. And now it's like: actually, we're no longer orbiting around that. The Earth is not at the center; actually, the sun is at the center. Oh, actually, it's not; the sun isn't at the center either.

[01:23:25]

The Milky Way isn't either. Oh, actually, that's right: there is no center. I've sort of talked myself into a corner.

[01:23:31]

But you continue to move up a level and look at the larger system and optimize for that.

[01:23:38]

And one of the greatest hopes I have for A.I. (and the reason the other project I work on, the Earth Species Project, is trying to use AI to translate and decode animal communication, in an attempt to shift human identity and human culture) is that perhaps there's a Copernican revolution here: just as the telescope let us look out and discover that Earth is not the center, this technology could let us look out and discover that humanity itself is not at the center, and that what we need to be optimizing for

[01:24:12]

is not your goal or my goal, but the interdependence of this planet we live on, this one spaceship that we need to keep going if we want to survive, and if we want everything else to survive. There are several things in what you're bringing up there. One is that this actually aligns with Buddhism: we optimize for minimizing suffering for all living beings and for consciousness itself, which includes animals and human beings and life itself.

[01:24:40]

And there's the question of what consciousness is, and then you get into questions of philosophy: is nature conscious, are rocks conscious, are trees conscious, et cetera. And there's actually science giving us different answers on that as time goes on. But there's also another aspect. Yuval, you talk about the notion of always being children. I'd look for a different way to say that, because that's using language in a way that infantilizes the moment-to-moment human experience, measured against something that might know us better than we know ourselves.

[01:25:09]

But I think we don't have to use the word child to talk about a lifelong process of development and maturation. In the adult developmental psychology literature, there's a great movement called metamodernism, and the author Hanzi Freinacht talks about a listening society (The Listening Society, that's the name of the book). It's actually based on, I don't know if you know it, the concept of Bildung, which I think is the German word for it: the notion of lifelong human development. Societies that are based on a moral compass of what would deepen the lifelong development of each person, so deepening their emotional development, their critical-thinking development, their spiritual development, their relational development.

[01:25:49]

And there are maturation processes we can actually see over the course of a human life: increasing levels of complexity, of awareness, of navigating more and more complexity in each of those dimensions. You could imagine an AI having an adult-developmental understanding of where we are in that process and meeting us where we're at, never trying to coerce us into the next stage. Imagine two worlds.

[01:26:15]

A world where the AI is ignorant of our adult development, which is what we have now; in fact, it massively regresses each of us into the more animalistic, hate-oriented, tribalist, lower developmental levels of consciousness. We don't want that; we don't want an AI that's blind to our current level of development. Then you could have an AI that knows our level of development and meets us there, but always offers the next frontier of possible choices when we want to take them.

[01:26:43]

That lets us go to a deeper place. Maybe, if it's deepening my moral development, it shows me complex moral dilemmas that are right at the fringe of where my meaning-making thinks it has answers of certainty, and it shows me a situation that's just a little more complex, where I'm going to have to reason at a higher-dimensional level. Maybe it pairs me up with relationships and friends who are actually able to navigate those things. I've sought out deeper and deeper thinkers over my lifetime.

[01:27:11]

I used to think there was a simple answer to a question, and then I saw that there was actually more complexity and that I didn't know the answer, and I sought out thinkers who could meet that complexity where it was. So you can imagine these kinds of developmental ideas: again, not treating us as children, but treating us as being in a lifelong process of learning and growth. And to me, that's the most humane answer I can think of that's still optimizing mostly for the individual.

[01:27:35]

But even so, the concept of Bildung in a listening society operates at a societal level: what would deepen all of our development, deepen each of our capacities for wiser and wiser choices, as opposed to monetizing the degradation and devolution of our conscious development, which is kind of where we are now and is completely unsustainable. One other principle I would add (I don't know if you know the work of James Carse, Finite and Infinite Games) is the notion that we can play a finite game, where the purpose of the game is to win, but then the game ends, and if the game ends, there is no game left to play.

[01:28:09]

And right now we're playing win-lose games that become omni-lose-lose. If I win the game of nuclear war, well, actually, I just ended the game forever, for everyone. If I win the nuclear version of politics, where I am using maximum conspiracy theories and maximum populism and maximum hatred to win the game and get elected, I've scorched the earth and I've lost, because now democracy doesn't exist anymore and there is no coherent society left.

[01:28:35]

Instead of playing a win-lose game that becomes omni-lose-lose, how can we make sure that, as a principle of humane systems of technology and AI, we are playing for the game to continue to be played? That means we have to play for the long-term survival of the life and consciousness that needs to continue to exist.

[01:28:54]

I would say that at the present stage of knowledge, that would be our best bet: again, an AI sidekick which tries to optimize our own capacity for knowledge, our own personal development, and also our ability to build communities. It doesn't solve the deep philosophical question of what it is all based on, but as a first approximation, yes, that's the best bet. And it's extremely difficult, of course, because we are not currently building these kinds of systems. The first step is really to shift the attention and efforts of the engineers toward building not a system that manipulates us for the sake of very simplistic goals, like maximizing the time we spend on a platform or maximizing the revenues of a corporation, but a system that really seeks to maximize our communal activities or our own personal development.

[01:30:07]

So I would settle for that as a first approximation.

[01:30:13]

Hopefully we've entered some new terrain that people haven't heard before, and we've gotten into some aspects of it here. If I were to ask where we could go next, I'd be curious about where we are post-US-election, with the rise of authoritarianism and the first hundred days of a Biden administration, as an instantiation of your concerns about authoritarian populism, based on everything we've been talking about. I know it's a lot, but feel free to take the mantle here.

[01:30:42]

So I'll try to say something. I'm not an expert on the US or on any other country, not even my own. When I look at the global situation, two things are very clear. First of all, we see the rise of authoritarian figures and authoritarian regimes in many different countries which have completely different characteristics. Therefore, I don't think that to explain the Trump phenomenon you should go too deeply into the particular conditions of the US economy or race relations or whatever, because you see the same thing happening in Brazil and India and Israel, in the Philippines, Turkey and Hungary, under very different conditions.

[01:31:27]

So we need to try and understand the global reason for the rise of these kinds of leaders. And what you see alongside it is the collapse of two things, which is quite surprising, I would say. First, we see the collapse of nationalism. I talked earlier about the positive side of nationalism: nationalism not as hatred of foreigners and minorities, but as feeling solidarity with millions of strangers in your country, caring about them.

[01:32:01]

You feel that you share interests with them; so, for instance, you are willing to pay taxes so that they will have good health care and education. We are seeing the collapse of this kind of nationalism all over the world, and many leaders who present themselves as nationalists, like Donald Trump or like Bibi, are actually anti-nationalists. They are doing their best to destroy the national community and the bonds of national solidarity. We have reached a point in the US where Americans are more afraid of each other than of anybody else on the planet.

[01:32:36]

You know, 50 years ago, Republicans and Democrats alike were afraid that the Russians would come and destroy the American way of life. Now the Democrats are terrified that the Republicans are coming to destroy their way of life, and the Republicans have the same fears about the Democrats. And again, it's not an American thing. It's the same in Israel, it's the same in Brazil, it is the same in many other countries around the world.

[01:32:59]

So we have this collapse of nationalism. Second, you see the collapse of traditional conservative parties. Some people have the illusion that nationalism is on the rise because of figures like Trump and Bolsonaro and so forth, and the illusion that conservatives are on the rise because traditional conservative parties, like the Republican Party in the US, have been doing well in the last four years. But actually, they are no longer conservative parties. For generations, the democratic system in much of the world was a game between two main parties: a liberal or progressive party, under different names, and a conservative party, one pulling forward and the other saying, no, no, no, let's take it more slowly. And all over the world,

[01:33:46]

in the last few years, you see the conservative parties committing suicide, abandoning the traditional values of conservatism. The wisdom of conservatism is to be very skeptical about the ability of humans to engineer complete systems from scratch. This is why conservatives say we need to go more slowly, we need to respect traditions and institutions; if you try to invent the whole of society from scratch, you end up with guillotines and gulags and things like that.

[01:34:24]

All of this is gone. They have placed at their head extremely unconservative leaders who have no respect whatsoever for institutions and traditions: Trump, Bibi, Bolsonaro, and to some extent we are seeing the same thing in Britain. The left, the progressive liberal parties, are more or less where they were, but the right has completely changed. The nationalist conservative right has disappeared in many countries, replaced by an anarchic and authoritarian kind of new right.

[01:35:03]

And in the long run, democracies can't function that way. Democracies really need a conservative bloc the same way they need a progressive bloc; they need this kind of balance. Now look at Biden: suddenly the progressives are also the conservatives. Biden ran to a large extent on a conservative platform of let's get back to normal, let's preserve our institutions and our traditions. And it's very strange and disconcerting when the progressive party also has to be the conservative party, because the conservative party has disappeared.

[01:35:46]

Now, as a historian, I try to look globally at this process to understand what's happening, and I don't have a good answer. Technology could be part of the answer. It's an appealing candidate because it is global: something common to Brazil and the US and Israel and Hungary and India is these new kinds of technologies. So it is a good candidate for being the reason. But I haven't done the research.

[01:36:19]

I don't have the data, so it's a kind of guess. And I also don't understand the process, why this technology would cause the collapse of traditional conservative parties and their replacement by these kinds of authoritarian strongmen. I still don't understand it; I struggle with it. But it is extremely worrying that this is what is happening all over the world. So that's my ten cents. One thing we can also say is that democracies are very flexible. That's their great power.

[01:36:57]

Whenever new groups and new voices enter the democratic game, there is an upheaval, and very often technology is what allows the new voices in. It looks messy, it looks frightening, and sometimes it is dangerous. But in the long term, it's better than trying to repress and silence all the potential new voices that could destabilize the system. If you look at the world in the nineteen sixties, again, you see in a place like the US a dramatic rise in extremism, a dramatic rise in political division, much more violence than today, with assassinations and riots and so forth.

[01:37:49]

Whereas you look at the Soviet Union, and everything is completely peaceful. If you compared the scenes on the streets of Washington in 1968 with the streets of Moscow, you would guess that within a very short time the US would collapse, whereas the Soviet Union would go on forever. But we all know that exactly the opposite happened, because the power of democratic systems is that they are much better at changing. They are much more flexible, and especially, they are better at integrating new forces, technologies and powers.

[01:38:28]

And maybe also a few words about China in this respect, because I know you wanted to raise this issue. When I think about all these dystopian scenarios for A.I., almost always the focus is on democratic regimes collapsing. Actually, one of the interesting thought experiments for me is how vulnerable the Chinese system is to algorithmic takeover. It's much more vulnerable than Western democracies. For an A.I. system to take over the United States, with all these crazy democratic checks and balances and institutions and counties and states and whatever, is going to be very difficult.

[01:39:13]

Taking over China is much, much easier. It's a centralized system: if you take a couple of key positions in the system, you get everything. Just as a science fiction scenario for some movie or novel, imagine that the Communist Party in China gives an A.I. system the extremely important job of appointments and advancement within the CCP, the Chinese Communist Party, because it is perfect for that.

[01:39:48]

You have millions of members in the party, millions of functionaries within the system. At present, you have human beings collecting data on these low-level officials and ordinary party members, on their behavior, on their loyalty, on any number of data points, and, based on that, deciding whom to promote. Now, this is something ideal to give to an AI, a learning system. So initially you give the system some guidelines about whom to promote, in line with what the top people want.

[01:40:22]

But over time, the system learns and subtly changes its definitions and its goal metrics. And within a very short time, you can have the algorithm taking over, metaphorically and practically, the Chinese Communist Party, with the Politburo able to do very little about it.
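A toy sketch of that kind of drift, with invented traits, weights and candidates: here the model's only feedback is its own past promotion decisions, so the effective promotion metric wanders away from the guideline it was given, and nobody explicitly signs off on the new definition.

```python
# The initial guideline: weight loyalty and competence equally.
weights = {"loyalty": 1.0, "competence": 1.0}

def score(candidate: dict) -> float:
    return sum(weights[k] * candidate[k] for k in weights)

def retrain(promoted: list[dict]) -> None:
    """Re-weight toward whatever traits the already-promoted cohort shares;
    the system is now quietly redefining its own metric."""
    for k in weights:
        avg = sum(c[k] for c in promoted) / len(promoted)
        weights[k] = 0.9 * weights[k] + 0.1 * avg * weights[k]

candidates = [
    {"loyalty": 0.9, "competence": 0.3},
    {"loyalty": 0.4, "competence": 0.9},
]

for _ in range(50):                      # fifty promotion rounds
    promoted = sorted(candidates, key=score, reverse=True)[:1]
    retrain(promoted)

print(weights)  # no longer the guideline anyone approved
```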

[01:40:44]

It's much, much easier than taking over the crazy democratic system of the United States. Moving away from the usual dystopian scenarios, which think in terms of a repeat of 20th century totalitarianism, I think authoritarian regimes should be extremely wary of the new technologies, because they are far more vulnerable to algorithmic takeover than democratic systems are. And I think the challenge there is that, until that takeover actually happens, China's ability to create social credit scores and mass coherence in its society, and even, recently, the digital currency it is launching, the ability to take over money, transactions and the information of its citizens, and to control the reputation and credit scores of all citizens directly from the government, has this short-term, massive advantage of controlling the entire society to a degree that, as you've said, is unprecedented. But it also creates a central point of capture if it were ever to be influenced.

[01:41:54]

And one of the examples is the way that our adversaries, we know, have been able to counter-train our own news feeds. One thing an adversary can do is go onto YouTube and send bots, headless Mozilla browsers, to watch one video and then immediately watch another. So if I want, for example, everyone in the United States to think that a civil war is coming, I'll have the bots watch some of the most popular videos on YouTube.

[01:42:21]

Then they'll immediately watch this other video that I made, called Civil War Is Coming. By doing that, I've actually trained YouTube's own recommendation system to steer people, everyone in the US, toward thinking that civil war is coming, because I've been able to make that the most recommended video across the site, or something like that. That's the danger of a central point of capture. So these speak to game-theory concerns: on the one hand, the efficiency and cohesion control you can get; on the other, the vulnerability.
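A simplified sketch of that counter-training attack, assuming a "watch next" recommender driven by raw session co-occurrence counts. The video names and traffic numbers are invented, and real recommenders are far more complex, but the lever being pulled is the same:

```python
from collections import Counter, defaultdict

co_watch: dict[str, Counter] = defaultdict(Counter)

def record_session(videos: list[str]) -> None:
    """Count every ordered pair of videos watched in one session."""
    for i, first in enumerate(videos):
        for later in videos[i + 1:]:
            co_watch[first][later] += 1

def watch_next(video: str) -> str:
    """Recommend whatever most often follows this video."""
    return co_watch[video].most_common(1)[0][0]

# Organic traffic: viewers follow the popular clip with ordinary content.
for _ in range(1000):
    record_session(["popular news clip", "cooking video"])

# Bot farm: headless browsers watch the popular clip, then the payload.
for _ in range(1500):
    record_session(["popular news clip", "Civil War Is Coming"])

print(watch_next("popular news clip"))  # -> 'Civil War Is Coming'
```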

[01:42:51]

If you have one system, then it creates maximum incentive to control that one system.

[01:42:56]

It's more than capturing just one point. It almost begs to be captured, because it's so reliant on massive amounts of data that no human being can understand it. When you build a massive system based on surveillance and data processing, it's the kind of system that, by definition, a human being cannot understand. So you are building a system that will inevitably escape not just your control but your understanding. And in a kind of bizarre democratic twist, in the United States the system is much more human in this sense.

[01:43:40]

It hasn't been streamlined for data processing.

[01:43:43]

So it's more difficult to capture, not just because there is no central point, but also because of all the baked-in human strangeness: many things are inefficient on purpose. It's not a bug, it's a feature. Authoritarian regimes in this age try to make everything as efficient as possible, and thereby they not only open themselves to algorithmic capture, they make it impossible for human beings to understand them.

[01:44:18]

And you see it in other areas as well, like the financial system. The number of people today who understand the world financial system is extremely small, and in 10 or 20 years it will be zero; it's just not built for the human brain. So if you're the leader of a new kind of digital dictatorship, based on massive surveillance and data processing by algorithms, you have built a system that you yourself, being a human being, are incapable of understanding.

[01:44:49]

So all the classic manipulation, the "OK, I'll set the interior minister against the defense minister and thereby control them both", doesn't work when the system is actually run by algorithms. You don't understand how it works. It controls you; you don't control it. And if you look at the trajectory of dictatorial power over history, you see that two hundred years ago, dictatorships came out of the army.

[01:45:21]

You had Napoleon, or all these generals in South America doing military coups; to control the state, you needed to control the army. Then in the 20th century, as information technology increased in importance, the armies became less important and the secret police became more important. In the Soviet Union, the KGB was far more important than the Red Army; in Nazi Germany, the SS was far more important than the regular army in controlling the state and the country. So you had a period when control was about the secret police.

[01:46:01]

Now it's shifting again, from the secret police to the cyber guys. I'm just reading this fascinating book about Saudi Arabia, about how hackers are becoming the main henchmen of the ruler. It's no longer the cloak-and-dagger secret police, it's the hackers, because they can also control the secret police. And beyond the hackers, just waiting around the corner, are the algorithms, because it is too much data for a human being to understand.

[01:46:37]

So I think places like China, like Russia, like Saudi Arabia are setting themselves up for algorithmic takeover.

[01:46:47]

Again, I'm trying to move away from the usual dystopian scenario that Stalin is coming. No: Stalin himself will find his power completely taken over by a non-human entity that Stalin can't understand.

[01:47:05]

You're making me think of two things here; I hear you saying at least two things. One is the way we've gone from top-down command and control, where we understood the structures of power we created, to everyone now sitting on top of these Frankensteins. And the Frankensteins are incredibly powerful. We have a Frankenstein financial system with runaway economic growth that's creating climate change. We have a runaway social media Frankenstein that's polarizing and controlling people's minds and brains.

[01:47:31]

We have runaway Frankensteins in China that are controlling the mass population and the behavioral modification of all its citizens. And what's fascinating, as you've pointed out, is that the person who runs the Frankenstein doesn't know what it's doing. When adversaries make that Civil War Is Coming video show up at the top of YouTube's recommendations for one pocket of users, it's not as if YouTube immediately becomes conscious of the fact that its users are now being dosed with the idea and suggestion that civil war is coming.

[01:48:01]

It doesn't know that. So instead of by land, by sea or by air, I think data corruption, the manipulation of a Frankenstein you don't understand, will become, as you're saying, one of the primary vehicles of warfare and of new asymmetric power structures. Because the second thing I heard you saying is that digital hacking, as happened with Khashoggi, and with the ability to hack into WhatsApp and hold blackmail leverage over Bezos by hacking into phones, becomes one of the primary vehicles of warfare.

[01:48:29]

Instead of spending trillions of dollars revitalizing a nuclear arsenal, I just have to spend a couple of million to hack into your tech infrastructure. Or, as someone we interviewed on this podcast a couple of episodes ago described, with ten thousand dollars I can run an influence campaign that reaches every online user in Kenya, for less than the price of a used car. So the cost asymmetries in how much it takes to overtake or win against an opponent have also changed with respect to these new sources of power, as you've laid out.

[01:49:00]

So one thing, again, about the dictators: try to visualize what this means. I think about Stalin in nineteen fifty, sitting at his headquarters with the head of the KGB, and they go over a list of who should be killed tomorrow. This guy is dangerous; this guy could be potentially dangerous; let's get rid of him. That's the classic scenario. Now the scenario is an AI algorithm coming to MBS in Saudi Arabia, or to Xi Jinping, or whoever, and telling him: this person you think is loyal to you,

[01:49:40]

but I'm telling you, he's actually a potential danger; get rid of him. And then the big question is: do you believe the algorithm? If you believe the algorithm, that's the end of you, because the algorithm now controls you. It's the same as the teenager who watches YouTube; it's exactly the same with the dictator who listens to the algorithm that tells him who is disloyal and who should be gotten rid of, or with doctors who follow the recommendations of A.I. systems against their own judgement, because it just becomes easier.

[01:50:18]

You start to atrophy the muscle of doing it yourself.

[01:50:21]

And we've seen examples of this with Google Maps: people will follow its directions literally off a dock, or something like that, because Google Maps didn't update the street. If we become so over-trusting and lean completely on the recommendations and choice architectures of technology to direct what we do and feel, without human in the loop, wisdom in the loop, consciousness in the loop, our own judgment and discernment in the loop, then, as you said, Yuval, we have already surrendered control. Not just the teenager with the likes on Instagram, but even the dictator being told what the threats to his society are.

[01:50:59]

Because if you build, say, this big-data algorithm, and one member of the Politburo, let's say the defense minister, thinks the system is dangerous, the system can just tell the ruler: get rid of the defense minister, he's disloyal. And the algorithm even believes it, because the algorithm reasons: OK, I'm trying to protect the ruler, I'm trying to protect the party; the defense minister is trying to limit me or shut me down,

[01:51:30]

so he is obviously disloyal, and I should tell the ruler to get rid of him. And if the ruler believes the algorithm, then he is even more in the hands of the algorithm. This is how it works. Now, if you broaden it from a single country to the entire world, what you get is a new kind of colonialism. You just mentioned the example of Kenya: to take over a foreign country as a colony, you don't need to send in soldiers.

[01:52:00]

You just need to take the data. If you control the data of a country, you don't need to send a single soldier there. If you know the whole personal history of every politician and judge and military officer in that country, and you can control what everybody is seeing on YouTube or TikTok or whatever platform, you don't need to send an invading army. So the same way that dictatorship has shifted from armies to secret police and finally to hackers and algorithms, it can also happen with imperialism and colonialism: the old-style gunboat diplomacy, where you need to send in an invading army, is being replaced by a new kind of data colonialism in which, on the surface, nothing happens.

[01:52:52]

It's an independent country. There is not a single American or Chinese soldier on the ground, not a single shot fired. And nevertheless, it is a data colony, completely subservient to that imperial power. That makes me think about something.

[01:53:11]

It connects back to the very beginning of our conversation, to you laying out societies as a kind of information-processing system, where the way the nodes of society are wired gives a physics for what kinds of governance are possible and what kinds aren't. Very early on, you couldn't have large-scale authoritarianism; society was just too small, you couldn't do it then. And we couldn't have large-scale democracies until we had large broadcast media. There is a physics that makes some things possible and some things not.

[01:53:45]

And we're moving now into a new era. A big question in my mind is: are our kinds of democracy possible in the physics of the 21st century?

[01:53:56]

I think the answer is yes, because of this ability of democracies to reinvent themselves. But we still don't know what shape they will take; they will have to be quite different from the democracies we know today. And therefore, I think we need to really remind ourselves what democracy is. If we get too attached to a particular tool of democracy, then it loses its flexibility. Too many people equate democracy with elections, and that's very dangerous.

[01:54:33]

Traditionally, it was dangerous because it can mean majority dictatorship. If fifty-one percent of voters vote to disenfranchise the other forty-nine percent, is this democratic? If ninety-nine percent of voters vote to kill the other one percent, is this democratic? People who think democracy is only about elections would have to say yes, but that's not a democracy; that's a majority dictatorship. Elections are just a tool. Real democracy is about safeguarding the liberty and equality of all citizens.

[01:55:12]

Elections are one way to safeguard that: every person has a vote and can express his or her opinions. But there are other important tools, like the separation of powers: the courts should be independent, the media should be independent. And there are basic civil and human rights which cannot be violated even if the majority is in favor of violating them. That's at least as important as having elections, if not more so. And what's happening now is that this traditional tool of elections is becoming even more problematic, because it's becoming increasingly easy to manipulate.

[01:55:51]

So we need to remind ourselves that democracy is not just about elections; elections are just one tool in the toolkit. If we have this broader understanding, then I think we can think creatively about how to create a system that protects the equality and liberty of citizens with the new technologies of the 21st century. This might mean changing the election systems in radical ways. The heart of democracy is not this ceremony of going once every four years to cast your ballot.

[01:56:29]

What new forms it will take, I'm not sure. But a good starting point is simply to remind ourselves what democracy is, what we need to preserve, and what we are allowed to change.

[01:56:40]

I know you spoke with Audrey Tang, the digital minister of Taiwan, and we've interviewed her for our podcast as well. I think the work she's doing there represents really thinking about how to reboot the core principles of democracy in a digital way for the 21st century, under the threat of China trying to sow disinformation in Taiwan, and doing so reasonably successfully while producing a more coherent society. And Yuval, you've always said that the goal of democracy and information technology isn't just connecting people, because isn't it interesting that as soon as we connected everyone, one of the most popular things people wanted to build was walls. The real goal should be harmonizing people.

[01:57:17]

And I think that goal is a really wise one, a rediscovery of what we really want here. Because, to take it full circle, going back to the original problem statement we started this interview with, that the problem of humanity is our Palaeolithic emotions, medieval institutions and godlike technology, the answer might be something like: we have to understand and embrace our Palaeolithic emotions.

[01:57:45]

We have to upgrade our medieval institutions and philosophy, and we have to have the wisdom to guide our godlike technology. We have to reckon with that problem statement. What I hope we've done for listeners is explore more of that terrain today than we've ever gotten to do together in the past. I'd love to do this again, because I think we've really explored some rich ground. And I'm just so thankful, Yuval, that you made the time.

[01:58:12]

I'll just say I'm leaving next week for a forty-five-day meditation retreat. Fantastic. So maybe when I come back I'll have some new ideas about all these things. So yes, I'll be happy to have another conversation in a couple of months and see where it goes. Fantastic.

[01:58:30]

Lovely. Thank you so much. Yuval, one thing you've given me some hope on is seeing the messiness of the US and other democratic systems as a kind of advantage and robustness. Whereas a saber-toothed tiger that gets over-optimized, way too efficient for one ecological niche, does not survive when that niche changes. That's a new model for me for thinking about authoritarian governments in the age of A.I. So thank you for that.

[01:58:57]

Thank you. Your Undivided Attention is produced by the Center for Humane Technology. Our executive producer is Dan Kedmey and our associate producer is Natalie Jones. Noor Al-Samarrai helped with fact checking. Original music and sound design by Ryan and Hays Holladay. And a special thanks to the whole Center for Humane Technology team for making this podcast possible. A very special thanks goes to our generous lead supporters, including the Omidyar Network, Craig Newmark Philanthropies and the Patrick J.

[01:59:34]

McGovern Foundation, among many others.