[00:00:00]

The following is a conversation with Daniel Kahneman, winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision making. He's the author of the popular book Thinking, Fast and Slow, which summarizes, in an accessible way, his research of several decades, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is the dichotomy between two modes of thought: what he calls System 1 is fast, instinctive, and emotional; System 2 is slower, more deliberative, and more logical.

[00:00:40]

The book delineates cognitive biases associated with each of these two types of thinking. His study of the human mind and its peculiar and fascinating limitations is both instructive and inspiring for those of us seeking to engineer intelligent systems. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or simply connect with me on Twitter: Lex Fridman, spelled F-R-I-D-M-A-N. I recently started doing ads at the end of the introduction.

[00:01:16]

I'll do one or two minutes after introducing the episode, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature.

[00:01:42]

You can buy fractions of a stock, say one dollar's worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means that donated money is used to maximum effectiveness.

[00:02:15]

When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get ten dollars, and Cash App will also donate ten dollars to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Daniel Kahneman. You tell a story of an SS soldier early in the war, World War II, in Nazi-occupied France, in Paris, where you grew up.

[00:03:03]

He picked you up and hugged you and showed you a picture of a boy, maybe not realizing that you were Jewish. Not maybe, certainly not. So I told you I'm from the Soviet Union, which was significantly impacted by the war as well, and I'm Jewish as well. What do you think World War II taught us about human psychology broadly? Well, I think the only big surprise is the extermination policy, the genocide, by the German people. When you look back on it, you know, I think that's a major surprise.

[00:03:44]

It's a surprise because it's a surprise that they could do it. It's a surprise that enough people willingly participated in it. This is a surprise. Now it's no longer a surprise, but it changed

[00:04:01]

many people's views, I think, about human beings. Certainly for me, the Eichmann trial teaches you something, because it's very clear that if it could happen in Germany, it could happen anywhere. It's not that the Germans were special; this could happen anywhere. So what do you think that is? Do you think we're all capable of evil, we're all capable of cruelty?

[00:04:30]

I don't think in those terms. I think what is certainly possible is that you can dehumanize people so that you treat them not as people anymore, but as animals. And in the same way that you can slaughter animals without feeling much of anything, it can be the same. I think the combination of dehumanizing the other side and having uncontrolled power over other people doesn't bring out the most generous aspects of human nature.

[00:05:12]

So that Nazi soldier, he was a good man, I mean, you know, and he was perfectly capable of killing a lot of people, and I'm sure he did. But what did the Jewish people mean to Nazis? What explains that dismissal of Jews as not worthy of being human? Again, it is surprising that it was so extreme, but it's not one thing in human nature. I don't want to call it evil, but the distinction between the in-group and the out-group, that is very basic.

[00:05:54]

So that's built in: the loyalty and affection towards the in-group, and the willingness to dehumanize the out-group.

[00:06:03]

That is in human nature. We probably didn't need the Holocaust to teach us that, but the Holocaust is a very sharp lesson of, you know, what can happen to people

[00:06:20]

and what people can do. So, the effect of the in-group and the out-group: it's clear that those were people you could shoot, you know, they were not human. There was no empathy, or very, very little empathy left. And very quickly, by the way, the empathy disappeared, if there was any initially. And the fact that everybody around you was doing it,

[00:06:56]

the fact that the group was doing it, everybody shooting Jews, I think that makes it permissible. Now, whether it could happen in every culture, or whether the Germans were just particularly efficient and disciplined so they could get away with it,

[00:07:25]

that is a question. It's an interesting question. Are these artifacts of history, or is it human nature?

[00:07:31]

I think that's really human nature. You know, you put some people in a position of power relative to other people, and then they become less human, become different. But in general, in war, outside of concentration camps in World War II, it seems that war brings out the darker sides of human nature, but also the beautiful things about human nature. Well, you know, what it brings out is the loyalty among soldiers. I mean, it brings out the bonding.

[00:08:08]

Male bonding, I think, is a very real thing that happens. And there is a certain thrill to friendship, and certainly a thrill to friendship under risk, to shared risk.

[00:08:23]

And so people have very profound emotions, up to the point where it gets so traumatic that little is left.

[00:08:34]

But let's talk about psychology a little bit. In your book, Thinking, Fast and Slow, you describe two modes of thought: System 1, the fast, instinctive, and emotional one; System 2, the slower, deliberate, and logical one. At the risk of asking Darwin to discuss the theory of evolution, can you describe the distinguishing characteristics of the two systems, for people who have not read the book?

[00:09:07]

Well, I mean, the word system is a bit misleading, but at the same time it's also very useful. What I call System 1, it's easier to think of it as a family of activities. Primarily, the way I describe it is that there are different ways for ideas to come to mind, and some ideas come to mind automatically.

[00:09:35]

A standard example is two plus two, and then something happens to you. In other cases, you've got to do something, you've got to work, in order to produce the idea. In my example, I always give the same pair of numbers: 27 times 14.

[00:09:53]

You have to perform some algorithm in your head, and it takes time. It's very different. Nothing comes to mind, except something does come to mind, which is the algorithm that you've got to perform. And then it's work, and it engages short-term memory and engages executive function, and it makes you incapable of doing other things at the same time. So the main characteristic of System 2 is that there is mental effort involved, and there is a limited capacity for mental effort, whereas System 1 is effortless, essentially.

[00:10:31]

That's the major distinction.
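As a concrete aside, the effortful algorithm Kahneman alludes to might be carried out on paper like this; the decomposition shown is just one of several a person might use:

\[
27 \times 14 = 27 \times (10 + 4) = 270 + 108 = 378
\]

Nothing comparable is needed for two plus two; the answer simply arrives.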

[00:10:33]

So it's really convenient to talk about two systems, but you also mentioned just now, and in general, that there are no two distinct systems in the brain, from a neurobiological or even a psychological perspective. But from the experiments you've conducted, there does seem to be a kind of emergent two modes of thinking. So at some point, these kinds of systems came into a brain architecture, maybe in mammals. Or do you not think of it at all in those terms?

[00:11:17]

Or is it all a mush and these two things just emerge?

[00:11:19]

You know, evolutionary theorizing about this is cheap and easy. The way I think about it is that it's very clear that animals have a perceptual system, and that includes an ability to understand the world, at least to the extent that they can predict. They can't explain anything, but they can anticipate what's going to happen, and that's a key form of understanding the world. My crude idea is that, well, what I call System 2 grew out of this. And, you know, there is language, and there is the capacity of manipulating ideas, the capacity of imagining futures, of imagining counterfactuals, things that haven't happened, and of doing conditional thinking.

[00:12:17]

And there are really a lot of abilities that, without language and without the very large brain that we have compared to others, would be impossible. Now, System 1 is more like what the animals have, but System 1 also can talk. I mean, it has language, it understands language, and indeed it speaks for us. I mean, you know, I'm not choosing every word as a deliberate process. I have some idea, the words come out, and that's automatic and effortless.

[00:12:54]

And many of the experiments you've done show that, listen, System 1 exists, and it does speak for us, and we should be careful about the voice it provides. Well, I mean, you know, we have to trust it, because of the speed at which it acts. If we were dependent on System 2 for survival, we wouldn't survive very long, because it's very slow. Crossing the street? Crossing the street.

[00:13:24]

I mean, many things depend on it being automatic. One very important aspect of System 1 is that it is not instinctive. You used the word instinct, but it contains skills that clearly have been learned, so that skilled behavior, like driving a car or speaking, in fact skilled behavior, has to be learned. You don't come equipped with driving; you have to learn how to drive, and you have to go through a period where driving is not automatic before it becomes automatic.

[00:14:03]

So, yeah, you construct... I mean, this is where you talk about heuristics and biases: to make it automatic, you create a pattern, and then System 1 essentially matches a new experience against a previous pattern. And when that match is not a good one, that's when all the mistakes happen. But most of the time it works.

[00:14:26]

And so most of the time, the anticipation of what's going to happen next is correct, and most of the time, the plan about what you have to do is correct. So most of the time, everything works just fine. What's interesting, actually, is that in some sense, System 1 is much better at what it does than System 2 is at what it does. That is, there is this quality of effortlessly solving enormously complicated problems, which clearly exists, so that for a very good chess player,

[00:15:06]

all the moves that come to their mind are strong moves. So all the selection of strong moves happens unconsciously, automatically, and very, very fast. And all of that is in System 1, with the slow System 2 there to verify.

[00:15:24]

So along this line of thinking, really what we are is machines that construct a pretty effective System 1. You could think of it that way. So now we're talking not about humans, but about building artificial intelligence systems, robots. Do you think all the features and bugs that you have highlighted in human beings are useful for constructing AI systems? That both systems are useful for, perhaps, instilling in robots?

[00:15:56]

What is happening these days is that actually what is happening in deep learning is more like a System 1 product than like a System 2 product. I mean, deep learning matches patterns and anticipates what's going to happen, so it's highly predictive. What deep learning doesn't have,

[00:16:23]

and many people think that this is the critical thing, is the ability to reason, so there is no System 2 there. But I think, very importantly, it doesn't have any causality or any way to represent meaning and to represent real interactions.

[00:16:42]

So until that is solved, what can be accomplished is marvelous and very exciting, but limited.

[00:16:52]

That's actually a really nice way to think of current advances in machine learning: as essentially System 1 advances. So how far can we get with just System 1, if we think of deep learning and artificial systems?

[00:17:05]

I mean, it's very clear that DeepMind has already gone way, way beyond what people thought was possible. I think the thing that has impressed me most about the developments in AI is the speed. Things, at least in the context of deep learning, and maybe this is about to slow down, but things moved a lot faster than anticipated.

[00:17:32]

The transition from solving chess to solving Go, I mean, it's bewildering how quickly it went.

[00:17:43]

The move from AlphaGo to AlphaZero is sort of bewildering, the speed at which they accomplished that. Now, clearly there are so many problems that you can solve that way, but there are some problems for which you need something else, something like reasoning.

[00:18:03]

Well, reasoning, and also, you know, one of the real mysteries. The psychologist Gary Marcus, who is also a critic of AI, what he points out, and I think he has a point, is that humans learn quickly. Children don't need a million examples; they need two or three examples. So clearly there is a fundamental difference. And what enables a machine to learn quickly, what you have to build into the machine, because it's clear that you have to build some expectations or something into the machine to make it ready to learn quickly, that at the moment seems to be unsolved.

[00:18:56]

I'm pretty sure that DeepMind is working on it, but if they have solved it, I haven't heard yet.

[00:19:05]

They are, actually. Both DeepMind and OpenAI are trying to start to use neural networks to reason, to assemble knowledge. Of course, causality, temporal causality, is out of reach for most everybody.

[00:19:22]

You mentioned the benefit of System 1 is essentially that it's fast, it allows us to function in the world, it's skilled, and it has a model of the world.

[00:19:33]

You know, in a sense.

[00:19:34]

I mean, there was the earlier phase of AI that attempted to model reasoning, and it was moderately successful.

[00:19:45]

But, you know, reasoning by itself doesn't get you much. Deep learning has been much more successful in terms of what it can do now. But that's an interesting question: whether it's approaching its limits. What do you think? I think absolutely so. I just talked to Yann LeCun, and he mentioned... You know, I know him. So he thinks that we're not going to hit the limits with neural networks, that ultimately this kind of pattern-matching system will start to look like System 2, without a significant transformation of the architecture.

[00:20:27]

So I'm more with the majority of the people who think that, yes, neural networks will hit a limit in their capability.

[00:20:34]

On the one hand, I have heard him say, essentially, that what they have accomplished is not a big deal, that they have barely scratched the surface, that basically they can't do unsupervised learning in an effective way. But you're telling me that he thinks that within the current architecture, you can do causality and reasoning?

[00:20:59]

So he's very much a pragmatist, in the sense of saying that we're very far away, that there's still... I think there's this idea that he describes: we can only see one or two mountain peaks ahead, and there might be either a few more after, or thousands more after. Yeah, so that kind of idea. I hope you're right.

[00:21:21]

But nevertheless, he doesn't see the final answer as fundamentally different from the one we currently have, with neural networks being a huge part of that.

[00:21:36]

Yeah, I mean, that's very likely, because pattern matching is so much of what's going on.

[00:21:43]

And you can think of neural networks as processing information sequentially. Yeah, I mean, you know, there is an important aspect to this: for example, you get systems that translate, and they do a very good job, but they really don't know what they're talking about. And for that, you would need

[00:22:11]

an AI that has sensation, an AI that is in touch with the world, and awareness, and maybe even something that resembles consciousness, those kinds of ideas. An awareness of what's going on, so that the words have meaning, or can be in touch with some perception or some action.

[00:22:33]

Yeah.

[00:22:33]

So that's a big thing for Yann, and it's what he refers to as grounding in the physical space. So we're talking about the same thing? Yeah. So how do you ground?

[00:22:46]

I mean, without grounding, you get a machine that doesn't know what it's talking about, because it is talking about the world, ultimately.

[00:22:56]

The open question is what it means to ground. I mean, we're very human-centric in our thinking. What does it mean for a machine to understand what it means to be in this world? Does it need to have a body? Does it need to have a finiteness, like we humans have, all of these elements?

[00:23:16]

It's a very good question. You know, I'm not sure about having a body. Having a body would be very helpful if you think about mimicking humans, but having a perceptual system, that seems to be essential, so that you can accumulate knowledge about the world. Though if you imagine a human who is completely paralyzed, there is a lot that the human brain could learn with a paralyzed body.

[00:23:50]

So if we got a machine that could do that, it would be a big deal.

[00:23:56]

And then the flip side of that, something you see in children, and something that in the machine learning world is called active learning, maybe it is also, is being able to play with the world.

[00:24:09]

How important do you think it is, for developing System 1 or System 2, to play with the world, to be able to interact with it? A lot. A lot of what you learn is learning to anticipate the outcomes of your actions. I mean, you can see how babies learn it, you know, with their hands: they learn to connect the movements of their hands with something that clearly happens in the brain, through the ability of the brain to learn new patterns.

[00:24:44]

So, you know, it's the kind of thing that you get with artificial limbs: you connect it, and then people learn to operate the artificial limb really impressively quickly, at least from what I hear. So we have a system that is ready to learn the world through action. At the risk of going into way too mysterious a land, what do you think it takes to build a system like that? Obviously, we're very far from understanding how the brain works, but...

[00:25:20]

How difficult is it to build this kind of thing? You know, I think that Yann LeCun's answer, that we don't know how many mountains there are, is a very good answer. I think that if you look at what Ray Kurzweil is saying, that strikes me as off the wall. But I think people are much more realistic than that, where actually Demis Hassabis is, and Yann is,

[00:25:50]

and so the people who are actually doing the work are fairly realistic, I think. To maybe phrase it another way, from a perspective not of building it but of understanding it: how complicated are human beings, in the following sense? You know, I work with autonomous vehicles and pedestrians, so we try to model pedestrians. How difficult is it to model a human being?

[00:26:19]

Their perception of the world, the two systems they operate under, sufficiently to be able to predict whether the pedestrian is going to cross the road or not? I'm fairly optimistic about that, actually, because what we're talking about is a huge amount of information that every vehicle has, and that feeds into one system, into one gigantic system. And so anything that any vehicle learns becomes part of what the whole system knows. And with a system multiplier like that, there is a lot that you can do.

[00:26:58]

So human beings are very complicated, but, you know, the system is going to make mistakes, and humans make mistakes. I think that they'll be able to... I think they are able to anticipate pedestrians; otherwise a lot would happen. They're able to get into a roundabout and into traffic, so they must be able to anticipate how people react when they're sneaking in. And there's a lot of learning that's involved in that.

[00:27:36]

Currently, pedestrians are treated as things that cannot be hit, and not treated as agents with whom you interact in a game-theoretic way.

[00:27:51]

So, I mean, it's a totally open problem, and every time somebody tries to solve it, it seems to be harder than we think. And nobody's really tried to seriously solve the problem of that dance, because, I'm not sure if you've thought about the problem of pedestrians, but you're really putting your life in the hands of the driver.

[00:28:12]

You know, there is a dance, a part of the dance that would be quite complicated. For example, when I cross the street and there is a vehicle approaching, I look the driver in the eye, and I think many people do that. And, you know, that's a signal that I'm sending, and I would be sending that signal to an autonomous vehicle, and it had better understand it, because it means "I'm crossing."

[00:28:39]

And there's another thing you do, actually. I'll tell you what you do, because I've watched hundreds of hours of video on this: you do that before you step into the street. And when you step into the street, you actually look away. Yeah. Yeah.

[00:28:56]

Now, what you're saying is, I mean, you're trusting that the car, which hasn't slowed down yet, will slow down.

[00:29:05]

And you're telling it, yeah, I'm committed. I mean, this is like in a game of chicken. I'm committed, and if I'm committed, I'm looking away. So there is... you just have to stop.

[00:29:18]

So the question is whether a machine that observes that needs to understand mortality.

[00:29:24]

Here, I'm not sure that it's got to understand so much as it's got to anticipate.

[00:29:35]

But, you know, you're surprising me, because here I would think that maybe you can anticipate without understanding, because I think this is clearly what's happening in playing Go and playing chess: there's a lot of anticipation and there is zero understanding.

[00:29:56]

So I thought that you didn't need a model of the human, a model of the human mind, to avoid hitting pedestrians. But you are suggesting that you do. And then it's a lot harder.

[00:30:13]

And I have a follow-up question, to see where your intuition lies. It seems that almost every robot-human collaboration system is a lot harder than people realize. So do you think it's possible for robots and humans to collaborate successfully? We talked a little bit about semi-autonomous vehicles, like in the Tesla Autopilot, but in tasks in general. We talked about current AI being kind of System 1. Do you think

[00:30:50]

those same systems can borrow humans for System 2-type tasks and collaborate successfully?

[00:30:59]

Well, I think that in any system where humans and the machine interact, the human will be superfluous within a fairly short time. That is, if the machine has advanced enough so that it can really help the human, then it may not need the human for long. Now, it would be very interesting if there are problems that for some reason the machine cannot solve, but that people could. Then you would have to build into the machine an ability to recognize

[00:31:34]

that it is in that kind of problematic situation, and to call the human. That cannot be easy without understanding. That is, it must be very difficult to program a recognition that you are in a problematic situation without understanding the problem.

[00:31:57]

That's very true: in order to understand the full scope of situations that are problematic, you almost need to be smart enough to solve all those problems.

[00:32:10]

It's not clear to me how much the machine will need the human. I think the example of chess is very instructive. I mean, there was a time at which Kasparov was saying that human-machine combinations will beat everybody. Even Stockfish doesn't need people, and AlphaZero certainly doesn't need people.

[00:32:33]

The question is, just like you said, how many problems are like chess, and how many problems are not like chess. Well, every problem probably, in the end, is like chess. The question is how long the transition period is.

[00:32:47]

I mean, you know, that's a question I would ask you. In terms of an autonomous vehicle, just driving is probably a lot more complicated than Go to solve. Yes. And that's surprising, because it's open, no? I mean, you know, that's not surprising to me, because there is a hierarchical aspect to this, which is recognizing a situation and then, within the situation, bringing up the relevant knowledge. And for that hierarchical type of system to work, you need a more complicated system than we currently have.

[00:33:33]

A lot of people think, because as human beings, and this is probably one of the cognitive biases, they think of driving as pretty simple, because they think of their own experience. This is actually a big problem for AI researchers, or people thinking about AI: they evaluate how hard a particular problem is based on very limited knowledge, based on how hard it is for them to do the task, and then they take it for granted. Maybe you can speak to that, because most people tell me driving is trivial, and that humans, in fact, are terrible at driving.

[00:34:15]

And I see it differently: humans are actually incredible at driving, and driving is really terribly difficult. So is that just another element of the effects that you describe in your work on the psychology side? I mean, I haven't really... I would say that my research has contributed nothing to understanding the ecology, to understanding the structure of situations and the complexity of problems.

[00:34:47]

So all we know is, it's very clear that Go, for example, is endlessly complicated, but it's very constrained. And in the real world, there are far fewer constraints and many more potential surprises.

[00:35:06]

So that's obvious to you, but it's not always obvious to people, right?

[00:35:11]

When you think about it... well, I mean, you know, people thought that reasoning was hard and perceiving was easy. But they quickly learned that actually modeling vision was tremendously complicated, and modeling, even proving, theorems was relatively straightforward. To push back a little bit on the "quickly" part:

[00:35:37]

it took several decades to learn that, and most people still have to learn it. I mean, our intuition... of course, AI researchers have learned it, but as you drift a little bit outside the specific field, the intuition is still present. Oh, yeah.

[00:35:53]

No, I mean, that's true. The intuitions of the public haven't changed radically. They are, as you said, evaluating the complexity of problems by how difficult it is for them to solve the problems, and that's got very little to do with the actual complexity of solving them.

[00:36:15]

And I do think, from the perspective of a researcher: how do we deal with the intuitions of the public? So, arguably, the combination of hype, investment, and the public intuition is what led to the AI winters. I'm sure the same can be applied to tech in general: the intuition of the public leads to media hype, leads to companies investing in the tech, and then the tech doesn't make the companies money, and then there's a crash. Is there a way to educate people, to fight this System

[00:36:59]

1 thinking? In general, no. You know, I think that's the simple answer. And it's going to take a long time before the understanding of what those systems can do becomes public knowledge. And then, the expectations... you know, there are several aspects that are going to be very complicated.

[00:37:37]

The fact that you have a device that cannot explain itself is a major, major difficulty, and we're already seeing that. I mean, this is really something that is happening. So it's happening in the judicial system: you have systems that are clearly better at predicting parole violations than judges, but they can't explain their reasoning, and so people don't want to trust them.

[00:38:13]

We seem to, in System 1, even use cues to make judgments about our environment. So on this explainability point: do you think humans can explain stuff? No, but, I mean, there is a very interesting aspect of that. Humans think they can explain themselves. Right. So when you say something, and I ask you, why do you believe that, then reasons will occur to you. But actually, my own belief is that in most cases, the reasons have very little to do with why you believe what you believe, so that the reasons are a story that comes to your mind when you need to explain yourself.

[00:39:02]

But people traffic in those explanations. I mean, human interaction depends on those shared fictions and the stories that people tell themselves.

[00:39:15]

You just made me actually realize, and we'll talk about stories in a second, that, not to be cynical about it, but perhaps there's a whole movement of people trying to do explainable AI, when really we don't necessarily need AI to explain itself; it just needs to tell a convincing story.

[00:39:39]

Yeah, absolutely it does. And the story doesn't necessarily need to reflect the truth; it just needs to be convincing.

[00:39:48]

There's something... you can say exactly the same thing in a way that sounds cynical or doesn't sound cynical. But the objective of having an explanation is to tell a story that will be acceptable to people, and for it to be acceptable, and robustly acceptable, it has to have some element of truth. But the objective is for people to accept it. It's quite brilliant, actually. But on the stories that we tell: sorry to ask you a question that most people know the answer to, but you talk about two selves in terms of how life is lived, the experiencing self and the remembering self.

[00:40:41]

Can you describe the distinction between the two? Well, sure.

[00:40:44]

I mean, there is an aspect of life where, most of the time, we just live, and we have experiences, and they're better and they're worse, and it goes on all the time. And mostly we forget everything that happens, or we forget most of what happens. Then occasionally, when something ends, or at different points, you evaluate the past and you form a memory, and the memory is schematic. It's not that you can roll a film of an interaction.

[00:41:20]

You construct, in effect, the elements of a story about it, about an episode. So there is the experience, and there is the story that is created about the experience, and that's what I call the remembering self. So I had the image of two selves: there is a self that lives, and there is a self that evaluates life. Now, the paradox is that the one self does the living, but the other self, the remembering self, is all we get to keep.

[00:42:01]

And basically, decision making and everything that we do is governed by our memories, not by what actually happened. It's governed by the story that we told ourselves, by the story that we're keeping. So that's the distinction. I mean, there are a lot of brilliant ideas about the pursuit of happiness that come out of that. What are the properties of happiness which emerge from the remembering self?

[00:42:31]

There are properties of how we construct stories that are really important. I studied a few, but a couple are really very striking. One is that in stories, time doesn't matter. There's a sequence of events, and there are highlights or not, and the end, but not how long it took.

[00:42:58]

You know, "they lived happily ever after," or "three years later, something happened": time really doesn't matter. In stories, events matter, but time doesn't. That leads to a very interesting set of problems, because time is all we've got to live. I mean, you know, time is the currency of life. And yet time is not represented, basically, in evaluating memories. So that creates a lot of paradoxes that I've thought about.

[00:43:35]

Yeah, they're fascinating. But if you were to give advice on how one lives a happy life, based on such properties, what's the optimal way?

[00:43:49]

Well, you know, I gave up... I abandoned happiness research because I couldn't solve the problem. I couldn't see... In the first place, it's very clear that if you do talk in terms of those two selves, then what makes the remembering self happy and what makes the experiencing self happy are different things. And I asked the question of: suppose you are planning a vacation, and you're just told that at the end of the vacation you'll get an amnesic drug,

[00:44:22]

so you'll remember nothing, and they'll also destroy all your photos, so there'll be nothing. Would you still go to the same vacation? And it turns out we go on vacations, in large part, to construct memories, not to have experiences, but to construct memories. And it turns out that the vacation you would want for yourself if you knew you would not remember is probably not the same vacation you would want for yourself if you will remember.

[00:44:57]

So I have no solution to these problems, but clearly those are big issues. And you've talked about sort of how many minutes or hours you spend thinking about the vacation. It's an interesting way to think about it, because that's how you really experience the vacation, outside of being in it. But there's also a modern, I don't know if you think about this or interact with it, there's a modern way to magnify the remembering self, which is by posting on Instagram, on Twitter, on social networks.

[00:45:34]

A lot of people live life for the picture that they take and post somewhere, and now thousands of people share in it, potentially millions. And then you can relive it even much more than just those minutes. Do you think about that magnification much?

[00:45:51]

You know, I'm too old for social networks and I've never seen Instagram, so I cannot really speak intelligently about those things. I'm just too old.

[00:46:03]

But it's interesting to watch the exact effects you describe. It will make a very big difference. I mean, it will also make a difference, and I don't know whether... It's clear that in some ways the devices that serve us supplant functions, so you don't have to remember phone numbers, you really don't have to know facts. I mean, the number of conversations I'm involved in where somebody says, well, let's look it up.

[00:46:39]

So, in a way, it's made conversations... well, it means that it's much less important to know things. You know, it used to be very important to know things. This is changing. So the requirements that we

[00:47:00]

have for ourselves and for other people are changing because of all those supports. And I have no idea what Instagram does. Well, I'll tell you: I mean, I wish I could just have the remembering self enjoy this conversation, but I'll get to enjoy it even more by having watched it, by watching it and then talking to others about it. One hundred thousand people, scary as it is to say, will listen or watch this. Right. It changes things.

[00:47:33]

It changes the experience of the world: you seek out experiences which could be shared in that way. And it's the same effects that you describe, and I don't think the psychology of that magnification has been described yet, because it's a new thing.

[00:47:53]

The sharing... there was a time when people read books, and you could assume that your friends had read the same books that you read. So there was a kind of invisible sharing.

[00:48:11]

There was a lot of sharing going on, and there was a lot of assumed common knowledge, and that was built in. I mean, it was obvious that you had read The New York Times; it was obvious that you had read the reviews. So a lot was taken for granted that was shared. And when there were three television channels, it was obvious that you'd seen one of them, probably the same one.

[00:48:43]

So sharing was always there; it was just different. At the risk of inviting mockery from you, let me say that I'm also a fan of Sartre and Camus and the existentialist philosophers, and I'm joking, of course, about mockery. But from the perspective of the two selves, what do you think of the existentialist philosophy of life?

[00:49:11]

So, trying to really emphasize the experiencing self as the proper way, the best way, to live life. I don't know enough philosophy to answer that, but, you know, the emphasis on experience is also the emphasis in Buddhism. Right. So that, you just have got to experience things, and not to evaluate, and not to pass judgment, and not to keep score.

[00:49:49]

So when you look at the grand picture of experience, do you think there's something to that, that one of the ways to achieve contentment and maybe even happiness is letting go of any of the things, any of the procedures, of the remembering self?

[00:50:09]

Well, yeah.

[00:50:11]

I mean, I think, you know, if one could imagine a life in which people don't score themselves, it feels as if that would be a better life. As if the self-scoring and the "how am I doing?" kind of question is not a very happy thing to have. But I got out of that field because I couldn't solve that problem, and that was because my intuition was that the experiencing self, that's reality.

[00:50:48]

But then it turns out that what people want for themselves is not experiences. They want memories, and they want a good story about their life. And so you cannot have a theory of happiness that doesn't correspond to what people want for themselves. And when I realized that this was where things were going, I really sort of left the field of research.

[00:51:12]

Do you think there's something instructive about this emphasis on reliving memories for building AI systems? Currently, artificial intelligence systems are more like the experiencing self, in that they react to the environment: there's some pattern formation, like learning, and so on. But you really don't construct memories, except in reinforcement learning, every once in a while, where you replay over and over.

[00:51:42]

Yeah, but, you know, that in principle would not be as useful.
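As an aside, the replay Lex mentions refers to experience replay in reinforcement learning, where past transitions are stored and sampled again during training. A minimal sketch of the idea in Python; the class and parameter names here are illustrative, not anything discussed in the conversation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Store past transitions so an agent can 'relive' them during training."""

    def __init__(self, capacity=100_000):
        # Oldest memories fall away once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Record one transition exactly as it was experienced.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Replay a random minibatch of old experience for learning.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

In that loose sense, the buffer is a crude "remembering self" that the learner consults, while the policy's moment-to-moment reactions play the role of the experiencing self.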

[00:51:48]

Do you think it's a feature or a bug of human beings that we look back? Oh, I think that's definitely a feature. That's not a bug. I mean, you have to look back in order to look forward. So without looking back, you couldn't really intelligently look forward.

[00:52:10]

You're looking for the echoes of the same kind of experience in order to predict what the future holds. Yeah. Though, Viktor Frankl, in his book Man's Search for Meaning, I'm not sure if you've read it, describes his experience at the concentration camps during World War II as a way to show that finding, identifying, a purpose in life, a positive purpose in life, can save one from suffering. First of all, do you connect with the philosophy that he describes there?

[00:52:45]

Not really, no. I can really see that somebody who has that feeling of purpose and meaning and so on, that it could sustain you. I, in general, don't have that feeling, and I'm pretty sure that if I were in a concentration camp, I'd give up and die, you know. So he talks... he is a survivor. Yeah. And, you know, he survived with that.

[00:53:16]

And I'm not sure how essential to survival that sense is, but I do know, when I think about myself, that I would have given up. "Oh, this isn't going anywhere."

[00:53:31]

And there is a sort of character that manages to survive in conditions like that. And then, because they survive, they tell stories, and it sounds as if they survived because of what they were doing. We have no idea; they survived because of the kind of people that they are, and they are the kind of people who survive and tell themselves stories of a particular kind.

[00:53:56]

So you don't think seeking purpose is a significant driver in our lives?

[00:54:05]

It's a very interesting question, because when you ask people whether it's very important to have meaning in their life, they say, oh yes, that's the most important thing. But when you ask people, what kind of a day did you have, and, you know, what were the experiences that you remember, you don't get much meaning. You get social experiences. And some people say that, for example, with children, you know, taking care of children, the fact that they are your children and you're taking care of them, makes a very big difference.

[00:54:47]

I think that's entirely true, but it's more because of a story that we are telling ourselves, which is a very different story when we are taking care of our children or when we're taking care of other things. Jumping around a little bit: you've done a lot of experiments.

[00:55:05]

Let me ask a question. Most of the work I do, for example, is in the real world, but most of the clean, good science that you can do is in the lab. So with that distinction in mind, do you think we can understand the fundamentals of human behavior through controlled experiments in the lab? If we talk about pupil diameter, for example, it's much easier to measure when you can control lighting conditions. Yeah. So when we look at driving, lighting variation destroys your ability to use pupil diameter.

[00:55:47]

But in the lab, for, as I mentioned, semi-autonomous or autonomous vehicles, in driving simulators, we don't capture true, honest human behavior in that particular domain. So what's your intuition: how much of human behavior can we study in this controlled environment of the lab?

[00:56:10]

A lot, but you'd have to verify it, you know: your conclusions are basically limited to the situation, to the experimental situation. Then you have to make the big inductive leap to the real world. And that's the flair. That's where the difference, I think, between the good psychologists and the mediocre ones is: whether your experiment captures something that's important and something that's real, or you're just running experiments.

[00:56:50]

So what is that like, the birth of an idea, and its development in your mind, into something that leads to an experiment? Is it similar to maybe what Einstein or a good physicist does? You basically use your intuition to build up...

[00:57:06]

Yeah, but, I mean, you know, it's very skilled intuition. I just had that experience, actually: I had an idea that turned out to be a very good idea a couple of days ago, and you have a sense of that building up. So I'm working with a collaborator, and he essentially was saying, you know, what are you doing? What's going on? And I really couldn't exactly explain it, but I knew this is going somewhere.

[00:57:38]

But, you know, I've been around that game for a very long time, and so you develop that anticipation that, yes, this is worth following; there's something here. That's part of the skill.

[00:57:51]

Is that something you can reduce to words, in describing the process, in the form of advice to others? No. Follow your heart, essentially.

[00:58:03]

I mean, you know, it's like trying to explain what it's like to drive. You've got to break it apart, and then you lose the experience. You mentioned collaboration. You've written about your collaboration with Amos Tversky. This is you writing: "The twelve or thirteen years in which most of our work was joint were years of interpersonal and intellectual bliss. Everything was interesting, almost everything was funny,

[00:58:35]

and there was the recurrent joy of seeing an idea take shape. So many times in those years we shared the magical experience of one of us saying something which the other one would understand more deeply than the speaker had done. Contrary to the old laws of information theory, it was common for us to find that more information was received than had been sent. I have almost never had the experience with anyone else. If you have not had it, you don't know how marvelous collaboration can be."

[00:59:06]

So let me ask, perhaps, a silly question: how does one find and create such a collaboration? That may be like asking, how does one find love?

[00:59:18]

Yeah. You have to be lucky, and I think you have to have the character for that, because I've had many collaborations, though none were as exciting as with Amos. But I've had them, and I'm having them still. So it's a skill. I think I'm good at it. Not everybody is good at it, and then it's the luck of finding people who are also good at it.

[00:59:49]

Is there advice, in a form, for a young scientist who also seeks to violate this law of information theory? I really think that so much luck is involved. And, you know, those really serious collaborations, at least in my experience, are a very personal experience, and I have to like the person I'm working with. Otherwise, I mean, there is that kind of collaboration which is like an exchange, a commercial exchange, of giving this,

[01:00:35]

you give me that. But the real ones are interpersonal. They're between people who like each other, and who like making each other think, and who like the way that the other person responds to your thoughts. You have to be lucky. Yeah, I mean, but I already noticed, even just me showing up here, you very quickly started digging into a particular problem I'm working on, and already new information started to emerge. Is that a process,

[01:01:07]

just the process of curiosity, of talking to people about problems? Well, I'm curious about anything to do with AI and robotics, you know, and I knew you were dealing with that, so I was curious.

[01:01:22]

Just follow your curiosity. Jumping around to the psychology front: there's the dramatic-sounding terminology of the replication crisis, but really it's just the fact that, at times, studies are not fully generalizable. You're being polite. It's worse than that.

[01:01:49]

Is it? So, I'm actually not fully familiar with how bad it is. Right.

[01:01:55]

So what do you think is the source? Where do you think it comes from?

[01:01:58]

I think I know what's going on, actually. I mean, I have a theory about what's going on, and what's going on is that there is, first of all, a very important distinction between two types of experiments. One type is within-subject: it's the same person in two experimental conditions. And the other type is between-subjects, where some people are in this condition and other people in that condition; they're different worlds. And between-subject experiments are much harder to predict and much harder to anticipate.

[01:02:39]

And they're also more expensive, because you need more people. So between-subject experiments are where the problem is; it's not so much in within-subject experiments, it's really between. And there is a very good reason why the intuitions of researchers about between-subject experiments are wrong, and that's because when you are a researcher, you are in a within-subject situation: that is, you are imagining the two conditions, and you see the causality and you feel it.

[01:03:21]

But in the between-subjects condition, they don't see it: they live in one condition, and the other one is just nowhere. So our intuitions are very weak about between-subject experiments, and that, I think, is something that people haven't realized.

[01:03:42]

And in addition, because of that, we have no idea about the power of manipulations, of experimental manipulations, because the same manipulation is much more powerful when you are in the two conditions than when you live in only one condition.

[01:04:04]

And so experimenters have very poor intuitions about between-subject experiments. And there is something else which is very important, I think, which is that almost all psychological hypotheses are true.

[01:04:21]

That is, in the sense that, you know, directionally, if you have a hypothesis that A causes B, it's rarely the case that A causes the opposite of B. Maybe A just has very little effect. But hypotheses are mostly true, except mostly they're very weak. They're much weaker than you think when you are imagining them.
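To make the within- versus between-subject contrast concrete, here is a small simulation sketch; the effect size and the split between stable individual differences and trial noise are illustrative assumptions, not numbers from the conversation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, effect = 40, 0.5             # subjects per condition; true effect
person_sd, noise_sd = 2.0, 1.0  # stable individual differences vs. trial noise

def one_experiment():
    traits = rng.normal(0, person_sd, n)             # who the subjects are
    control = traits + rng.normal(0, noise_sd, n)
    treated = traits + effect + rng.normal(0, noise_sd, n)
    # Within-subject: each person serves as their own control.
    p_within = stats.ttest_rel(treated, control).pvalue
    # Between-subjects: a different group of people gets the manipulation.
    others = rng.normal(0, person_sd, n) + effect + rng.normal(0, noise_sd, n)
    p_between = stats.ttest_ind(others, control).pvalue
    return p_within < 0.05, p_between < 0.05

results = np.array([one_experiment() for _ in range(2000)])
print("power, within-subject: ", results[:, 0].mean())   # around 0.6 here
print("power, between-subjects:", results[:, 1].mean())  # around 0.2 here
```

The individual differences cancel in the within-subject comparison but remain as noise in the between-subjects one, which is one way to see why the same weak manipulation looks so much stronger to the researcher imagining both conditions.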

[01:04:51]

The reason I'm excited about that is that I recently heard about some friends of mine who essentially funded 53 studies of behavioral change by 20 different teams of people, with a very precise objective of changing the number of times that people go to the gym. And the success rate was zero: not one of the 53 studies worked. Now, what's interesting about that is that those are the best people in the field, and they have no idea what's going on.

[01:05:38]

So they are not calibrated. They think that it's going to be powerful because they can imagine it, when actually it's just weak, because you are focusing on your manipulation, and it feels powerful to you. There's a thing that I've written about, called the focusing illusion: when you think about something, it looks very important, more important than it really is. But if you don't see the effect in the 53 studies, doesn't that mean you just report that?

[01:06:14]

So what is, I guess, the solution to that? Well, I mean, the solution is for people to trust their intuitions less, or to test their intuitions before. I mean, experiments have to be preregistered, and by the time you run an experiment, you have to be committed to it, and you have to run the experiment seriously enough, and in public. And so this is happening. And the interesting thing is what happens before, how people prepare themselves and how they run pilot experiments. It's going to change the way psychology is done, and it's already happening.

[01:06:59]

Do you have a hope for, and this might connect to that, the study sample sizes? Yeah, I do have hope for the Internet for this.

[01:07:09]

I mean, you know, this is really happening. Everybody is running experiments on MTurk, and it's very cheap and very effective.

[01:07:20]

So do you think that changes psychology, essentially? Do you think it can do that, that eventually it will?

[01:07:28]

I mean, you know, I can't put my finger on how exactly, but that's been true in psychology: whenever an important new method came in, it changed the field. And MTurk is really a method, because it makes it very much easier to do some things.

[01:07:52]

Undergrad students will ask me, you know, how big a neural network should be for a particular problem. So let me ask you an equivalent question: how many subjects should a study have for it to have a conclusive result?

[01:08:11]

Well, it depends on the strength of the effect. So if you're studying visual perception, or the perception of color, many of the classic results in visual and color perception were done on three or four people, and I think one of them was colorblind, or partly colorblind.

[01:08:31]

But vision, you know, is fairly reliable. You don't need a lot of replications for some types of neurological experiments.

[01:08:46]

Now, when you're studying weaker phenomena, and especially when you're studying them between subjects, then you need a lot more subjects than people have been running. And that's one of the things that is happening in psychology now: the statistical power of experiments is increasing rapidly.

[01:09:11]

Does the between-subjects design, as the number of subjects goes to infinity, approach a conclusive result?

[01:09:16]

Well, you know, "goes to infinity" is exaggerated. But the standard number of subjects for an experiment in psychology was 30 or 40, and for a weak effect, that's simply not enough. You may need a couple of hundred. I mean, that's the sort of order of magnitude.
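Kahneman's order of magnitude lines up with a standard power analysis for a between-subjects two-sample t-test. As a sketch, using the conventional 80% power and alpha of 0.05, with the effect sizes chosen as illustrative "weak" to "strong" values rather than anything cited in the conversation:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Subjects needed per group for 80% power at alpha = 0.05
# in a between-subjects two-sample t-test.
for d in (0.2, 0.3, 0.5, 0.8):  # Cohen's d, from weak to strong effects
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: about {n:.0f} subjects per group")
# Weak effects (d around 0.2-0.3) call for roughly 175-400 per group;
# the traditional 30-40 is only adequate for strong effects (d near 0.8).
```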

[01:09:45]

What are the major disagreements in theories and effects that you've observed throughout your career that still stand today?

[01:09:55]

Well, in more than several fields. Yeah, but what still is out there as a major disagreement, perhaps in your mind? I've had one extreme experience of, you know, controversy with somebody who really doesn't like the work that Amos Tversky and I did, and who's been after us for 30 years or more, at least.

[01:10:19]

Do you want to talk about it? Well, I mean, his name is Gerd Gigerenzer. He's a well-known German psychologist, and that's the one controversy I have, which... it's been unpleasant, and no, I don't particularly want to talk about it. But are there open questions, even in your own mind? Every once in a while... you know, we talked about semi-autonomous vehicles: in my own mind, I see what the data says, but I'm also constantly torn.

[01:10:52]

Do you have things where you or your studies have found something, but you're also intellectually torn about what it means?

[01:10:58]

And there have maybe been disagreements within your own mind about particular things?

[01:11:04]

I mean, you know, one of the things that is interesting is how difficult it is for people to change their minds, essentially. Once they're committed, people just don't change their minds about anything that matters, and that, surprisingly, is true about scientists too. So the controversy that I described, that's been going on for like 30 years, and it's never going to be resolved. You build a system and you live within that system, and other systems of ideas look foreign to you, and there is very little contact and very little mutual influence. That happens a fair amount.

[01:11:50]

Do you have hope, advice, or a message on that, as we think about science, thinking about politics, thinking about things that have an impact on this world? How can we change our minds? I think that, I mean, on things that matter, which are political or religious, people just don't change their minds, and by and large there is very little that you can do about it. What does happen is that if leaders change their minds,

[01:12:31]

so, for example, the American public doesn't really believe in climate change, doesn't take it very seriously. But if some religious leaders decided this is a major threat to humanity, that would have a big effect. So we have the opinions that we have, not because we know why we have them, but because we trust some people and we don't trust other people. And so it's much less about evidence than it is about stories.

[01:13:06]

So one way to change your mind isn't at the individual level: it's that the leaders of the communities you look up to change the stories, and therefore your mind changes with them. So, there's a guy named Alan Turing who came up with the Turing test. Yeah. What do you think is a good test of intelligence? Perhaps we're drifting into a topic we're maybe philosophizing about, but what do you think is a good test for an artificial intelligence system?

[01:13:41]

Well, the standard definition of, you know, artificial general intelligence is that it can do anything that people can do, and it can do it better. Yes. And what we're seeing is that in many domains you have domain-specific devices or programs or software, and they beat people easily in a specified way. But we are very far from a general ability, a general-purpose intelligence. So,

[01:14:21]

in machine learning, people are approaching something more general. I mean, AlphaZero was much more general than AlphaGo, but it's still extraordinarily narrow and specific in what it can do. So we're quite far from something that can, in every domain, think like a human, except better. One aspect of the Turing test that has been criticized is that it's natural language conversation:

[01:14:53]

that it is too simplistic, easy to, quote-unquote, pass under the constraints specified. What aspect of conversation would impress you if you heard it? Is it humor? What would impress the heck out of you if you saw it in conversation? I mean, certainly, humor would be more impressive than just factual conversation, which I think is easy. And allusions would be interesting, and

[01:15:33]

metaphors would be interesting. I mean, new metaphors, not practiced metaphors. So there is a lot that would be sort of impressive, that is completely natural in conversation, but that you really wouldn't expect. Does the possibility of creating a human-level intelligence or superhuman-level intelligence system

[01:15:58]

excite you, scare you?

[01:16:01]

Well, I mean, how does it make you feel? I find the whole thing fascinating, absolutely fascinating. Exciting, I think, and it's also terrifying, you know. But I'm not going to be around to see it, and so I'm curious about what is happening now. But I also know that predictions about it are silly: we really have no idea what it will look like 30 years from now. No idea. Speaking of silly, bordering on the profound, let me ask, in your view: what is the meaning of it all, the meaning of life?

[01:16:47]

These descendants of great apes that we are: why? What drives us as a civilization, as human beings? What is the force behind everything that you've observed and studied? Is there any answer, or is it all just a beautiful mess? There is no answer that I can understand, and I'm not actively looking for one. Do you think an answer exists? No. There is no answer that we can understand. I'm not qualified to speak about what we cannot understand,

[01:17:26]

but I know that we cannot understand reality, you know. I mean, there are a lot of things that we can do. Gravity waves, I mean, that's a big moment for humanity. And when you imagine that ape being able to go back to the Big Bang, that's... But the why? Yeah, the why is bigger than us. The why is hopeless,

[01:17:57]

really. Daniel, thank you so much. It was an honor. Thank you for speaking today. Thank you. Thanks for listening to this conversation, and thank you to our presenting sponsor, Cash App. Download it, use code LEXPODCAST, and you'll get ten dollars, and ten dollars will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to become future leaders and innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts,

[01:18:25]

follow on Spotify, support it on Patreon, or simply connect with me on Twitter. And now, let me leave you with some words of wisdom from Daniel Kahneman: intelligence is not only the ability to reason; it is also the ability to find relevant material in memory and to deploy attention when needed. Thank you for listening, and hope to see you next time.