[00:00:00]

Today's episode of Rationally Speaking is sponsored by GiveWell, a nonprofit dedicated to finding outstanding charities and publishing their full analysis to help donors decide where to give. They do rigorous research to quantify how much good a given charity does: how many lives it saves, or how much it reduces poverty, per dollar donated. You can read all about their research, or just check out their short list of top recommended evidence-based charities, to maximize the amount of good that your donations can do.

[00:00:25]

It's free and available to everyone online. Check them out at GiveWell.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and I am taping this podcast live at the Northeast Conference on Science and Skepticism.

[00:00:56]

Hi. I am very excited to introduce today's guest, who is here with me: Professor Robert Kurzban, a professor of psychology at the University of Pennsylvania, where he specializes in evolutionary psychology. One of his books, excellently titled Why Everyone Else Is a Hypocrite (it's such a great title, Rob), has been very formative in my thinking about rationality and has shaped a lot of my work and the talks I've given over the years.

[00:01:29]

So this conversation has been a long time in the works. Rob is also the co-author of The Hidden Agenda of the Political Mind, which is a book that I talked about in, I guess, my most recent episode of the podcast with Jason Weeden. And Rob, no pressure, but Jason was an excellent guest. So the bar is just that high. That's all I'm saying. So, Rob, welcome to the show.

[00:01:56]

Thanks for having me; pleasure to be here. So, as I hinted, the thing that I'm most excited to talk to you about is Why Everyone Else Is a Hypocrite. I thought we could just jump right into your thesis, maybe by talking about what motivated it. What is the mystery or puzzle in the world that demanded a theory to explain it? Yeah, well, as a psychologist, I consider myself to be a student of human nature. And the main lesson that I take from my time as a psychologist is that people are super weird and super puzzling.

[00:02:31]

And one of the things that's most surprising, or at least puzzling, to me is just how inconsistent we are. Part of my background is in economics, and economists view people as these coldly rational, consistent beings. And this is not my experience of humanity. And so, did anyone ever consider the theory that economists just don't go out very much and meet people? Yeah, that's actually the main theory, as it turns out. They need to get out more.

[00:02:59]

So I got interested in human inconsistency. And for me, as a student of evolutionary approaches to psychology, what was exciting was this idea that the mind consists of lots of different pieces, which connects very directly to the notion that there might be different pieces that are going on and off at different times. And if that's true, then you start to get a window into why we're not these uniformly consistent creatures: the mind just doesn't work that way.

[00:03:25]

It's kind of a collection of parts. And trying to use that idea to explain this fundamental mystery was extremely exciting to me. So let's talk about what it means to say that the mind is made of parts. Are you talking about physical parts in the brain that do something different and operate independently, or something else? What I'm talking about is really a functional sense of different parts. Like, if people open up their smartphones, as I see many in the audience doing right now... No, I'm not insulted at all.

[00:03:57]

Well, you'll see there are lots of different apps on there. Yeah, I think that's right. You see that there are little applications that are specialized. So one of them is a communications app. One of them is, you know, maybe for checking your stocks. One of them is for throwing pigs at birds, or birds at pigs, or whatever that game is. Right. But otherwise you're being productive. So the idea is, and we know this, right?

[00:04:20]

The mind has a visual system, which lets us see the world; a memory system, which allows us to remember scintillating talks like this one; and a language system, which allows us to process language and produce it. And so that's the sense that I mean. Now, those things might be distributed all over the brain. I'm not too worried about the spatial element, in the same way that on a smartphone you don't care where Angry Birds lives. You just care that you can get to it when you tap it.

[00:04:41]

So what I'm interested in is this idea that the mind consists of parts, in the sense of parts that are specialized to do different jobs. That's the fundamental message. And because of this so-called functional specificity, once again, you get the idea that there's not a unitary entity in there. It's a lot of different pieces that are somehow working together. How controversial is that? Because, I mean, to some extent, as you say, we have vision.

[00:05:08]

We can process language. There are all these different things that the brain is doing that you could think of as modules, or as analogous to apps. So in that sense, it can't possibly be controversial. But maybe there's a spectrum of how modular people think the brain is. Yeah, that's exactly right. So I think that almost every modern psychologist is willing to concede that there are specialized systems for vision. I mean, you poke somebody in the eye and look in there: oh, there's a retina that does one thing all day, just sits around and waits for light.

[00:05:37]

And then it says, oh, there's something out there. In that sense, everyone agrees there's specialization. But as you go up, metaphorically, through the cognitive system, it gets more controversial. So for memory, people sort of feel like, yeah, there's probably a specialized memory system. But now let's talk about the systems that underlie social behavior. Like, is there a specialized system in your head that's designed to identify when someone has cheated you, or is it

[00:06:00]

a more general process? And therein lies the controversy, and I feel comfortable saying that there's animated controversy. You know the old expression: the fights in academia are so venomous because the stakes are so small. Well, this is one of those cases where it feels important. Spoken like an economist. Well, I certainly think it's important. I mean, I think in many ways what's at stake here really is something super important, which is the fundamental nature of the mind. Does it consist of lots of different specialized parts all the way down, everything from vision to the sophisticated social systems that we have?

[00:06:34]

Or are there more general processes? And of course, all of this intersects with other kinds of theories, including my area, evolutionary approaches. The evolutionary point of view points to specialization, because we see that throughout the animal kingdom, and there are good reasons to think that. So yeah, I feel comfortable saying that there's sufficient controversy to keep the field going. I think that's right. So at the other end of the spectrum from you, perhaps, would be the "one algorithm" hypothesis that people like Jeff Hawkins sometimes talk about.

[00:07:05]

Yeah, it might seem like we have all these different functions: we're optimized for social goals, and then there's also vision and language and strategizing and so on. But really, the claim goes, it seems most likely that there's just one algorithm that can do all these different things, the same way that machine learning researchers can now take roughly the same kind of basic deep learning or reinforcement learning algorithm, throw a bunch of compute and data at it, and train it to do language, or to classify images, or to play video games.
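For readers who want that "one algorithm" picture made concrete, here's a minimal sketch, not from the episode and with purely illustrative tasks: a single generic learning rule (plain logistic regression trained by gradient descent) acquires two different behaviors given nothing but different training data.

```python
# One generic learning rule, two different tasks: the same code, fed
# different data, ends up computing AND in one case and OR in the other.
# Toy illustration only; tasks and parameters are hypothetical.
import numpy as np

def train_logistic(X, y, lr=0.5, steps=2000):
    """Plain logistic regression fit by gradient descent on log loss."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid prediction
        grad = p - y                            # gradient of log loss w.r.t. logits
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
for name, y in [("AND", np.array([0, 0, 0, 1.0])), ("OR", np.array([0, 1, 1, 1.0]))]:
    w, b = train_logistic(X, y)
    print(name, (X @ w + b > 0).astype(int))  # AND -> [0 0 0 1], OR -> [0 1 1 1]
```

The toy's only point is that nothing task-specific is built in; whether that kind of generality scales up to vision, language, and social reasoning is exactly what's contested next.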

[00:07:39]

The idea being, according to this hypothesis, that our brains work the same way. Why is that implausible to you? Well, I would say a couple of things. I mean, the first step, if we're going to have a debate like this, is just to talk about what the evidence would look like in favor of the different views. My reading of the evidence in psychology doesn't point me in that direction.

[00:08:00]

I mean, everywhere you look, it looks like there's specificity. But there are other issues, too. There are philosophical problems, in the sense that it's very difficult to lay out exactly what that algorithm looks like in a way that can actually solve the problems on the ground that the mind has to solve, particularly in real time. Certainly, people have tried this, right? The history of psychology dates all the way back to behaviorism, and the behaviorists said the same thing.

[00:08:23]

They said, look, just give me a child and I'll teach it by shaping it in different directions, right, with reinforcement and so on. And it just turns out that's not true. The reason the cognitive revolution replaced behaviorism is that pigeons tended to behave the way they were supposed to according to the behaviorist principles, and people really didn't. And then it got a second incarnation in the context of the connectionist ideas that we saw in the '80s and into the early '90s.

[00:08:52]

And once again, my read of the evidence there is that the systems just don't have enough content in them to actually solve the problems. Don't get me wrong, I think that learning is super important. But one of the things I think we're learning about learning, as it were, is that you need different kinds of learning systems to learn different kinds of information. So the thing that learns language is probably not going to be the same thing that learns who should be your friend, who should be your mate, who should be your ally, and who should be your enemy.

[00:09:18]

You just need different information and data structures in order to solve those problems.

[00:09:23]

So what would be an example or two of a more social or psychological function that a module might serve in your theory? Not like vision. So the one that I mentioned earlier is usually held up as the prototype: this idea of cheater detection. This comes out of some work that my former advisors did at Santa Barbara, where what they were able to do was show, with different psychological tasks, that it looks like there's a system in your head.

[00:09:47]

And all it's doing all day is looking around and asking the question: is there somebody who has taken some benefit without having met the requirement or paid the cost? And you can give people logic problems that they're very bad at. And then if you just recast them into social language, where there are costs and benefits involved and people are potentially cheating, all of a sudden people become extremely good at reasoning about them. Now, that wouldn't be true on the kind of very general system that you have in mind. If we were just good logicians, for example, if I give you a problem of the structure "if P, then Q," you should be pretty good at handling that.
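This is the Wason selection task. As a minimal sketch of the logic, using the standard textbook card sets rather than anything quoted in the episode: for a rule "if P then Q," the only cases that can falsify the rule are P and not-Q.

```python
# Wason selection task: for "if P then Q", only the P and not-Q cards
# can falsify the rule. Card sets below are the standard textbook
# versions (illustrative, not quoted from the episode).

def cards_to_flip(cards, is_p, is_not_q):
    """Return the cards a perfect logician must turn over to test the rule."""
    return [c for c in cards if is_p(c) or is_not_q(c)]

# Abstract framing: "if a card has a vowel, it has an even number on the back."
print(cards_to_flip(
    ["A", "K", "4", "7"],
    is_p=lambda c: c in "AEIOU",                         # vowel = P
    is_not_q=lambda c: c.isdigit() and int(c) % 2 == 1,  # odd number = not-Q
))  # prints ['A', '7']; the '7' is the card most people miss

# Social-contract framing: "if you're drinking beer, you must be over 21."
print(cards_to_flip(
    ["drinking beer", "drinking soda", "age 25", "age 16"],
    is_p=lambda c: c == "drinking beer",
    is_not_q=lambda c: c == "age 16",
))  # prints ['drinking beer', 'age 16']; the version people reliably get right
```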

[00:10:18]

That turns out not to be true. If you add social content to these problems, you get improved performance. And what that points to is the idea that there's a system in your head that is specifically good at doing that. When you engage it, you start to see that performance is much, much better. And that's pretty good evidence of specificity. Of course, one of the other places this question comes up is language, and people are still talking about whether there's a specialized language acquisition device.

[00:10:43]

Chomsky, of course, made his career on this. And I think many people find those ideas, as extended by Steve Pinker and others, fairly persuasive: it looks like there's at least some architecture in your mind that's pretty good at language. And don't get me wrong, I think this is an ongoing research enterprise. Right? That's why I have unbelievable job security. Well, there's also tenure. But we'll never, never run out of human weirdness.

[00:11:09]

There's always more weirdness. There's always more controversy. There's always more data to be gathered. And I think we've established that I'm more charming than Jason at this point.

[00:11:20]

Now, I like you both equally. You're both special and talented. Thanks, Mom. You're welcome. OK, just to get a little bit more clarity on the idea of a module.

[00:11:30]

If you make the claim that the mind has all these different modules which serve different functions, is that saying anything above and beyond the claim that the mind serves different functions? Like, is the word "module" doing any work there? The word module is only a shorthand, so that I don't have to keep saying "functionally specialized, integrated computational system" over and over again. So it's a placeholder for that idea. But like anything else, the crucial thing about the commitment to modularity is that it makes predictions in terms of the empirical work.

[00:12:04]

So if you're an evolutionary biologist and you make a claim that some structure is specialized for some task, you're making an empirical prediction that the structure is going to show design features that are really good at doing that task. And what this does is give us a lot of leverage in terms of predictions that we can then test in the laboratory. And that, I think, is the crucial part of this: yes, it is a framework for understanding the mind.

[00:12:29]

And in that sense it's useful in and of itself. But as a scientific matter, where it becomes incredibly useful is that, look, we now commit out loud to these properties, and that allows us to go in and test them, and other people can do their tests, and then we can arbitrate these issues. And in that sense, it's not just a word. It has content in terms of the way we do our business of science.

[00:12:54]

Last thing on the "how modular is the mind" question: I imagine if you just had one really good reinforcement learning algorithm and threw a ton of data and computational power at it, it could look like it was really good at a bunch of different specialized tasks. That doesn't necessarily mean there are specialized modules for them. Look, again, I think these are empirical questions. I mean, there's the other point that learning only gets you so far, because the system has to do something with the learned information.

[00:13:20]

And it also has to structure its learning. So philosophers have wrestled for a very long time with the problem of induction, and the thing that a child faces as they're learning about the world is that they really don't know what it is they ought to induce from any given set of stimuli. So there are limits, I think philosophical limits, to what reinforcement learning can do. And I should also say, you know, developmental psychologists are showing us all the time that systems are coming online in young babies and infants way sooner than they should be, given the amount of information they have to work with.

[00:13:51]

So it looks, to many of us, like there's a lot of computational flexibility that is coming online very quickly, and it's hard to tell a good story about exactly how the organism learned that from a blank slate. If people want to look into what I consider to be the best compilation of data against that argument, it's Steve Pinker's The Blank Slate, which I suspect your listeners and the people in this room are familiar with.

[00:14:19]

One of the agendas of that book, or the agenda of that book, was to say: look, if you think it's just a blank slate and you're going to learn everything, you've got to explain all these data which push the other way. And I think that should be the sort of thing that persuades you. Of course, the philosophy, the problem of induction, should be persuasive to some extent. But the evidence that psychologists and economists and sociologists and others have gathered, that should be the thing that makes up your mind.

[00:14:43]

OK, so let's move on to hypocrisy and self-deception. How does the modularity hypothesis help explain those? So first, I think it's important to define terms, since people use these terms in different ways. I think of hypocrisy as the case in which you say doing X is wrong and then you yourself do X. It's an inconsistency between what you morally condemn and what you actually do. And the story that I tell about that (I use the word "story" just to mean an explanation, not anything pejorative) is this: look, one of the specialized systems in your head is this moral system.

[00:15:18]

We all go around the world trying to identify when people do wrong things; we're very sensitive to other people's moral failings. But by the same token, when we ourselves are choosing what to do, we don't always, shockingly, use our own moral compass in deciding what we're going to do. And so the argument is that you have one system in your head which is designed specifically for moral condemnation, and that leads to saying things like, you know, you shouldn't tweet insults at people, particularly if you're a high-status individual and they're a lower-status individual.

[00:15:50]

Some people might have that view. I'm just speaking randomly here. This is totally random. Right? And then you have your behavior, which, let's say, might be driven by this very hot emotional system which causes you to engage in very aggressive behavior on social media, for example. So on the one hand, you might say, look, here's my moral principle, and that's guiding me in what I say. And then here is my sort of behavioral system.

[00:16:15]

And that's guiding me in terms of what I do. And those things are not necessarily going to track one another. Right. There's nothing that prevents people who condemn doing X from themselves doing X when they decide to do it, when it's in their best interest. And that, I think, is the crux. And I just gave away the whole book, and now you know why everyone else is a hypocrite, no need to buy it. That's the whole crux of it, which is that we're moral creatures in part.

[00:16:39]

That is to say, we're good at condemning other people, but we're not always moral creatures in our behavior, because the part of your mind that guides condemnation is a different part than the one that guides behavior. And that's sort of the fundamental conflict. So it makes sense to me why that would be a useful inconsistency to have, just from my own self-interest: I want to get away with as much as possible, and I also want other people to not get away with stuff.

[00:17:04]

But what about something where there seems to be more conflict? Like, I ostensibly have this goal that I want to achieve, but at the same time I do things that are inconsistent with that ostensible goal. Or I have a self-image that isn't necessarily true or justified based on the information. There are a lot of cases like that, and maybe that's not really what we'd call hypocrisy; we might call it self-deception. Those cases seem a little harder to explain from a self-interest point of view.

[00:17:34]

I agree. And I would say that those things have different sorts of explanations. So let's take the first one. Right. We all want to be in good shape as an abstract, long-term goal, but none of us really wants to go to the gym. I mean, some of us like to go to the gym, and many of us say that we love to go to the gym, at least on our Tinder profiles.

[00:17:48]

"Love going to the gym." Right. Right. So the explanation for that is still a modular story, which is that you have some modules in your head which are really forward-looking. Those guys look ahead. Your frontal systems are like, "I want to be healthy in the future." And then you have these other systems, which are designed to cause you to conserve energy and not want to exert yourself: "I just want to stream Netflix right now."

[00:18:12]

Right. And those two things are in conflict. And that's because you've got these two different modular systems. So I think the modular approach explains why we have, on the one hand, long-term goals that push us in a particular direction, and then these short-term reward systems which push us in the opposite direction. So that's, I think, how I would explain that kind of inconsistency. Just to bookmark the other thing you talked about:

[00:18:36]

What's the ultimate explanation? Like, how could behavioral economists and psychologists and evolutionary psychologists not acknowledge that we have inconsistent goals, such that it seems like the best way to model them is that different parts of our mind want different things and they're in conflict? I would say that the way that I put it is not super controversial. That is to say, I think many people would think of it more or less that way. I use a slightly different language from the way that other people do.

[00:19:07]

So people like Danny Kahneman and Amos Tversky and so on talked about dual systems theory. I mean, one of the big debates, and you sort of alluded to this earlier, is how many different systems there are. So I want to say there's a ton of them, and people like Kahneman and Tversky would say just two. When they're talking about multiple systems, their granularity is just a lot coarser than mine is.

[00:19:30]

So I want there to be a lot of systems in there, all of which have different discount factors or what have you. They think, well, it's actually just a couple of them, and they're in conflict from time to time. But more or less, I think there's a certain amount of agreement there. What we agree on is that people are not consistent temporally; they do very strange things over time. You know, and I think a lot of economists would say, look, this is just irrationality, and they might have other ways to model it.

[00:19:57]

But I think this is a particularly valuable way. But I do want to distinguish that from the other thing you said, which is also interesting, which is self-deception. I think that's a slightly different story. The way I think about self-deception is that, in the modern world and also in the past, one of the things that we do as human creatures is meticulously cultivate a certain reputation. And one of the main sources of information that other people have about us is our own stated beliefs about our traits, about our future, and so on.

[00:20:31]

And if you think that what you say about those things is going to influence other people's judgment of your reputation, then sometimes it can be very valuable to broadcast flattering things about yourself that are not necessarily true. Right. So no one knows how good a driver I am. But if I go around saying that I'm in the top 10 percent of all drivers in the country, then you might have reason to believe me.

[00:20:54]

And speaking of what you said about Tinder profiles, I have a 4.95 rating over there right now. Excellent Uber passenger. So I mentioned that example in part because there's research on this, right? Something like 70 percent of people who are surveyed put themselves in the top 15 percent of drivers. We know for sure that can't be true; as a statistical matter, it can't be true. So then the question is, why do people believe things that must be false?
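A quick back-of-the-envelope check on that claim, using the figures as quoted here (published surveys vary in the exact numbers):

```python
# If 70% of respondents place themselves in the top 15% of drivers,
# then at least 55 percentage points of them must be wrong, since by
# definition only 15% of any population can occupy the top 15%.
claimed_top = 0.70   # fraction who say they're in the top 15%
possible_top = 0.15  # fraction who can actually be there
print(f"At least {claimed_top - possible_top:.0%} of respondents must be wrong")  # 55%
```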

[00:21:22]

I mean, this community should be extremely interested in that question, right? The skeptics community is obsessed with the question of why otherwise sensible people believe things that are false. And one of the things I love about that research is that it really strongly suggests that, yes, you also believe things that are false, in particular about yourselves. And why? Well, if it's true that my having a false belief about myself improves my ability to persuade you that I'm a successful, articulate, good-driver kind of guy, well, then that's good for me.

[00:21:51]

So on this view, the self-deception part is really a reputation maintenance part. And a lot of these things are hard to penetrate. Right? It's hard for anyone to see how good a driver you are. You could check how good an Uber passenger I am by going into my app, but I have a little fingerprint sensor on my phone, so I don't think you're going to be able to get in there. But in general, reputations are hard to verify.

[00:22:14]

Yeah. On the driving example in particular, it occurs to me that it could technically be possible for a majority of the population to not be wrong that they're above-average drivers, because they could be defining quality of driving differently. Like, maybe some people think, "Well, I get to where I'm going faster than other people," and maybe they do. And other people think, "I get into fewer accidents than most people," and maybe they're right too.

[00:22:34]

That's right. But we could be very selective in how we define good driving, such that we fit the criteria. That's right. And the way that we finesse this problem is that we also have research where the criteria are much more objective. And what you find is that for those traits that are a little bit more objective, it's true that the effects diminish, but more or less they never go away. We're always just a little bit more optimistic about ourselves.

[00:22:59]

We still get it wrong, just a little less so. So I agree with you that things like driving have multiple different dimensions that you could evaluate yourself on. But that's another case where you've got to look at the entire body of evidence in this work, and I think you'd see the effect more or less even in objective cases. So, for example, optimism: we know how many people are going to break their legs in the next year, as a fraction, and people wildly underestimate their probability of breaking a leg.

[00:23:28]

So step carefully out there.

[00:23:31]

Which do they do? Do they correctly estimate the general rate of leg breaking in the population and then think their own rate will be lower? Or are they just confused about how likely it is for people in general? They underestimate the base rates, and then they further underestimate their own risk. Yeah.

[00:23:45]

How do we distinguish between the two hypotheses? One: people self-deceive in this way so that they can more effectively convince other people that they're better than they really are. Versus, on the other hand: people self-deceive because they just want to feel good about themselves, and it feels good to believe that you're in the top whatever percent of drivers. What would distinguish those? Yeah, good question. So, you know, in psychology, the idea that we're motivated by the drive for self-esteem has generated one of the largest bodies of literature out there.

[00:24:17]

This has been a source of a tremendous amount of work. But by and large, when people go in and try to test that hypothesis, by looking at people's beliefs and looking at how those affect their self-esteem, that work has shown that these effects are extremely small or even zero. And there have been some meta-analyses on these bodies of research, which look at the entire set of published findings in an area.

[00:24:40]

And the conclusions in those analyses suggest that you just can't explain people's beliefs or behaviors given the motive to maintain self-esteem. I think there are some other kinds of evidence that point away from it as well. So, for example, the kinds of effects you see in this literature depend on whether or not the claim you're making is a public claim. People tend to inflate what they're saying about themselves to the extent that they think those kinds of claims are going to be known by experimenters or by other people, and so on.

[00:25:12]

If it were just a motive to make yourself happy, that should be irrelevant. You shouldn't care who's going to know; you should just care about the belief itself. So I think those are two important pieces. And I will say there's a theoretical piece, right? As an evolutionary psychologist, my view is that natural selection has never cared how happy or sad an organism is. It just cares about its survival and reproduction. It would be very strange if natural selection designed an organism to seek happiness, something that's internal to the organism, as opposed to outcomes in the world, things that are going to lead to its survival and reproduction.

[00:25:43]

So I think on two empirical grounds and one theoretical ground, the "self-deceive to feel good" story doesn't do very well. So would your model predict, then, that people would not self-deceive about whether their life was going well? Like, people would not be likely to avoid thinking about stressful topics, like how much of a pickle they're in, or whether they're going to be able to solve their problems?

[00:26:12]

Or would your model also predict that that's something that would make people look bad to others, if others knew they were in a tough situation? Well, there are good reasons for people to think about their problems, because it helps them drive toward a solution. So I think I agree with that. But people will often not think about health problems, for example, because it's just too upsetting to consider that things might actually be worse than you'd like to think they are. Is there anything that would be upsetting for someone to think about, but wouldn't make them look bad to their fellow primates, that we could use to distinguish the hypotheses?

[00:26:48]

Yeah, I admit I haven't thought through that in detail. I mean, I will say that one thing people tend not to think about is cases where, yeah, it's bad to think about, and also there's nothing they could do about it if it turned out they were actually in serious trouble. People don't seek out information that they can't do anything with, or at least they rarely do. Take the many people, for example, who forgo getting medical tests done.

[00:27:11]

Many of them have beliefs about how, because of their insurance or for whatever other reason, there's just nothing they would change anyway. And so they forgo these tests. And I'm not denying that part of that could also be reputational; there is reputational damage once you find out, for example, that you have something which is ultimately going to be fatal. You know, it's just a sad fact about human nature that we tend to prefer to spend our time with people who we think are going to be around longer.

[00:27:39]

And that's because we're strategic creatures. I mean, how many times have you tried to build a friendship with someone who just said they were leaving the state in three weeks? Maybe that's a cynical view of human nature, but I think it's important to be realistic about it. What about the just-world hypothesis, the folk hypothesis that if bad things happen, people deserve them? You know, people will rationalize their explanation for why the victim was actually at fault.

[00:28:08]

My understanding is that the theory of why people tend to do this is that it's just upsetting to think that the world could be random and good people could have bad things happen to them. So this seems like a kind of self-deception. It also seems like something that fits better with the model of people wanting to believe things that are nice, as opposed to people wanting to believe things that are strategic in how they appear to others. Yeah, I think there might be some of that.

[00:28:34]

And I think there might be cases in which people believe things because they sort of think it would be nice. But I think there are other stories there. So, you know, another sad fact about people is that we're always looking for excuses not to help people when it would cost us to help. And you see this at the interpersonal level. So here we are in Manhattan, and as I was walking here, of course, there were a couple of panhandlers, and most people walk by. And they have a story, maybe it's a true story, maybe not, about why they're not going to give, having to do with, look, this is a person who's not genuinely in need, or maybe it was their fault.

[00:29:10]

Those stories allow them to walk by and to choose not to do something beneficial. And that is useful as a social matter. Right. So we all want to be able to tell a plausible story about why we don't want to do the thing that's costly for us to do. And I think the just-world hypothesis is one of those stories. Right? If I have the belief that those people over there certainly had it coming, then I can say, look, I don't think that the federal government should be giving foreign aid to those individuals.

[00:29:39]

If they wanted to live better, they'd grow corn, or I don't know exactly how that would go. But I think the just-world hypothesis is like many of these kinds of stories. Sure, one possibility could be that we want to feel better. But I think we should entertain the notion that it has to do with having a good way to explain to other people why we've chosen, when it's costly, not to act when we could do something that would be valuable.

[00:30:03]

And I think these stories are super important. And we tell these stories in the political sphere, too. Right? If we have a particular political position, we never say, "Well, I think we should cut taxes on the wealthy because I'm wealthy and I'd rather have more money." No, we say, "Look, you know, that's going to stimulate economic growth and we're all going to be better off." The first explanation is just not a persuasive explanation, but the second one is.

[00:30:31]

So I'm curious how much we disagree here. There are three possible things that you could be saying. (A) You could be saying that self-deception is just purely helpful: it only helps us, it doesn't hurt us. (B) You could be saying that it both helps and hurts us, but overall it helps us more, so on net it helps us. Or lastly, (C) you could be saying that self-deception both helps and hurts us, and it's not clear what the net effect is, good or bad.

[00:30:59]

Which of those three? I'm going to say (D) none of the above, which is that I actually think the way to have this conversation is to get away from the notion of self-deception altogether. So let me try this: I think a better way to think about self-deception is what I've called being strategically wrong. You're being wrong in a way that is helpful for you. And that, by the way, does put me closest to your (A).

[00:31:23]

Well, I'll go that far. So, you know, the way I think about this is that when we talk about self-deception, in almost every single case what we're really talking about is something like: look, this person has this belief which somehow they really shouldn't have. They should believe that they're more likely to get a broken leg. They should believe that they're a worse driver. They should have a belief that's closer to what the reality is in the world.

[00:31:47]

But they don't have that belief. Then the question is why? And my argument is, well, it's because having the false belief is useful for persuading others about how wonderful you are. And so in that context, yeah, I actually do, and I'm reluctant to say this since we're taping, but I do actually think that strategically wrong beliefs are the product of evolved systems that were specifically designed to be wrong in this way, and that, yeah, it is helpful in the long run.

[00:32:11]

I'm not saying that self-deception can't be, in some cases, very damaging. So Robert Trivers has a book that came out two or three years ago, The Folly of Fools, and he takes a position which is not completely unlike mine, but is unlike mine in a couple of ways. First of all, he casts the net of self-deception much more broadly. He takes cases where people are just simply wrong about facts of the matter to be self-deception.

[00:32:39]

I think that doesn't quite get the point. But then he talks about cases in which, he says, these beliefs have horrible outcomes. So he has a story in there about some pilots who had the belief that they were cleared for takeoff, which turned out not to be true because of a radio malfunction, with catastrophic results. I don't consider that to be self-deception. I consider that to be a case where they were definitely wrong. There's no doubt that they had a false belief.

[00:33:05]

But maybe we should distinguish. I would call something self-deception if there's a rationalization process going on, where they have all the information they need to know what the right answer is. And, in fact, if they had less of an emotional or personal stake in the question, if it were just a pure logical exercise, like an abstract word problem, maybe they would get it right. But because they have some motivation to get a particular answer, they end up deceiving themselves.

[00:33:30]

But then there's also self-deception about yourself, which is the main thing we've been talking about. So that's, I guess, not what he meant. Yeah. And again, he casts the net more broadly. Well, let me put it this way: I think there's another important lesson that modularity has to teach us about this. Sticking with the driving example, I actually think that it's completely plausible that in people's heads you have two beliefs that are mutually contradictory, but they exist at the same time.

[00:33:53]

So stored in one part of your head could be the idea that I'm a very safe driver, a superstar, or whatever. And then in another part of your head could be a belief that is more accurate about your skill: there's somewhere in there that knows I'm actually not that good. Right. And so when I actually get behind the wheel of a car, which I hardly do at all since I use Uber so often, I'm excruciatingly careful and I'm very vigilant.

[00:34:21]

I don't try to show off, and so on. So on the one hand, I might broadcast to the world that I'm a super skilled driver, but then when I actually get behind the wheel, my behavior is driven by the true representation, the true belief that I'm actually kind of impaired in this way. And so I think one of the big problems in this area is the idea that there's just one belief in there somewhere, sort of one ring to rule them all, when in fact you can have beliefs in different parts of your head that are doing different jobs.

[00:34:50]

One does a public relations job; one does a behavioral guidance job. And that seems like kind of a weird thing, but given the architecture of the mind, I don't think it's implausible. I can definitely think of examples of failed entrepreneurs who weren't doing this sort of strategic switching back and forth between self-deception and truth-seeking depending on which was useful to them. They weren't being confident in meetings with potential investors, truly believing in those moments that their startup was definitely going to succeed and had no problems,

[00:35:28]

and then, once they got home, trying to be as accurate as possible with themselves about all the potential risks and pitfalls and flaws. They were just confident the whole way through, and in retrospect they think that they were overconfident and were sort of in denial about the risks and problems. That feels like a counterexample. Yeah, you're right, that seems like such a case. It's hard, though.

[00:35:51]

You don't want to have hindsight bias. So for every entrepreneur who was super confident and then had a bad outcome, you want to ask the question: for how many of those entrepreneurs did that confidence actually lead to good outcomes? Right. But it's a minority, though. Yeah, but if the stake is a unicorn, if it's "if I win, I get a billion dollars; if I lose, I lose a thousand dollars of someone else's money," then I think people's cognition is really sophisticated.

[00:36:16]

Are people perfect at mathematics? No. But are they pretty good at computing the probabilities? Yeah. And I think in these contexts, look, one might make the argument that recent history has shown that a ridiculous amount of confidence and optimism has led someone to an incredibly good outcome that they somehow shouldn't have gotten, given the actual skills they have for the job for which they were applying to the American people. Not to name anyone. I'm sure I don't know what you're getting at.

[00:36:45]

So, you know, I think cases like that illustrate that, sure, confidence that is not justified can in some settings lead to disastrous outcomes, as in the airplane case. But by the same token, you know, I think William James was one of the first people who talked about this: humans are deeply social creatures, and we are subject to self-fulfilling prophecies. The example that he gives is in the context of mating, where he talks about a suitor who is just so sure that the woman he is pursuing is going to fall in love with him that he brings about that outcome.

[00:37:24]

And part of it is because there is something compelling about that kind of certainty. Right. There's something kind of weirdly compelling about the person who leads you to believe: I know for sure we are meant to be together forever. You're my lobster. This is it. I think you should put that quote on your Tinder profile. That's my free advice to you. I appreciate it.

[00:37:47]

OK, wait, so we only have a few minutes left, so I want to make sure, just to be clear: I think we do disagree, in that I'm less bullish on the benefits of self-deception on net than you are, although I agree that in some contexts it can be helpful. It's just not clear to me, (a), that the benefit isn't outweighed by the costs of self-deception. And (b), separately, it's not clear to me that we couldn't intervene on the system and make some improvements to get a better outcome, that we couldn't do better than evolution did.

[00:38:20]

And one reason that I'm less bullish, one reason I think we should expect that the solutions evolution found would not be optimal now, is the many differences in our decision-making environment now compared to the evolutionary environment. So just to throw some examples off the top of my head: it seems quite plausible to me that there are many more skills that we can intentionally improve on now than there used to be, skills where, you know, if you really put in the time and effort, you can get better.

[00:38:58]

And if that didn't used to be the case in the environment of evolutionary adaptedness, then maybe it really was just better to try to convince everyone that you were much stronger than you actually were, or much more fierce in battle, or much more reliable, or something like that, because you couldn't improve on it. You might as well just believe you're already great and get the benefits of social persuasion. Whereas now the situation has changed: the number of skills available for us to train is much bigger, and they're more complex.

[00:39:25]

But also, we maybe have the mental capacity now to decide to do deliberate practice on public speaking, or on math, or something, again and again until we improve. And we won't do that if we already believe we're perfect. So that might be one difference. Another difference is just in our social environment: looking good to the people around us was really important back when we lived in a small tribe, and social disapproval or scorn could mean getting cast out of the tribe.

[00:39:54]

But now, like, if you go to a bar and get rejected by 10 people, it doesn't really matter that much. But it feels like it does, because our brains seem to have been optimized for being really, really risk-averse when it comes to rejection. So for those and other reasons, it seems to me that the sort of intuitive calculus our brain is doing when deciding when to self-deceive might just not be optimal. Yeah. Now, those are really interesting examples.

[00:40:22]

I think in particular that last one is interesting. Like, in the modern world, it is bizarre how infrequently that happens, given that the costs of rejection are so low. I mean, in places like New York City, men should be having relatively short interactions to try to judge interest, and then moving on after they get a rejection. And there I agree with you: the problem is that you've got this evolved psychology where, if you were rejected by someone in a world of 50 potential mates, that was a big deal.

[00:40:49]

You lost a big fraction of your options. In the modern world, it means nothing, of course. Right, being swiped left doesn't really matter. Well, just briefly, to clarify: I understand these are evolutionary stories, but given that the model itself is based on an evolutionary story, I'm saying that, accepting that premise, I think we should then also expect that it would be an imperfect solution. But the example that I thought you were going to go to, which I would have agreed with, is gambling.

[00:41:16]

So there's a case where other people are structuring the world, I think, to take advantage of our overoptimism. So gambling in the modern world, an adversarial environment, then? Yes, and this is a good example, it seems to me, because there, overoptimism is a bad idea. And in fact, it costs people who really can't afford to lose money tons of it every day. And that's because you do have a system in your head that is, on the margin, a little bit too optimistic.

[00:41:42]

And you're faced with an adversarial world where people want to take advantage of that. Boom. All of a sudden you've got Las Vegas and Monaco and Atlantic City, where they make all of their margins on the fact that everyone is wrong about how good an idea it is, putting aside the issue of the value of the entertainment or what have you. But there is a case where I agree with you: the modern world has presented the ancient psychology with this tremendously difficult problem, where specifically the errors it's making lead to catastrophic, well, negative outcomes, let's say.
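As a back-of-the-envelope illustration of the margin he's describing, assuming an even-money bet in American roulette (my example, not one from the episode):

```python
# An even-money roulette bet wins on 18 of 38 pockets. The true expected
# value is about -5.3 cents per dollar; a gambler whose optimism nudges
# the perceived win probability up to 0.5 sees a "fair" bet instead.
p_win = 18 / 38
true_ev = p_win * 1 + (1 - p_win) * (-1)
perceived_ev = 0.5 * 1 + 0.5 * (-1)
print(f"True EV per $1 bet: {true_ev:+.4f}")            # -0.0526, the house edge
print(f"Perceived EV per $1 bet: {perceived_ev:+.4f}")  # +0.0000, feels fair
```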

[00:42:09]

So that part I would agree with. Yeah. I mean, don't get me wrong, I 100 percent agree, and I have taken the position that the fact that the modern world is so different from our ancestral world leads to some bad decision-making. That I agree with. What I'm reacting to in some parts of this work is the idea, and I think many people, again coming back to the tradition of Kahneman and Tversky and others, have taken this view, that there are all these flaws and all these biases.

[00:42:36]

And the work that I've been proposing says, look, some of these things might not actually be as bad as you thought they were; there can be advantage to error. And so in some sense, it's less taking up the extreme position than tacking back toward the middle. Yeah, and this is actually a running theme in my podcast: looking for ways that apparent irrationality might be rational in some contexts or under certain conditions.

[00:43:03]

Like, I had Dan Sperber on, talking about the argumentative theory of reason: the idea that rationalizing and making bad arguments might actually be useful, because reasoning is designed to persuade other people to agree with you, not to figure out the truth about the world. And I've done a couple of episodes with Tom Griffiths, who argues that apparent cognitive biases might actually be really good solutions under bounded conditions, where you have limited time and resources and you have to avoid serious downside risks.

[00:43:32]

So, I mean, I think this is an additional, very important pillar in this project of re-examining apparent irrationality. Yeah. And on that note, we were talking a little bit before the show about sources that are relevant. Along the same lines, Gerd Gigerenzer and his group at the Max Planck Institute have a whole body of work that flies under the flag of heuristics that make us smart. And the idea there is that, yeah, these little shortcuts, these little heuristics that we have, are in the service of getting to good outcomes.

[00:44:01]

And that's a whole other research community that has more or less come to the same conclusions: yeah, sure, there are going to be shortcuts, there are going to be imperfections in the system. But you can tell by the way they frame it that they're talking about how this gives us an advantage. And so for listeners who are interested in a body of work that pushes this in what I think is a really productive direction, looking at Gigerenzer's group and the books they've produced would be a really rewarding experience.

[00:44:28]

So I was just about to wrap up and ask you if you wanted to give a pick for the episode, which is a book or blog or journal article that has influenced your thinking. Is that your pick, or do you have another pick? Well, I have another one. I think that in the modern era, people are talking so much about artificial intelligence that they have forgotten the good old-fashioned artificial intelligence guys. And so part of what influenced me was Marvin Minsky's book The Society of Mind, which dates all the way back to the eighties.

[00:44:56]

And I hear a smattering of applause. So I think for people who don't just want to look at the cutting edge, but want to take a look at some of the background work, from someone who was trained in computer science, which I think still has relevance today, that would be a very rewarding read. Excellent. Well, Rob, it's been such a pleasure getting to chat with you finally. Thanks for coming on the show.

[00:45:21]

Thanks for having me. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.

[00:45:39]

Thank you.