Today's episode of Rationally Speaking is sponsored by GiveWell, a nonprofit dedicated to finding outstanding charities and publishing their full analysis to help donors decide where to give. They do rigorous research to quantify how much good a given charity does, how many lives it saves or how much it reduces poverty per dollar donated. You can read all about their research, or just check out their short list of top recommended evidence-based charities to maximize the amount of good that your donations can do.
It's free and available to everyone online. Check them out at GiveWell.org. I also want to let you all know about this year's Northeast Conference on Science and Skepticism, being held in New York City June 29th through July 2nd. I'll be there taping a live podcast, and there will be lots of other great guests, including the Skeptics' Guide to the Universe, my former co-host Massimo Pigliucci, the amazing James Randi, and keynote speaker Mike Massimino, former NASA astronaut.
Get your tickets at NECSS.org, that's N-E-C-S-S dot org.
Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and I'm here with today's guest, Professor Tania Lombrozo.
Tania is an associate professor of psychology at the University of California, Berkeley. She's also an affiliate of the philosophy department and the director of the Concepts and Cognition Lab. Tania, welcome to the show. Thanks for having me, Julia. So Tania's research is situated at the intersection of philosophy and cognitive science, and asks questions both about how humans reason and how they theoretically, normatively should reason. It's all utterly fascinating, and so it was very hard to pick an area to focus on for today's episode.
But the aspect of your research, Tania, that I was hoping to focus on today is the research on explanation. So people are constantly reaching for explanations about the world. Like, a tragedy happens, and we want to explain why it happened. Or we want to explain patterns that we see in the world, like why are some people successful and other people aren't? And so one of the many things that Tania's work looks at is what kinds of explanations we reach for.
Like, for example, do we have a bias in favor of simple explanations over complicated explanations? And to the extent that we have this bias, is it justified? Does it make us more accurate? And if not, does it do other things for us, and so on? Those are all questions that we will get to in this episode, I hope. But maybe to kick things off, Tania, can you just kind of situate the study of explanation in the broader project of cognitive science? Like, let's say we solved all the questions we're interested in with respect to, you know, humans and explanation.
What would that open up for us, or what other mysteries would that solve for us about human psychology? Right. Well, I think one way to motivate why it is we should care about explanation, and what it can tell us, is to think about the ways in which explanation actually may be a little bit mysterious. So if you think about the sorts of things that we want science to do, and the sorts of things that we want our everyday cognition to do, we want it to be able to support prediction and control.
It's very clear why those things are valuable, predicting and controlling the world around us. That's right. We want to be able to intervene on the world to make particular outcomes that are desirable come about. We want to be able to intervene on the world to prevent things that we think are not desirable, right, and so on. And that, I think, is true both for science, the way we approach the world as scientists, but also just the way that the human mind works.
We need to be able to predict and control the environment in various sorts of ways. But we also are really, really motivated to explain. And it's a whole lot less obvious what explanation is doing for us. Why is it that we bother engaging in this activity of trying to make sense of something, or trying to understand why it happened, asking why and trying to find good answers to those questions? And I think the answer to that that's attractive, that I've advocated and many other people have as well, is that something about seeking explanations and engaging in this process of trying to explain is instrumentally valuable, because downstream it's going to help us predict and control better. By trying to explain and understand something, we're going to be in a better position to make inferences related to that in the future. So if you think about explanation playing this very fundamental role in the very mechanisms that allow us to predict the world, the very mechanisms that allow us to know how to make effective interventions in the world to bring things about or to prevent outcomes, then it's really clear why it should be super important for understanding both the human mind and the scientific process.
Right. If we can understand explanation, we can understand learning.
We're going to understand how inference works, and understand how the process of discovery works. And the descriptive aspect of this, i.e., looking at what heuristics the human brain seems to be using when we reach for, or come up with, explanations, can be useful for the purpose you're describing, because we might be able to discover ways in which we're systematically seeking explanations, or types of explanations, that maybe aren't ideal, or aren't ideal in the kind of modern context that we are now using them in,
even if they were optimal when they evolved.
Yeah, I think that's right. So I think if we understood how explanation supports good learning and good inferences, we could presumably foster that better. But also, by understanding the cases where we tend to get things wrong, then maybe we can have quite good correctives. Right. So, for example, if we discover, and there's some evidence for this, that we favor simpler explanations more than we should, then that's a case where maybe recognizing that explanatory preference can actually lead us to be better reasoners, by making us more wary of that kind of a preference when it shouldn't matter.
One thing that I've noticed personally about the usefulness of explanation is that it just helps you personally understand something so much better. Like, I used to teach these classes on reasoning and decision making and judgment, and the concepts and the techniques that we were teaching were relatively complicated. And often, talking to people after the classes, we would realize, oh gee, they really didn't understand what we were saying. They seemed like they understood, they were nodding their heads, they felt like they understood.
But in talking to them, we realized they really didn't. And so then we started adding in this kind of paired tutoring session, where we had people explain the classes to each other. And even though there was no new content in this class, it was just people explaining things to each other that we'd already taught them, it became by far the most popular class that we taught. And in the feedback forms, people kept saying things like, wow, I didn't actually understand the classes until I tried to explain them to other people and realized what was missing, and so on and so forth.
You reinvented a discovery from the cognitive science education literature, which I think is really an important one. I think it fits a lot of people's experience that when you try to teach something, you often come to understand it better, or realize that there are gaps in what you thought you knew. In the research literature, there was an influential paper in the late 80s where the researchers were trying to figure out what it was that made some students much more effective learners than others.
In some of the earlier studies, they looked at the patterns in the study behavior of students going through physics problems in physics textbooks. And they found that some of them were doing this thing which looked a whole lot like explaining to themselves. They were just doing this spontaneously as they worked through the problems, thinking, OK, well, why does that step follow, and why is this the case, and so on. And the students who were doing more of that self-explanation were also doing better on post-tests, showing that they had learned more from this training.
And so initially, those studies were just correlational. Maybe smart people just do this, and smart people also do better on the tests. Exactly, exactly. So the next step was to look at that experimentally. And what they would do is these studies where they would give people some sort of a learning task. Some of the early ones involved students learning about the circulatory system and how the heart works.
And they would then randomly assign them to two conditions, and have half of them explain to themselves as they studied this, and have the other half do something else. There have now been many, many years of these studies, and the control conditions are varied. Sometimes they just match for time but don't give other instructions, sometimes they have them think aloud, sometimes they have them go through the materials twice. So there's lots of variation in what, exactly, you compare the self-explanation condition to.
But what you find in most cases is that those who are prompted to self-explain do significantly better on measures of learning afterwards than those in the control condition.
You know who else has independently discovered this phenomenon? Computer programmers. There's this phenomenon that I learned about after we started having these tutoring sessions, called rubber duck debugging in the computer programming world. Whoever started this trend had a rubber duck sitting on his desk, I guess just for decoration or companionship or whatever. And the way this started was, programmers would encounter a bug in their code. They would notice their code wasn't working, figure out that there was a bug, couldn't find it, and then would try to explain to a fellow coder what was happening and where the program was failing.
And before they could even ask for help, they would interrupt themselves and go, oh, I know what it is. And then they would go solve the bug. But this was somewhat inefficient, because it requires another person to come over to your desk and sit there while you explain it to them. So this coder started explaining the problems to the rubber duck sitting on his desk instead. And that worked basically just as well. He would talk aloud to the duck about the problems with his code, and then solve the problems.
I didn't know that example, and part of what I really like about it is that it illustrates something that I think is really cool about this process. Usually the way that we think about learning working is that you get some sort of new information from the external world: you observe something that you hadn't observed before, somebody corrects your mistake. Those are the sort of canonical cases, right? But in these cases where you learn by explaining to yourself, the only sort of new information you're getting is information that in some sense was already in your head.
Right. And so what's nice about the rubber duck case is, the rubber duck is not giving you any feedback, the rubber duck is not telling you when you're wrong, and it's not even sure what you're getting at. Exactly. Nonetheless, there seem to be some benefits, at least in some cases. Right. And so what's cool about that is, I think the benefits are not coming from those standard learning mechanisms that we think about, which derive from the feedback we get. So it's a phenomenon that in my lab we've been calling learning by thinking, in contrast to learning from observation, which is, I think, the more canonical case, where what you're observing is sort of outside your head.
It comes from another person, it comes from a textbook, it comes from the natural world. But in cases like explaining to yourself, in cases like thought experiments in science, in cases like mathematical proofs as well, there's a sense in which it seems like you can learn something genuinely new, even though you're not getting new input from the external world. This was a big update for me a few years ago. I used to get into arguments, especially with people from the humanities, because I didn't see how you could learn from fiction.
People say this about novels, about fiction, all the time, that you learn about the world, learn about human nature and what it is to be human. And I kept objecting: but it's fiction, it's not actually real, so whatever new things you think you're learning aren't actually justified, because they were just made up. I mean, they may be representative of the world, but you don't know, because the author wasn't optimizing for representativeness, he was optimizing for a good story.
And I eventually decided that I was at least partially wrong. I still think there's some bias that comes when you form impressions about the world from fiction, as we intuitively do. But I also think I was significantly wrong, because what fiction often does is shine a light on things you already kind of knew but hadn't been thinking about. And so the updates that you're making about the world can in fact be justified; you just hadn't put together the pieces that you already had until the fiction highlighted them.
Yeah, I think that's right. And I think the toy example that I sometimes give people, just to have it be easier to think about, is a case of deductive reasoning. So you might know P, and know if P then Q, and just hadn't yet derived the conclusion Q that follows. Right. You sort of had those two pieces of information, but they were isolated in the way that you represented them in your mind. And then from your armchair, or maybe in the course of reading a novel, you put those together and you derive
Q. And in that case, presumably it is justified. It's justified because it's a good deductive argument. It's just that the process that led you to that conclusion didn't involve a new piece that wasn't already in your head. But I think there are harder cases to think about. So I don't know if you've maybe thought about this in the context of fiction, but a lot of scientific models are themselves, in some sense, fictional. We make lots of idealizing assumptions that we know are not true.
We assume that something is a perfect vacuum, or that something has no friction, or we assume that someone has infinite time, whatever the assumptions are that we built into our model. So in some sense, we know that they are false. And yet there are cases in the scientific process where it seems like we use those idealized, fictionalized models to draw inferences that we hope tell us something about the real world.
So I think there are these more subtle cases where maybe we can use, as an intermediate step, this sort of tool, this model that is in some sense not true or accurate, but that nonetheless helps us in drawing inferences, right? Yeah, I would guess that is a trickier or more complicated case, because it's not purporting to be data. It's a new framework that we didn't have before, one that is useful but not true, but is helping us interpret the data we do have.
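This point about useful-but-false models can be made concrete with a toy sketch (an invented illustration, not an example from the episode): an idealized free-fall model that deliberately assumes away air resistance, yet still yields usable predictions.

```python
import math

G = 9.81  # m/s^2, gravitational acceleration near Earth's surface

def fall_time(height_m):
    """Idealized model: time to fall height_m meters in a perfect vacuum,
    t = sqrt(2h / g). The no-air-resistance assumption is knowingly false,
    yet the model still supports useful inference about real objects."""
    return math.sqrt(2 * height_m / G)

# Predicted fall time from 20 m; a real object with drag takes a bit longer.
print(round(fall_time(20.0), 2))  # 2.02
```

The model is "fictional" in exactly the sense discussed here: we know the vacuum assumption is false, but the inferences it licenses still track the real world well enough to be useful.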
But it's complicated. Yeah. One thing I wanted to ask about explanation for the purpose of understanding was why that works. It feels sort of intuitive that of course you understand something better if you explain it to people, but I could imagine different mechanisms for that. And almost certainly there are multiple mechanisms that operate in parallel. So I'll tell you some of the ones that there's good evidence for. One thing that happens when you explain is that you appreciate what you maybe didn't know as well as you thought you did.
So if there's a big gap in your understanding, then trying to explain might bring that to light. Right. So in some sense, what you're learning is what you didn't know. Right. And that's one thing we saw happening in our tutoring sessions. Yeah. And so that might then allow you to ask the right questions, to seek the right information, and so on. And some of the time, that might allow you to then draw the right inference. In the course of explaining to somebody, you sort of realize: you thought you knew how to get from step two to step three.
But you get to that point in your explanation and you're like, oh, I don't actually know how you get to step three from step two. But maybe, having drawn attention to that, now you can actually figure it out on your own. Right. So sometimes you can identify the gaps, and sometimes you can fill the gaps. Right. So that seems to be one important thing you're doing. Another thing is that you seem to be integrating whatever you're trying to explain with the prior beliefs that you already had. You're sort of trying to make sense of it in light of what you already know.
And so rather than just having the thing that you're learning be isolated, it's now more usefully integrated with your beliefs. Another part which seems to be important, and this is the part that my own research has focused on the most, is that I think one of the key things that happens when you try to explain is that you try to relate the thing that you're trying to explain to some broader pattern or regularity. Part of what gives us a sense that you understand something, now that it's explained, is that you feel like, oh, OK, now I see how this fits into this more general explanatory pattern.
It fits this broader generalization; it's an instance of something that makes sense to me. And I think part of the process of doing that makes you move away from the particular, idiosyncratic things about this example, and focus on the things that are actually generalizable, useful, more general-purpose pieces of information. So by virtue of doing that, you're now going to be in a better position to, for example, generalize to new cases. This is one of the things that you see in the educational research.
If you have people explain to themselves, for example, how to solve a particular word problem, and then you have a sort of test later where they have to apply the same mathematical principle, but it looks really different. Maybe the first case involved how quickly you could pick berries of different types from bushes, and the second case involves building things in a factory. So superficially, they're very different, but maybe the actual formula that you need to apply to solve the problems is very similar.
And what you see is that the people who explained are more likely to be able to generalize from these cases to the new cases. And I think part of that is because what you do when you explain is, you don't just focus on the fact that it's about berries and bushes and so on. You sort of relate it to some more general principle, which makes the principle a little bit more explicit, a little bit more accessible.
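To make the berries-and-factory point concrete, here is a made-up sketch (not the actual study materials): two superficially different word problems that share one underlying principle, in this case a simple rate formula.

```python
def rate(amount, time):
    """The shared underlying principle: rate = total amount / time.
    Self-explanation tends to surface this formula, rather than the
    surface details (berries vs. widgets)."""
    return amount / time

# Cover story 1: picking 120 berries from bushes in 3 hours.
berries_per_hour = rate(120, 3)

# Cover story 2: a factory assembling 200 widgets in 5 hours.
widgets_per_hour = rate(200, 5)

print(berries_per_hour, widgets_per_hour)  # 40.0 40.0
```

A learner who explained the first problem in terms of the general rate principle is better placed to recognize that the factory problem is the same problem in disguise.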
And that's not something that you would do just naturally, intuitively, unless you were sort of prompted to explain what's happening? I mean, what we know from the experimental literature is that it makes a difference when you prompt people to explain. So there's something that they're not all doing spontaneously. Now, that doesn't mean that there aren't other routes to getting there, but it does suggest that it's not something people always do naturally. And I can give you another example, which comes from these studies with children.
So a lot of people are familiar with stories that involve a moral of the story. You read a story to a child, like Aesop's fables, and superficially it's all about, you know... Do you want an example from children's literature? There's a cartoon about Clifford the Big Red Dog, and there's a three-legged dog, and the three-legged dog wants to play with them. And initially they exclude the three-legged dog, and they come to realize by the end of the story that it's fine to play with the three-legged dog, that a dog like that is just another dog.
And so clearly, the moral of the story there is something about social exclusion. But if you ask children what they think the moral is, it's that you should be nice to three-legged dogs.
Oh, that's adorable. So this is reminding me of the question that you posed at the beginning of the episode, about why explanation would be necessary, or why it would have evolved in the first place, above and beyond just prediction. Like, let's say there's an animal on the savanna. We probably have some instinctive sense of whether the animal's dangerous, and there's just this sort of intuitive, pattern-recognizing prediction algorithm that's producing this prediction.
And maybe it's using things like, is the animal large, or does it seem confident or not? Like, maybe large and confident animals are more likely to be predators than not. But it's not clear why we would need to have conscious access, why we would need to be able to explain what the factors are that are causing that animal to seem dangerous to us. Why couldn't we just have evolved fear of animals that have whatever properties the learning algorithm in our brain has determined are associated with danger?
Yeah. So if you've got the prediction, why do you need the explanation? Yeah. And, oh sorry, the reason that I brought this up now is that all these benefits of explaining something to someone else are making me think that maybe there was this adaptation that happened when humans started evolving language, and the social complexity of human tribes started to go up, where individuals were able to sort of exploit the social environment they were in, either for the tribe's benefit or for their own benefit.
And the context for this suggestion is something I'm sure you've heard of, and that I did an episode on. But for those listeners who aren't familiar with it, the argumentative theory of reason is a theory that says that our capacity for conscious, deliberate reasoning evolved not to help us figure out what's true about the world, but to help us justify our beliefs to our fellow tribespeople. So, basically, the point being: reason didn't evolve to help us make decisions, it evolved to help us win arguments.
And so maybe there's something similar that's true of explanation. To take the direct analogy from the argumentative theory of reason: maybe we'd be fine with just using prediction if we were on our own, but in order to convince the rest of our tribe that we should be afraid of this animal, we have to be able to explain why we made the prediction that we did. Or, you know, I could think of other, less direct analogies as well. Like, maybe I end up better at avoiding dangerous animals if I try to explain to my fellow tribespeople why these animals seem dangerous to me.
And that's not only a benefit to me, it could also benefit my tribespeople, but there would be this, like, separate mechanism as well. Yeah, I think there are two parts to your question, and I think both are related. So one is: what is the explanation doing at all, above and beyond the prediction, right? And then: why would it be this explicit, verbal, conscious sort of process? Right.
So let me say something about the first part first. One reason why I think explanation may be particularly powerful, even if you ultimately only care about prediction, is that it might be that by trying to explain something, you figure out how it is that you should generalize what you've noticed to new cases. This is much easier to think about in a concrete case, so I'll give you an example that actually comes from an experiment that we did. The cover story for our participants in the study is that you are the assistant to the director of a museum, and you're just collecting all sorts of data about the museum.
You have these giant sheets and sheets of different observations that have been made about people who come to the museum and what they do and so on. And one of the things that you notice is this really strong correlation between having visited the portrait gallery, for a particular visitor, and having made an optional donation on the way out of the museum. OK, so there's just this correlation, there's evidence of some kind of a relationship here. Now, from just what I told you, if you had to try to predict what might be the case in other contexts that are similar, it doesn't seem like you have a lot to go on.
Right. Like, for example, would you predict that people who go to the sculpture garden are also likely to make an optional donation? Would you predict that people who go to the portrait gallery are also likely to give money to somebody asking for money outside the museum? It's really hard to know, right? OK, now I'm going to give you an explanation for this relationship. And the explanation is that when you are surrounded by portraits of watchful others, that triggers mechanisms that we have that have to do with social obligations and our reputation and so on.
And by having activated those kinds of mechanisms, where people are now thinking about themselves as social creatures with social obligations, being observed by others, and so on, they're more likely to engage in the social behavior of giving money. So knowing that, assuming that's true, then I can say, in response to your question about the sculpture garden: well, if the sculpture garden has sculptures of people, then yes, I would expect the same effect.
But if not, then no. Exactly, exactly. So what did the explanation get you there? Well, one of the things that it did, by having a sort of mechanistic explanation that relates those two pieces of information, is it tells you something about how you should generalize that relationship from that case to other cases. It seems like this kind of transfer learning is something that our brains do intuitively. It's like built into the, you know, I keep wanting to say machine learning algorithm, but it's not machine learning, it's a human learning algorithm, right?
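The museum example can be sketched in code (a hypothetical illustration; the field names are invented): the proposed mechanism, depictions of watchful others triggering social-obligation thinking, tells you which new cases the correlation should generalize to, where the bare correlation alone would not.

```python
def predicts_donation(exhibit):
    """Under the proposed mechanism, what matters is whether the exhibit
    depicts watchful others, not whether it is a portrait gallery per se."""
    return exhibit["depicts_people"]

exhibits = [
    {"name": "portrait gallery", "depicts_people": True},
    {"name": "abstract sculpture garden", "depicts_people": False},
    {"name": "sculpture garden with human figures", "depicts_people": True},
]

# The mechanism, not the surface category, drives the predicted generalization.
for exhibit in exhibits:
    print(exhibit["name"], "->", predicts_donation(exhibit))
```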
Like, you know, if a baby pushes a block off a table and it falls to the floor, the baby can generalize from that, apparently, to expect that if he pushes a cup off the table, it will fall to the floor. So we do have some capacity to do that. There's no question that we generalize in all sorts of ways. But the thought is that maybe the process we go through of trying to generate explanations is one of the tools in our generalization toolbox.
And maybe it's one that's especially useful in cases where the generalization that you need to make relies on causal mechanisms or complicated underlying principles. Because we know that pretty much all non-human animals can do a fair amount of generalization, right? They learn that if this reddish thing is good to eat, then this slightly different reddish thing is also good to eat. Right. And they're presumably not using explicit verbal reasoning. That's right.
So the claim is definitely not that explanation is necessary for all kinds of generalization, but rather that, for humans, it seems to be a particularly powerful way to engage in certain kinds of generalization. So that's the first part of your question. But then, why would it be this explicit verbal process? And there, I do think that what you're pointing to, about it being this sort of social, communicative activity, might be a really important part of that.
I would expect that we would see different effects, and we should expect different mechanisms, in cases where the explanation someone's giving is something that they're figuring out on the spot, versus cases where someone already believes something and they're trying to explain to someone else why they believe it. Has anyone looked at that? So there is a distinction between explaining why something is the case versus why you believe something to be the case, right? Those are different kinds of explanations.
But I think what you're pointing at is more a contrast between a case where someone is maybe coming to understand something themselves, on the fly, as they explain, versus a case where maybe I'm the teacher, I already understand the material well, and I'm just thinking about the best way to present it to you, given certain assumptions about what you are like as a learner. Is that the contrast you imagine?
Yeah, I guess I'm imagining that we should expect explanations to be more accurate, or to track more with accurate models of the world, in cases where we're figuring it out, where we're trying to figure out what's happening to the extent that we can give an explanation. Whereas in other cases, where I already believe that the animal is dangerous and I'm trying to explain to my fellow tribespeople why that's true, maybe we shouldn't really expect the explanations to be accurate so much as just compelling. And this is basically the argumentative theory of reason, I suppose. These just seem like two different kinds of explanations that we should expect to work differently, because they're serving different purposes, is what I'm saying.
I mean, to the extent that different explanations do have different purposes, I agree that you might expect them to be more or less accurate, or more or less persuasive, and so on. I don't know of any research that gives us clear boundaries for where they should differ, right, that these are the types of explanations that would have these properties, and these would be different. What we do know, which I think is not too surprising, is that if what you're explaining is wrong, then it's not typically going to be so beneficial.
So if you're going through a math problem that you've solved yourself, and are explaining out loud why that was the solution, if you got it right, then you might get some extra benefit from having gone through the process of explaining why that's right. That's going to reinforce the correct principle and so on. But if you got it wrong, and you go through the process of explaining how you got there without recognizing that your answer is wrong, then the explanation might just be reinforcing the mistake that you started out with.
So there's a way in which explanation might actually be extra beneficial when you're on the right track and you're getting things mostly right. But in cases where you're getting things wrong, it might actually reinforce or entrench those misconceptions or false beliefs and so on. So that's one of the cases where I think explanation can actually be dangerous, or go wrong, or lead us astray.
I know you've looked into several ways in which explanation can have a downside. What are some of the others?
So I think one of the ones that I found most interesting in my research comes from the idea that one of the things you're doing when you're looking for an explanation is looking for a good explanation. You want a satisfying explanation. You don't just want like any old explanation. You want a really good one. And humans are really picky about what it is that makes something a good explanation. We like explanations to be beautiful. We like them to be elegant.
And when you try to cash that out, usually it means they have to be simple in some sense, and saying what "simple" means is not at all simple. Typically we want them to be broad, and so on. So there are these characteristics that we're looking for in a good or satisfying explanation. Broad in the sense that the explanation doesn't just explain the case at hand, but also helps you understand lots of other cases? Exactly, exactly. So, for example, if you want to be able to explain why your friend got angry on this particular occasion, you really want that explanation to be the same one that you'd appeal to in similar sorts of situations.
People in general get angry when you kick their chair out from under them. Exactly. Something that's very idiosyncratic and just applies to this particular case might be a little less satisfying. So we're typically looking for simplicity and breadth. And what I've shown in some of my work is that when there is something simple and broad to be learned or discovered, then engaging in this active process of trying to find an explanation can be beneficial. It might lead you to look beyond the obvious to find a more subtle underlying pattern.
But what about a case where you're trying to look for an explanation and there just is no satisfying explanation? The world is just messy and complicated, and a lot of it is random, or there is some order, but it isn't an easy, simple-to-explain order. I think both of those are real cases. One is the case where it's actually genuinely random, and another is a case where maybe there is some underlying pattern or regularity, but it only accounts for, say, seventy-eight percent of cases.
And the other cases seem totally unsystematic. Right. So in those kinds of cases, I don't think explanation is always going to be beneficial, because in some ways what you have to do is step back from the idea that you're going to find a really elegant, beautiful explanation, and accept some amount of messiness. And our data suggest that in those cases, looking for an explanation can actually sometimes impair learning.
Huh. This reminds me a little bit of what I've read about the phenomenon of verbal overshadowing, which I'm going to try to explain, and you can correct me after I try. Your ability to accurately recall what an experience was like, or what a person looked like, say, is impaired if, after having had that experience or seen that person, you were asked to give a verbal description. The process of generating the verbal description seems to replace or overshadow some of your intuitive memory, and you're less likely to be able to recognize the person again after having given that description of them.
Does that seem similar? There's one part of it that I think is similar. But first, let me say the way in which I think it's dissimilar. OK. In all of our experiments, we compare participants who are prompted to explain to participants who do something else that is also verbal. They might be thinking aloud, they might be writing down their thoughts, they might be describing; they're also doing something verbal. So at least for our experiments, and for a lot of the other studies that have been done in the literature, we know that the effects we're tying to explanation have to do with something about explaining, and not just about using language.
That's right. So that's, I think, an important dissimilarity from verbal overshadowing. But I think there is an important similarity. One way to understand what's going on in a lot of the verbal overshadowing cases is that you have this very rich perceptual experience, whether it's what the person looked like or what the wine tasted like (another case where you get these verbalization effects), and language just does not supply you with fine-grained enough categories to represent it very well.
So when you try to shoehorn it into your language, you're losing a lot of the richness that you had initially. And probably adding some bias as well, because you have to pick a word that's not only not perfectly capturing it, but is kind of wrong; it's just the best you could do. That's right, it might be both imperfect and coarse-grained. And then you see the downstream consequences of having taken this very rich aspect of your experience and changed it into a representation that's coarser and perhaps not well aligned to it.
So what I think is similar about the explanation case is that there's the data, or the world, and it has certain characteristics or structure to it. And when you're trying to explain, if what you're doing is looking for a good explanation, a really satisfying explanation, you're trying to shoehorn the data into the pattern of a good explanation: simple and broad and so on. To the extent that there really is a simple, broad regularity or pattern there to discover, that might be a pretty effective process.
But to the extent there's not, you're going to be slightly distorting the data to try to make it fit better into that pattern. But in order for the process of explanation to make us worse off, it would have to be the case that we did have some ability to intuitively understand whatever order there was in that messy pattern and to make somewhat accurate predictions. That's right. So to make this concrete: in one version of one of our experiments, people basically have to learn which individuals are very likely to give to charity and which are not.
There are, in fact, I believe it's twelve people they have to learn about, and six of them tend to give to charity and six of them don't. Yeah. So they just have to learn, for those twelve, which ones give to charity. Now, you could just memorize the features of those twelve. You could just remember: OK, Julia, who has brown hair, she's one of the people who gives to charity.
And Bob, who has blond hair, doesn't. Just memorize these idiosyncratic features. Then you don't really have an explanation for what it is that makes somebody give to charity or not, but for that set of twelve, you can classify people perfectly. OK. Now, we've also built in some regularities. We also give them information about people's ages, and about whether people have personality characteristics that are more introverted or extroverted, and the way these predict giving to charity differs across participants.
I'll just give you one example. One participant might get data that's consistent with the claim that the younger people who are more extroverted are the ones who tend to give to charity. So there's a pattern there that they could extract. Now, if that pattern is perfect, so that you could use it to classify all twelve, then the explainers do just as well as people in a control condition. But shouldn't the explainers do better in that case?
They do not do significantly better in that case. They do numerically better, but when you analyze it, it's statistically insignificant. OK. But now suppose you have a case where, of the twelve that you're studying, for ten of the twelve it's true that the younger people give more to charity than the older people, but there are these two exceptions. In a case like that, the people who are prompted to explain as they learn actually do worse than the people who are not prompted to explain.
So imagine what it's like to be a participant in this task. You see a particular person who you've seen before, but maybe you don't remember whether they were associated with giving to charity or not. You see that this person is young. You think: OK, I'm going to guess they give to charity. And you get the feedback: no, this person doesn't give to charity. If you're in the explanation condition, you have to take a few moments to explain why you think it is that this person doesn't give to charity.
And if you're in the control condition, you take a few moments to write out your thoughts as you study the fact that this person doesn't give to charity, and so on. Got it. And what we seem to be finding is that explainers are just really reluctant to give up on there being some sort of generalizable pattern that applies to everything. So they keep perseverating, looking for a pattern. They think: OK, it's not about age, maybe it's about where they're from.
And if it's not about where they're from, maybe it has to do with their college major. They try to find all these patterns in the information they have, whereas the people in the other conditions are more willing, at some point, to either not search in the first place or abandon the search for some sort of perfect regularity, and just settle for: OK, this person is one of the ones who gives to charity, and this person doesn't give to charity.
And I don't have a broad underlying principle to explain it, but I can tell you reliably, for these twelve people, who does what. These results surprise me a little bit, just because I wouldn't expect there to be perfect relationships in the world at all. So if imperfect order in the world causes explanation to fail us, then I wouldn't have expected explanation to be evolutionarily useful at all. That's right, that's right. Pretty much any regularity in the world involves exceptions, if only because there's noise and so on.
So we've done a series of follow-up studies where we've tried to figure out to what extent there's really something special about there being a perfect, exceptionless pattern, or whether just accounting for more cases, accounting for more of the variance, is good enough.
And so far, the data actually suggest that people really, really like the perfect, exceptionless case. It's like a superstimulus: so much better than what we evolved to crave. That's right, that's right. And for the same reasons you suggest, I find it surprising. For example, in psychological research, we can never account for one hundred percent of the variance. That's preposterous. That would just never happen in a study.
Right. If you can account for most of the cases, that's phenomenal for most social scientific research. So I think there are a few things that could be going on. To some extent it's an open empirical question; we don't know the details yet. But the other thing is that I think we're really, really good at explaining away exceptions, and so we can preserve the sense that there really is a perfect regularity despite them. So if you're somebody who believes that people who have a particular horoscope have a particular personality characteristic, but then you encounter an exception, what do you do?
You could say: oh, I guess I was wrong about that association. Or you could come up with some reason why that person is an exception, and sort of shield your belief that there is this perfect relationship from that potential counterexample.
And I think people do that a lot. Oh, I totally agree, they do that a lot. But that still makes us inaccurate, right? If explanation is supposed to help us understand and control the world around us, then if we're making these excuses that aren't actually justified, it doesn't seem like we're better off. Unless the purpose of explanation is to be compelling to our fellow tribespeople, which the argumentative theory of reasoning might imply, in which case maybe we are fine as long as we can rationalize why the exception makes sense.
Yeah, there's another argument one can give, and on alternate days of the week I find it very compelling. Today I'll go with it. The idea basically has to do with something that comes up in curve fitting, when you're trying to build a model to fit data, which is that you don't want to overfit the noise. So if you have some data points, you're trying to figure out the best way to characterize those data points.
You want to capture the underlying signal; you don't want to fit the noise. And so you could think about what explanation is doing in cases like this as a kind of regularization process: it's trying to prevent overfitting. Now, part of the reason I only find this compelling on alternate days of the week is that, to the extent we do it, we almost certainly do it more than we should. I think it interacts badly with other kinds of cognitive biases that do lead to erroneous thinking.
So, for example, I'm guessing at some point on your podcast you've covered things like motivated reasoning and confirmation bias. These are processes that lead us to interpret things in the way that's most favorable to what we want to believe or what we already believe. And I can imagine explanation interacting really poorly with those kinds of processes, where it might give you extra tools, in some way, for holding onto beliefs that you want to believe. So it's definitely not always going to be beneficial if it's playing this role of allowing you to ignore some of the noise in favor of broad generalizations.
On the other hand, there might be cases where it's actually useful in preventing you from overfitting the noise, focusing too much on any single data point. Yeah, that's right, and revising your models too much.
Yes, focusing on: what's the single big thing going on here that allows me to predict most things, rather than getting into the weeds of the small variations which maybe don't generalize much from case to case. Right. That makes sense. Well, since we've started talking about potential justifications for prioritizing simple theories over complicated theories, I have a couple of ideas that I want to run by you. They're not my ideas, and I'm confused about how convinced to be by them.
One of them comes from a philosopher at Carnegie Mellon named Kevin Kelly. He's basically been working on trying to see if there is a formal way to state Ockham's razor, the principle "prioritize simple theories," such that it is actually provably true. And one way that he's carried it out is in saying: look, it's not that simple theories or simple models are more likely to be true. It's rather that having a policy of reaching for simple theories, like the simplest theory you can find that explains the data...
Having that policy over time allows you to reach the true theory more quickly or more efficiently, assuming, of course, that you're discarding those simple theories as new data invalidate them and then reaching for the next simplest theory that explains your current data. I think he calls it the Ockham Efficiency Theorem, which is an interesting take. Before reading about that, I hadn't considered that there are different ways of striving for truth.
One is: I want to be as accurate as possible right now. Another is: I want to maximize my ability to converge on accuracy quickly or efficiently. And I think there are others as well. It seems like you're familiar with Kelly's work. What do you think about it?
I think it's really cool. I think the general idea you just highlighted, that maybe we should favor simpler explanations not because the world is simple, but because it's a good policy for getting at the world, is a really interesting and powerful idea. Yeah. Now, for Kelly to prove that, he has to come up with a very precise way of articulating what simplicity means, and the way he does so, I think, may or may not map onto what we intuitively mean by simplicity when we talk about explanations in everyday cases.
I think that's an open question. Yeah, I agree, I glossed over that quite a bit. But it still feels like there's a colloquial version of this that's kind of true, which I would put as the expression "strong beliefs, weakly held," or "strong views, weakly held." It's a mantra that says: instead of having vague, uncertain views about things, you should have bold views that are likely to be wrong, but at least if you state them clearly enough that the world can disprove them for you, you're going to become more accurate.
Yeah, actually, the philosopher Popper also said something along those lines. He argued for simplicity, again not on the grounds that the world is likely to be simple, but that a simple theory is easier to falsify. And so if what you want to be doing to make scientific progress is articulating theories where it's clear how you would go about testing them, which for him means trying to falsify them, then it's going to be good to formulate those kinds of theories.
So in general, I really like this sort of approach. I've actually argued, with a philosopher co-author, for a position that we call "explaining for the best inference," and that's meant to contrast with what's called "inference to the best explanation." Inference to the best explanation is a formal term that was introduced in the philosophy literature, but I think it's made its way outside of philosophy, and it's probably reasonably well known.
It's basically the idea that if there's one hypothesis that explains the data better than any other hypothesis, then you should infer that that hypothesis is true. It's a pretty intuitive idea. Yeah, it's a pretty intuitive idea, although a lot depends on how you flesh out what it means to be "the best," right? Exactly. Is it the model that makes the data the least surprising? That's what a Bayesian might do.
That's right. So the attempts that exist to give a formal articulation of what people do when they evaluate explanations suggest that that's not quite what they're doing. They're doing something more like figuring out how much evidence the observations provide for some hypothesis, rather than figuring out the posterior probability of the hypothesis.
So they're sort of ignoring the base rate? I should say, I'm basing this on about three studies that have been done, so this is not a huge literature. But those three studies suggest that when people are making explanation judgments, they are less sensitive to base rates, or priors, than if they were calculating posterior probabilities. But it seems like a reasonably intuitive idea. And the shift that I've argued for, with my co-author Daniel Wilkenfeld, is from inference to the best explanation to explaining for the best inference. The idea, as in the Kelly example, is that maybe the way to think about it is not that we favor particular explanations because those explanations are the most likely to be true.
Rather, the reason we have certain explanatory practices is that those practices are, downstream, most likely to lead us to true or useful inferences. Right. What that allows in are cases where, if you imagine a long process of learning or scientific inquiry, the process of engaging in explanation doesn't get you to the true thing immediately, but maybe it gets you to formulate the kinds of hypotheses, do the kinds of experiments, and collect the kinds of data that downstream are going to have beneficial consequences.
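The base-rate point a few turns back can be made concrete with a toy Bayesian calculation. The numbers are entirely illustrative (my own, not from the studies mentioned): a hypothesis can make the observation very likely, and so feel like a great explanation, while still being improbable once the prior is taken into account.

```python
# Toy illustration: judging an explanation by how well it accounts
# for the data (likelihood) vs. how probable it actually is (posterior).

def posterior(prior, likelihood, likelihood_alt):
    """P(H | E) via Bayes' rule, assuming H and not-H are exhaustive."""
    evidence = prior * likelihood + (1 - prior) * likelihood_alt
    return prior * likelihood / evidence

# A rare condition X explains the symptom very well: P(symptom | X) = 0.95,
# but X itself is rare: P(X) = 0.01. The mundane alternative also
# produces the symptom sometimes: P(symptom | not-X) = 0.10.
p = posterior(prior=0.01, likelihood=0.95, likelihood_alt=0.10)

print("P(symptom | X) = 0.95   (how well the hypothesis explains the data)")
print(f"P(X | symptom) = {p:.3f}  (how probable the hypothesis actually is)")
# Judging by likelihood alone ignores the base rate; the posterior
# here comes out under 10%.
```

This is the gap Tania describes: an "explanation judgment" that tracks the likelihood will rate the rare-condition story highly, while a posterior calculation, which weights in the prior, will not.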
Maybe now's a good point to talk about explanatory vices. We've been talking, without using the phrase, about explanatory virtues: features of explanations that make them more desirable, explanations we want to reach for, that are more likely to lead us to the truth, and so on. But you've also written about explanatory vices, features that make explanations more appealing without actually making them more useful or true.
So one example of an explanatory vice that I like: scientific-sounding but irrelevant gibberish apparently makes people more likely to consider something a good explanation. What's happening there? Right, there are a few explanations for what could be going on in a case like that. The uncharitable one is to say that when people are given scientific gibberish, their ability to engage in good critical reasoning just goes out the window.
And so they're doing something pretty dumb in those cases. They're thinking: well, it seems scientific, and scientific things are more accurate than unscientific things. I think even the way you just described it is actually slightly more charitable, because that's not a crazy inference, right? That's true. I guess scientific claims are more likely to be accurate than unscientific ones, on average. Yeah.
So the least charitable interpretation is something like: in the face of science, your ability to engage in real thinking shuts down. And then there are increasingly more charitable ones. I'll give you what I think is the other extreme, the most charitable interpretation, which is: you just gave me an explanation. I'm really good at evaluating explanations. I realize that there's a gap in the explanation you gave me; you didn't really give me a good causal mechanism.
You didn't really give me a good generalization as part of the explanation. But you also said this scientific-sounding thing, and I didn't totally understand it. I'm going to assume that because you're an authority and an expert on this topic, that thing I didn't entirely understand is what fills the gap and makes it a good explanation. So I'm basically going to defer to the authority that I think you have, and assume that the doctor, say, is an expert on this stuff, even though I didn't totally understand it.
But presumably when the doctor said "blah blah amygdala blah blah," that actually was part of the explanation. Right. So in that case, I think what people are doing is applying a heuristic that's reasonable in a lot of cases but isn't perfect: if I have reason to think that you're an authority in this domain, that you know what you're talking about, then even though I didn't understand it perfectly...
Yeah, I'm going to count that as good enough.
I guess it seems like a little bit of question substitution: people were asked, "is this a good explanation?" and they're answering instead, "how likely is this to be correct?"
Yeah, I think that could be true. That's right. I mean, we know that in a lot of cases people are doing something more sophisticated than just asking "is this true?" of an explanation. For example, if I ask you why inflation occurs under these circumstances, and you say "because there are trees," well, there are trees, I know that's true. But people are not going to accept just anything true as an explanation.
So they're doing something a few notches more sophisticated. But I think the idea is that you've asked them "how good is this explanation?" and what they're answering is "how good do I think it would be to somebody who could understand it?" They are answering a different question, but it's probably a reasonable substitution in lots of everyday cases. So I can understand why people would function well enough a lot of the time doing that.
And I think that is the more charitable way to understand this phenomenon. But it's also something that can go wrong in lots of cases. For example, people have looked at the effects of neuroscientific jargon in expert testimony in legal cases. And there, the evidence does suggest that people are often very swayed by neuroscientific-sounding explanations for people's behavior, in a way that we could argue about, whether that's good or bad for the legal system.
But it has an influence. It certainly seems like it can be bad. I don't know whether it in fact is, but it can be. That's right.
That's right. So that's a case where it seems like it might have important real-world consequences whether people are able to engage in good critical evaluation of explanations when they involve neuroscientific jargon. Right.
It also seems like there's an aspect of sciencey explanations that we find persuasive above and beyond their authoritativeness. I'm thinking of all those pop science articles that say, "we have the scientific explanation for love: it's that this region of the brain activates," or something. And that kind of feels like you've explained love, like you have this physical mechanism that causes the subjective experience of love. But it doesn't really explain anything, in that we don't really know what causes that region of the brain to activate.
And zooming out, we don't know why love would have evolved, or maybe we have other explanations for it. But I'm just saying, knowing which part of the brain is active when love is being experienced is not really much of an explanation. Yet I think a lot of people feel that because there's a mechanism, it counts as a good explanation. And that doesn't just seem like appealing to authority; that seems like a deeper, more philosophical confusion.
Yeah, I think you're right. I think there's a lot going on in cases like that, and we've actually done a few different studies that relate to it. I'll tell you about one of them. What motivated one of the studies was the sense, not so much that giving a brain region seems like an explanation, but that giving a category or a name for a phenomenon seems to provide an explanation. That's right.
Feynman wrote about this, I think. Kids will often feel like, once you've given the name of a bird, the kid now understands all there is to know about the bird: "oh, that's a robin, now I get it." I hope this is correct and I'm not mixing him up with someone else. But he was instead arguing for explaining the bird's behavior, how it differs from other birds, instead of just giving its name.
OK, interesting. So maybe we should have called our phenomenon the Feynman effect. We called it the dormitive virtue effect, after the famous passage from Molière, where the question is why a particular drug makes you sleepy: because it has a dormitive virtue. A virtue of making you sleepy. Exactly right. What all of these explanations we're talking about have in common, I think, is the idea that you've pointed to something, whether it's a brain region, a category, a label, and it seems to be doing some explanatory work for people that maybe it shouldn't.
So what we did in our study is we gave people a little vignette where we described a person's behavior and then explained it in a way that either did or did not include a category label. For example: this person steals objects from stores that aren't particularly valuable. Why do they do this? Because they have a tendency to steal objects from stores; or because they have such-and-such condition, a made-up name for a tendency to steal objects from stores. Oh, I bet the second one was even more satisfying.
It was much more satisfying. We just added this made-up name for a condition, and that made it better. Right. But an interesting question is why. What's going on when you give people that label that makes them find it more satisfying? What our subsequent studies suggested is that when you give a label, people seem to assume that there's some underlying cause, something about the person, that produces this behavior.
So in some ways, the label is serving as a placeholder for a causal mechanism, the same way you might think labeling a brain region does. And in lots of cases it's reasonable that giving a causal mechanism does explain why something was the case. So I think what's going on is that people are accepting an indication that there is a causal mechanism in place of the explanation itself. But even that is a leap, an assumption on their part. It's not like the flu, where if you have the flu, there is a virus in your body that's causing the symptoms that we call the flu.
Whereas if you have narcissistic personality disorder, that's just a name we've given to the set of symptoms you're displaying. We don't necessarily know that there's some underlying thing in your body or your brain that's causing it. That's right. But people tend to think it's like the flu, and to the extent they find it explanatory, it seems to be because they assume it's like that. Interesting. I should say, just to be clear, it is not in fact like that.
And most clinicians do not think it is, in fact. No. But our studies with laypeople, who have no special expertise related to mental illness, do suggest that you get some variation in the extent to which people think of it more like the flu, where there's a single underlying cause or maybe a set of underlying causes. And to the extent you do think about it like that, you're more likely to find an explanation like "why does that person experience persistent sadness?"
"Because they have depression" satisfying, to the extent that you're actually thinking about depression as something that involves an underlying cause responsible for the persistent sadness, as opposed to thinking that's just what depression means: having those symptoms.
You're so good at steelmanning these apparently irrational tendencies, so I'm going to throw one at you that seems especially hard to steelman. What about the finding that when you ask someone if you can cut in line in front of them at the copy machine, and your explanation is "can I cut in line in front of you, because I need to make copies?", they're much more likely to let you cut than if you just say "can I cut in line in front of you?"
And you've literally given them no additional information; of course you need to make copies, that's why you're in line at the copy machine. And yet somehow that's compelling. What's going on there?
I'm probably not going to be able to give you a rational explanation for that, but here are some attempts. One thing you're doing is giving people something in the form of an explanation; it still has some of the same structural properties as a good explanation. And it could be that, just for associative reasons, we get some amount of satisfaction from that. The other thing that I think could be going on in that case in particular is that...
...you at least acknowledge the need for an explanation. If I just cut in line without offering you something that even looks explanation-like, I've violated a social norm about the conditions under which it would be acceptable to do that. But if I at least say it, it's more like saying: "Can I cut in line? I have a good reason. I'm not telling you what the reason is, but I'm at least acknowledging that you deserve one." Which may be less about the information you're conveying and more about playing the social game.
Right. So that could be going on in that case. But I would say other cases of similarly, or almost equally, minimal explanations also seem to make people more satisfied, and I'm not sure that explanation works for all of them.
This has just become a joke line between me and my friends; every time we ask for a favor, we add "because I want to make copies." I guess the last explanatory vice I want to ask you about before we have to close is teleological explanations, which you've written about and which I think are so fascinating. You ask a child, "why does the sun provide light?" And they might say something like, "so that plants can grow."
And it's this weird, kind of confusing thing. It's almost like confusing the effect with the cause: the sun has this beneficial effect, therefore that must be the cause of the phenomenon. What is going on there?
There are a few different explanations. I think the first thing to say is that there are some domains where that's actually a completely reasonable way to explain things. So if you were trying to explain why a pen has a little clip, you'd say it's so that you can put it in your pocket. Right, because someone designed it that way. Exactly, exactly. So in that case it's totally fine to say "so that you can put it in your pocket," and that's a kind of shorthand that we all understand for something like: when it was designed, that part was put there with the intention that you'd be able to put it in your pocket.
The same goes for intentional behavior. Why did you go to the cupboard? So that I could get the chocolate. Right. I'm giving the effect of my action, but it makes perfect sense: I had the intention to get the chocolate, and the intention caused me to act in a particular way. So there are lots of cases where it looks like what we're doing is explaining something by appeal to its effects, but in fact you can give a totally reasonable account. Someone anticipated those effects, right?
Exactly, exactly. The cases that get trickier are evolutionary cases, which get tricky for lots of reasons that maybe we don't want to get into. But let's talk about cases like a child saying, "Why are there mountains? There are mountains for climbing." Right. OK, how do you make sense of that case? OK, so one possibility is that they have basically taken this mode of reasoning, which makes a whole lot of sense when you're reasoning about human artifacts or patterns of intentional human behavior, and they've just overgeneralized it to this case, maybe just because it's familiar. The strongest version of that says they've basically assumed the world is designed. It says people are creationists deep down and they're operating as if things are designed, and the reason they find this mode of explanation compelling is because it makes sense if you really assume a designer. I mean, if this phenomenon only occurs in children, or is strongest in children, maybe children just don't know that mountains weren't designed.
But if adults are reaching for teleological explanations, despite presumably knowing that the mountains weren't designed, if you were doing the experiment on non-creationist adults, then it's confusing. So I'll tell you what the literature suggests there. It suggests that if you take non-creationist adults, most of them will not accept that sort of explanation. But if you then make them respond really quickly, the kinds of errors they make are errors of saying that that type of explanation is right, rather than errors of saying that other types of explanations are wrong.
That's so interesting. So the person who's done most of this research is someone named Deborah Kelemen, and she and her colleagues have argued that maybe this kind of explanation is just sort of our cognitive default, our primitive, preferred form of explanation. That's why you see it in children, and that's why you see it in adults under speeded conditions. She and I have a paper from many years ago, when I was a graduate student, where we looked at patients with Alzheimer's disease, and they showed the same kind of pattern.
So it seems like one way to make sense of that is that that's sort of our default, and then if you have appropriate education, and you have appropriate beliefs about what actually led to mountains and so on, and the cognitive resources to perform an override...
That's right, exactly. Then you can override it. But in the absence of that knowledge and that ability to do the override, you're going to see it emerge. So that's one view, and Deborah Kelemen has been the strongest advocate of that type of view. There's another view, which I think, on most days, I'm actually more sympathetic to, and you will accuse me again of having a cheery, positive view of people as not so irrational.
I understand the tendency to try to steelman the apparent irrationality, but I'm a glass-half-full sort of person about rationality. I don't disagree about the half empty, but I kind of focus on the half full. So here's another way to make sense of it. If you look around the natural world, and the man-made, human-made world, there are some things that have a very, very good fit between some structure and some function.
So, for example, my glasses are very well suited to sit on top of my nose. There's a very good fit between the shape of my glasses and the shape of my nose and ears and so on, and that's probably not coincidental. So how do we make sense of the fact that there's a very good fit between the structure and its function? Well, maybe we assume that there was some kind of process of design, or evolution, or something like that, such that you end up with this good fit.
So a good fit between structure and function is going to be a really good cue that something is the sort of thing that supports one of these explanations. And in fact, for the case of my glasses, it's true that I can say, "Why are my glasses shaped like this? So that they fit on my nose in this particular way." But that cue is not perfect; you can get it wrong. You'd get it wrong if you said, "Why do we have noses? So that we can hold up our glasses."
Right. Like, the cue is tracking the fit between noses and glasses, but there you've got it wrong. And so the thought is: there's this really good cue that a teleological explanation is warranted, so you can sort of defeasibly make an inference that that kind of explanation is good. "Defeasibly," meaning you can make an inference, but it can be defeated. Oh, OK. So it's sort of like a cue that this is the case, but it's not a perfect cue.
There can be exceptions. So when I see a good fit between the structure and function of my nose and my glasses, I think, OK, a teleological explanation is good here, but I know extra stuff. And in this case the extra stuff tells me that it'd be a mistake to explain that I have a nose so that it fits my glasses. So if that's true, then what's going on in kids, and in adults under speeded conditions, is that they don't have the time or the resources or the cognitive abilities to factor in other information besides this single strong cue about structure-function fit.
And part of the reason I think that might be compelling is because even under speeded conditions, adults do not accept terrible teleological explanations.
So in cases where there isn't actually a good fit, cases where there's not a good fit, like "Why are there mountains? Because the sun is yellow." Yes, that's right. They will not do it under those conditions. Or, you know, "Why are there elephants? For holding down papers in the wind."
That's better, actually. Elephants could actually hold down papers in the wind, right? They're heavy. But that's not a good explanation for why there are elephants. That's right.
That's right. So I think the more charitable way to make sense of people's behavior in this case is that if you don't have the relevant prior beliefs, and if you don't have the cognitive resources to take other sources of information into account, then you're just going to go with structure-function fit, and you're going to infer that a teleological explanation is warranted. And that may, in fact, be a rational, bounded heuristic.
Like, given limited time and resources, this is the best explanation we can get? Yeah. I don't know that I want to go all the way to saying it's optimal, but I'd be willing to say it's a cue that is often right in lots of cases. "Optimal" goes a little bit stronger by saying you couldn't do better with other cues, and I'm not sure I want to go there. Yeah, I was just trying to define bounded rationality here.
Maybe I wasn't vague enough to make sense.
Excellent. Well, this is probably a good place to stop. Before we close, I want to invite you to give the Rationally Speaking pick of the episode: some book or article or blog that has influenced your thinking in some way. Maybe it changed your mind about something, or got you interested in the field that you're currently in. So what would your pick be? Can I cheat and give you two? You can.
People have cheated in the past and given me two or even three. Yes.
All right. Well, in terms of what got me into the field: I had a sort of circuitous path into cognitive science, and part of what started it was that I read a book by Steven Pinker called The Language Instinct, which I read as a high school student. I immediately found it incredibly fascinating, because it approached the topic of language, which I was interested in to the extent that a high schooler who likes English and language can be interested in it.
Right. But it did so really formally and rigorously. It seemed to me like it gave a glimpse of what it might look like to do something like a rigorous science of the mind, and I thought that was really fascinating. And in reading this book, when I must have been 16 or 17, he kept mentioning this Noam Chomsky guy.
And so I thought, you know, this sounds like someone kind of important that I should have heard of. So I went to the bookstore and bought a book by Noam Chomsky, and my criterion for choosing was that it was the shortest book. For those who don't know Noam Chomsky: he's done very influential work in linguistics, but also a lot of work in politics and political theory. So this could have gone very wrong.
I was wondering how this was going to turn out. It could have changed your whole life path, right? That's right. Depending on which Chomsky book you picked. That's right, I went with the language side, which was good for me. So I read this book by Chomsky where he kept talking about "the cognitive revolution," which sounded kind of interesting and important, and that sort of led me on this path of reading things related to cognitive science. So for me, that was an influential book, in giving me a glimpse early on of what an interdisciplinary science of the mind might look like, why it might be interesting, and so on.
So I would definitely still recommend it today, although in a lot of ways the field of linguistics and the study of language have changed since that book came out. The other thing that I would really recommend to listeners of this podcast is a resource that I find extremely useful, which is the Stanford Encyclopedia of Philosophy.
And part of the reason I think this is phenomenal is because it is a publicly available resource, so anyone can go online and find the Stanford Encyclopedia of Philosophy. What it provides are very clear, peer-reviewed summaries of topics in philosophy. So you get some of the best of Wikipedia and some of the best of the peer-review process in academia, because the entries are publicly accessible but, for the most part, written by leading scholars in the field.
And they have entries on all sorts of topics. I recently read the updated entry on what makes something pseudoscience, covering the philosophical literature on how you demarcate science from pseudoscience. But you also find lots of things about historical topics in philosophy, contemporary topics in philosophy, philosophy of modern science, and so on. So when I need to point students to the first thing to read in getting oriented to some topic that's been addressed in philosophy, that's usually the place I point them to.
Is it also open, like a crowdsourced project? No, no. So in that way, it is not like Wikipedia. Well, maybe that's why it has such high, rigorous quality. That's right. It goes through a peer-review process similar to the one you would see for a standard journal publication in academia, but it is publicly available. Excellent. Cool.
Well, we'll link to The Language Instinct, and we'll link to the Chomsky book as well. Or do you only want to recommend The Language Instinct?
You know, I'd have to figure out which one it was. I have it somewhere on one of my shelves, but I'd have to figure out which Chomsky book it was.
OK, well, we can always add it. All right. And we'll link to the Stanford Encyclopedia of Philosophy, as well as to your website and your blog. Actually, I didn't mention this in my introduction, but Tanya also blogs for NPR and Psychology Today.
That's right. The NPR blog is called 13.7: Cosmos and Culture, and it's a blog about the intersection of science and culture, broadly construed. I and four other academics blog there most days of the week, and I blog for Psychology Today as well.
OK, excellent. We'll link to all of that. Tanya, it's been such a pleasure having you on the show. Thank you so much for joining us. Thank you for inviting me. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.