[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Massimo Pigliucci, and with me, as always, is my co-host, Julia Galef. Julia, what are we going to talk about today?

[00:00:48]

Massimo, today we're going to talk about the science of applied rationality, which is a phrase that is near and dear to my heart, since I sort of came up with it. The name of the organization of which I am now the president is the Center for Applied Rationality, or CFAR for short. And so the science that we're going to talk about today is the science that inspired the founding of our organization. Basically, the idea...

[00:01:16]

Is there such a thing as misapplied rationality? Yes, I would argue that, yes. I think actually that's true. OK, go ahead.

[00:01:25]

The idea behind applied rationality is that cognitive science has learned a lot over the last few decades about cognitive biases, about the systematic errors the human brain commits when we try to reason or make decisions, and specifically what contexts or situations or environmental factors cause us to commit cognitive biases. And so there's been this wonderful proliferation of research in the field of heuristics and biases, about how our brains go wrong. But what there hasn't been nearly as much of is research on how to fix it, on how to debias ourselves.

[00:01:57]

And there definitely hasn't been anyone so far, until my organization, trying to take that research and use it to help people not make those mistakes in their own decisions about their career or their health or finances or personal relationships, and also the decisions that they make that affect society as a whole, like the decisions they make about who to vote for, how to treat other people, or where to donate their money, that sort of thing. So essentially, applied rationality means using what cognitive science has learned about the human brain to improve our decision making.

[00:02:32]

So wait a minute, because the first thing that I thought when I started reading about this is, well, don't the rest of us call this teaching? I mean, after all, isn't that the whole idea of teaching, let's say, about critical thinking or rationality or logical fallacies or something like that? Isn't that part of the idea?

[00:02:52]

Yeah, this is definitely an example of teaching. One of our core premises is that you can't reliably cause any kind of significant change in the way people reason and make decisions just by telling them about cognitive biases that they're vulnerable to, or telling them about principles of reasoning. You have to actually have more in-depth, intensive practice of good decision-making habits, and ideally on examples from the domain where you want people to be improving their decision making.

[00:03:26]

So in this case, on real life case studies. Right.

[00:03:30]

So let's talk about a different example. Well, actually, I suppose it's a related example, like gambling-related fallacies. Right. So the fact that people have sort of intuitively pretty bad, I suppose, intuitions about probabilistic thinking.

[00:03:51]

Right. Right. Like, if it's come up red three times so far, then we're due for... Exactly, that sort of stuff.

[00:03:58]

Now, I would think that, you know, that's a very well-known phenomenon. I mean, it's been known for a while that people are pretty bad at engaging in that kind of reasoning, which, of course, is one of the reasons why casinos do so well. Now, if you teach a course in probability theory or applied statistics, or for that matter even a general course in critical thinking in a philosophy class, that is certainly one of the topics you're going to cover.

[00:04:29]

And I would hope, I guess, that once somebody has been told, perhaps better, as you were saying, through examples, because it's always better if people work out their own examples anyway, regardless of what we're talking about. It definitely makes more of an impact if you're told not just about the theory but also actually work your way through specific situations. Right. It's easier to internalize that way. So that is the kind of thing that people have been doing, right?

[00:04:57]

For a while, I mean, sort of within an academic setting. You know, if you take a course in applied statistics or in, you know, philosophy or critical thinking or something like that, that is the sort of thing that people are exposed to. Or is there something radically different that we're also talking about? So are we talking about bringing the kind of things that people either are doing or should be doing in the classroom also to a non-standard audience, you know, sort of outside of a college setting, or...?

[00:05:32]

Well, so it is that. We're outside a college setting; we're an independent, private 501(c)(3) non-profit, and most of our workshops are for adults 18 and up, although we are right now running a workshop for exceptionally talented high schoolers called SPARC, the Summer Program on Applied Rationality and Cognition. But this is all outside of sort of standard curricula. We might at some point try to get applied rationality modules developed for standard public school or college curricula, but we haven't done that yet.

[00:06:04]

Actually, that's not entirely true. We are working with someone at UC Berkeley, a Nobel laureate physicist actually, named Saul Perlmutter. After Perlmutter won the Nobel Prize, he decided that what he wanted to use his newfound glory and fame and connections for was to spread rationality and rational decision making. So he's designing a class on rational decision making for Berkeley undergrads, and we're helping him with that. So I guess that's an exception to what I just said.

[00:06:32]

But what was your question? Oh right, your question was about how what we're doing is different from the standard way that critical thinking is taught.

[00:06:40]

Right. Well, for instance, let me ask: why would a physicist, with all due respect to somebody who won the Nobel Prize, engage in designing a course on critical thinking?

[00:06:51]

Oh, well, I mean, critical thinking is pretty important to practicing science.

[00:06:55]

Well, yes, but most physicists, most scientists, and I can tell you this from personal experience, don't actually think about critical thinking. I mean, that's what they learn automatically, but they wouldn't know how to teach it.

[00:07:07]

Yeah, well, I guess that's part of why he's bringing in outside help. But yeah, I mean, you're right. Part of what he's trying to do is think explicitly about the principles that underlie what he does as a scientist, which he may not actually have ever explicitly considered before. But this gets back to your original question, which is: what is it about the way we're teaching applied rationality that differs from the way it's presented in standard curricula?

[00:07:34]

So the practice of giving people examples and practice problems that you described is great, and it's much better than just having people passively receive information. The latter seems pretty incredibly unlikely to work. The former, giving people examples and practice problems, seems more likely to work. But there's still, unfortunately, this problem of domain transfer, which is one of the big sticking points in successful critical thinking education. Even if people can learn a concept in one domain, and that domain might be statistics, like a field or a particular class, or it might be the context of school, like problems I do in school with paper and pen, even if you get people to successfully learn a concept and get all of their practice problems right...

[00:08:27]

...the trick then is still to get them to recognize when, in real life, they're in a situation where that principle applies. So maybe in statistics class you can get people to easily understand the gambler's fallacy and the fact that, you know, each coin flip is independent, and so past flips have no bearing on what you should expect for the next flip. So they get that perfectly. They get all the problems on their tests right.
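To make the independence point concrete, here is a minimal Python sketch (the function name and trial count are arbitrary choices for illustration, not anything from the episode): it estimates the chance of heads on the flip right after a run of three heads, which for a fair coin stays near one half no matter what came before.

```python
import random

def prob_heads_after_streak(n_flips=1_000_000, streak=3, seed=0):
    """Estimate P(heads | the previous `streak` flips were all heads) for a fair coin."""
    rng = random.Random(seed)
    flips = [rng.random() < 0.5 for _ in range(n_flips)]  # True means heads
    after_streak = [flips[i] for i in range(streak, n_flips)
                    if all(flips[i - streak:i])]  # preceding `streak` flips were all heads
    return sum(after_streak) / len(after_streak)

print(prob_heads_after_streak())  # comes out near 0.5: the coin is never "due" for tails
```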

[00:08:53]

But then they're out in the world and, let's see, what would be a real-world example of this? I don't know, maybe they're playing the lottery or something and they feel like they're due for a win, or maybe that's actually too similar. Maybe it would have to be something like, I don't know, asking someone out. Maybe they feel like they're due for...

[00:09:17]

Wait a minute, are you talking about dating as gambling? I'm talking about dating as repeated trials with some probability of success each time. It's not a perfect example, because you presumably get better each time. I would think so. At asking someone out.

[00:09:32]

But I mean, the basic point here is that it doesn't even occur to people that what they learned in their statistics class is going to apply in this context that's not labeled "statistics," not labeled "use critical thinking here." So that's really the trick in effective debiasing.

[00:09:51]

So let's stay on that for a second. I mean, that's a general problem in education, period. It is in fact difficult to extend the domain of application of what people learn. Right.

[00:10:04]

Even in most of education, you're not trying to get people to behave differently in their everyday lives. You're getting them to understand and have a familiarity with a particular domain, not necessarily for them to transfer it.

[00:10:17]

Yeah, that's an interesting point, because actually my goal is to affect people's lives when I teach. But yes, you're right, that's usually not the goal in education in general.

[00:10:27]

But maybe that doesn't apply so much to what you teach, which is more similar to what I'm teaching right now.

[00:10:32]

But what I was going to do is stay for a second on this problem of cross-domain transferability, because I read one of the articles that you suggested, this one by Scott Lilienfeld and collaborators in Perspectives on Psychological Science. And one of the interesting things they say is that there is, quote, rather little research demonstrating that critical thinking skills generalize beyond the tasks on which they are taught. Now, that's a tricky thing to say, because the fact that there is very little research demonstrating that critical thinking skills generalize doesn't mean that they are not generalizable.

[00:11:12]

It may simply reflect the fact that very few people have actually done research on it. But I don't know what the broader literature in that area looks like. It certainly is an area where we ought to have evidence. It would be nice, since there is a lot of talk about critical thinking here and there, and not only at the college level. These days, you can't come across a curriculum in high school,

[00:11:36]

or even earlier, that doesn't mention critical thinking, which I think is a good thing as a general idea. Except, of course, that different people have different ideas about what constitutes critical thinking and how to teach it and so on. So certainly more research would be interesting. But one of the things that took me a little aback was in the same article. On the one hand, you know, research on the cross-domain transferability of critical thinking skills is, I think, in fact a good idea.

[00:12:04]

And apparently it is an open question. What I found a little more strange was a claim by the authors that the capacity to think critically is surprisingly non-generalizable across disciplines. But if you look at the references that they cite there, one of the two references is Feynman 1985, and that's actually Richard Feynman's biography, which I don't really think is a particularly good source of evidence for that kind of claim.

[00:12:35]

You know, I don't know what in that biography they were referring to. Well, I read the biography, and I don't see it. I mean, even if Feynman does claim that he knows people who are very good at thinking in one area and then can't transfer that skill to another area, which is probably what he does, well, that hardly counts as anything more than anecdotal evidence, I would think. And then, a few lines later, the same authors say that some others have conjectured that highly intelligent people possess especially effective ideological immune systems, because they're adept at generating plausible counterarguments against competing claims, although this possibility is yet to be tested systematically.

[00:13:17]

Now, that's a fairly big claim, which actually I have heard in the skeptic community, this idea that, you know, even smart people do dumb things, but...

[00:13:30]

Right, that smart people do dumb things. Well, I don't know who you're referring to, but when I read that I thought of Michael Shermer and the chapter that he added to the more recent edition of his book, Why People Believe Weird Things, called "Why Smart People Believe Weird Things," and his theory, not based on randomized controlled trials or anything, just based on anecdotal evidence, that smart people are especially good at coming up with elaborate justifications for why they don't have to update their beliefs in response to evidence, no matter how convincing that evidence might seem.

[00:13:59]

And that's exactly right.

[00:14:01]

In fact, the authors of that article cite only Shermer 2002 as a source of evidence. Now, that's interesting because, you know, we both know Michael and he's definitely a smart guy. And I read that book, and it's very interesting, and the hypothesis is interesting. I mean, it's certainly provocative, right, this idea that the smarter you are, the better you are at rationalizing rather than being rational. There may or may not be some truth to that.

[00:14:27]

I don't know. But what does worry me, and this is really tangential to the main topic, and I want to go back to that in a second and let you go on with it, is an interesting thing that I've noticed more and more, and this example is just the latest in a long series: technical papers, academic papers like this one, sometimes cite as sources of evidence general-audience books that are really not peer reviewed.

[00:15:00]

They're actually written for a general public, like Shermer's book. And that sort of worries me, because we're getting to the point where there is this developing fuzzy category of books where people make fairly bold claims that are interesting and may or may not be true, and then these claims sort of find their way into the mainstream academic literature. Some of Richard Dawkins's own books, like The Selfish Gene, are often quoted in the primary literature.

[00:15:34]

Even though, you know, that was a book for the general public; it was not peer reviewed and all that sort of stuff, certainly not original research. So this trend kind of worries me, this referring to provocative ideas that, in fact, have really not been tested, that are just being put out there by, you know, interesting people. And it's an interesting, provocative idea, but that's about the end of it. There's nothing else to it.

[00:16:02]

Yeah, this is definitely an interesting point, but completely unrelated to the topic of debiasing, so...

[00:16:08]

Well, no, not entirely, because it is part of the argument that these authors make for why debiasing is important. Basically the argument is, look, even really smart people, Feynman 1985 or whatever, Shermer 2002, can fall prey to these kinds of things, so clearly there is a need for these kinds of interventions. I'm suggesting that the evidence for that particular claim is somewhat sketchy, to say the least.

[00:16:35]

You're doubting the claim that even smart people do stupid things, as a systematic thing. Yes, I don't think there is any evidence of that now. There is anecdotal evidence, yes.

[00:16:46]

Oh, OK. So there is actually evidence in the literature of a lack of correlation, or just a very weak correlation, between standard measures of intelligence and rationality. And here rationality means evaluating arguments objectively, showing very basic, fundamental consistency in your preferences and in the decisions that you would make in various hypothetical situations, and using deliberative reasoning even when you weren't explicitly instructed to. So I'll give you an example.

[00:17:19]

Actually, that claim doesn't surprise me at all. But go ahead, give me an example. So there's a series of questions that cognitive psychologists have given to people where, if you rely on your intuitive knee-jerk answer, you'll get the wrong answer. One example would be: a bat and a ball cost a dollar ten in total; the bat costs one dollar more than the ball. How much does the ball cost? And many, many people just say ten cents, without thinking further about it.

[00:17:51]

That can't actually be right, because if that were true, the bat would have to cost a dollar more than that, so it would be a dollar ten, and then the total cost would be a dollar twenty rather than a dollar ten. So there's just some sort of intuitive heuristic that people use where they're like, oh, I probably want to subtract one of these numbers from the other, but they're not really thinking carefully about it. And people's success rate at getting that question right is unrelated, or very weakly related, to their IQ.
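For anyone who wants the arithmetic behind that answer spelled out, here is a minimal worked check in Python (a sketch; the variable names are just illustrative):

```python
# Bat-and-ball check: let the ball cost x dollars, so the bat costs x + 1.00
# and the stated total is x + (x + 1.00) = 1.10. Solving gives 2x = 0.10, so x = 0.05.
ball = (1.10 - 1.00) / 2                  # 0.05, i.e. five cents, not ten
bat = ball + 1.00                         # 1.05
assert abs((ball + bat) - 1.10) < 1e-9    # the intuitive "ten cents" answer fails this check
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")
```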

[00:18:18]

There's a bunch of other questions like this where, if you just thought carefully about it and actually tried to get the right answer, instead of saying the first thing that associatively came to mind, you would get it right.

[00:18:28]

You know, sorry, before you go ahead. So this is the relationship between rationality in this sense and IQ. I'm just curious if there are any data about the relationship between rational, deliberative thinking on the one hand and, let's say, scores on sections of the GRE, because the Graduate Record Examination is actually supposed to measure, I don't know that it does, but it's supposed to measure precisely that kind of thinking skill, analytical, deliberative thinking skills.

[00:19:03]

So my guess, if nobody's done it, my prediction, if somebody actually goes and checks, is that they're going to find a much higher correlation between rationality and GRE scores than between rationality and IQ scores.

[00:19:18]

That may well be. I don't know.

[00:19:20]

So you don't know if anybody has looked into that? That's an interesting question for our listeners. If anybody knows, I'd love to hear.

[00:19:29]

So the other piece of interesting evidence I was going to bring up: one of the really common features, or one of the essential features, of a rational disposition is evaluating arguments or evidence objectively, regardless of whether you personally agree with or like the conclusion. So there's this thing that some cognitive psychologists call the myside bias, favoring the side that you're predisposed to when you're evaluating arguments. And so they've done various tests of myside bias and then correlated it with IQ and other measures of cognitive ability.

[00:20:05]

And again, there's not really very much correlation. In the kinds of experiments they've done, they present participants with two hypothetical experiments, both of which are quite flawed and each of which purports to demonstrate some particular conclusion, and ask people to point out as many flaws as they can in each of the two experiments. And people are able to come up with many more flaws in the experiment that supports the conclusion they don't like. There are other tests, like: can people evaluate logically valid reasoning as being logically valid, even when it supports a conclusion that they don't like?

[00:20:46]

Can people generate arguments for a position that they don't like? Anyway, this is another example of rationality not being that connected to IQ. But, you know, even if people who have higher IQs were less predisposed to irrational behavior, it still seems worth it to try to debias them, even if they're not as badly off.

[00:21:07]

Yeah, certainly. I'm sorry, I didn't want to give you the impression that I don't think it's a good idea to debias people, as you put it. I was actually picking a specific bone of contention about the Feynman- and Shermer-based citations.

[00:21:29]

I also wanted to address the question you raised a few minutes ago about the paucity of literature on critical thinking actually generalizing to domains outside of the classroom. It would be great if this research had already been done and we already knew exactly what techniques to use to get people to make rational decisions in the rest of their lives. So far, there's been a small amount of research; there's a small set of things that have been shown again and again to work.

[00:21:57]

A lot of those are very simple, like a strategy, if you can call it that, called "consider the opposite," which is just the mental habit of asking yourself: what is a reason, or what are some reasons, why the thing I believe might not be true, or why this might not be the best plan? This has been repeatedly validated as something that people can learn and that affects their decision making positively. There's also research that isn't classified technically under the debiasing literature...

[00:22:31]

But it's still really relevant, and I would call it a shining example of applied rationality, and that's cognitive therapy. Cognitive therapy is based around the idea that people's negative emotional reactions are often based on implicit assumptions or beliefs that they haven't examined and which are often completely distorted. Like getting upset when your friend pushes you to have another drink, because you assume that she must know that, but if you actually examined the evidence...

[00:23:03]

You'd realize she has no way of knowing that, or maybe you've given her contrary signals. Right. Or there's a lot of all-or-nothing thinking, or jumping to conclusions, or other negative thought patterns. So cognitive therapy has a really well-validated record of getting people to notice the implicit beliefs causing their emotions, ask themselves what evidence they have for them, how they would test them, and so on. So I think that's a great example of applied rationality in action.

[00:23:29]

And it's also evidence that you can get people to internalize a principle of critical thinking and actually turn it into a habit that becomes almost instinctive, which is exactly what we want to do. Right.

[00:23:40]

But if we go that broadly, then isn't the existence and functioning of pretty much every graduate school program another example of applied rationality? I mean, after all, that's precisely the kind of skill you learn in graduate school. And they are pretty successful, judging from the number of people who then go on to have successful careers and discover new things and apply what they learned. When I think of grad students interacting within a laboratory, for instance, I think of that as an apprenticeship kind of workshop, where you learn not just the theoretical principles, because you're exposed to certain classwork or theoretical work, but you literally learn how to apply things on a daily basis, how to reason better on a daily basis.

[00:24:40]

That's precisely what grad school is. Now, I'm not suggesting, of course, that this means we have to put the entire population of the United States through graduate-level coursework. But isn't that another example of sort of broadly construed applied rationality, or would you not count that?

[00:25:00]

So if graduate programs actually did that, if they actually made people rational decision makers in other areas of their lives outside of the laboratory, then yeah, I would call that a great example of applied rationality instruction. But they don't seem to do that. What we're talking about is getting people to understand not only that they need to be wary of, say, selection bias when they're running experiments. And selection bias here is the problem that the sample you're looking at in order to generalize to the population isn't actually representative of the whole population.

[00:25:35]

So we want people to recognize that that's an issue not only when they're doing a statistical analysis, but when they're just informally collecting evidence from the world about how the world works, in ways that are really important to the decisions they're going to make. In my life, my shining example of selection bias at work, which I failed to recognize at the time, was when I was considering whether or not to go into a graduate program. And I asked a number of professors whether they liked academia, whether they thought I should go into academia.

[00:26:05]

And the response was pretty positive: they liked academia, they would recommend it. And it was only much later that I realized that I had been selecting from a very biased, very skewed sample of professors, because they were the ones who could hack it and also liked it enough to stay. I wasn't actually interviewing any of the people who had considered being professors but abandoned the idea, or failed.

[00:26:26]

Right. But I don't think that, let's say, a graduate student who is learning about sampling protocols because of his work isn't going to be able to make that kind of leap and apply it. I mean, that's certainly what I did. When I started being aware, because I was practicing and because I studied the theory of, you know, certain issues in sampling and probability distributions and all that sort of stuff, then I started seeing bell curves everywhere, and I started asking myself questions every day about everything.

[00:26:58]

Is this a Gaussian kind of process or not, you know? Is it random or not? It sort of becomes automatic, once you are exposed to new ideas and you practice them within a domain. And, I don't know, to me at least, again, this obviously is personal, anecdotal evidence, so I don't pretend that it's general. But to me, this immediately translated into, oh, you know, whenever, I don't know, maybe I watched the news or something...

[00:27:28]

And I was presented with a graph, for instance, that made me ask questions like, what is the error bar on that, or how was the sample arrived at, and that sort of stuff. You don't think that translates, empirically?

[00:27:43]

It doesn't. We have actual evidence that it doesn't, really. There are studies of statistics students and even statistics professors where they're asked a question that involves a hidden statistical principle, like regression to the mean, and the statistics professors and students get it wrong, not quite as much as average, statistically naive respondents, but at a very high rate, much higher than you would expect if they had really internalized all of these principles of statistical thinking and used them across domains.

[00:28:16]

But maybe you're an exception. I mean, you're unusual in a number of ways: you teach classes on critical thinking and you're a public communicator of rationality. So, well, probably not a representative case.

[00:28:29]

I probably am not a representative case, that's true. Yeah. I mean, the equivalent in philosophy would be some philosopher who knows about logical fallacies, and yet is caught in everyday discourse committing, you know, ad hominems and strawmen and all that. I would think that that's just a bad philosophy professor. On the other hand, there is evidence, for instance, that professors of philosophy are no more ethical than your average person.

[00:28:58]

So perhaps that falls into that category as well. You know, you don't practice what you preach, or what you understand at a logical level. So I don't know.

[00:29:09]

So I don't know if you've yet read Daniel Kahneman's book Thinking, Fast and Slow, but if you haven't, you totally should. It's a great book, and it's full of examples not just of biases in the general human population, but of biases that Kahneman himself has found himself committing, even though he knows about the bias. He still finds himself committing, say, the planning fallacy, where you vastly underestimate the amount of time it's going to take you to complete some task, or the halo effect, where your estimation of the quality of, say, an argument, or the quality of an essay that a student wrote, is influenced by other, completely irrelevant qualities of the person.

[00:29:51]

So, yeah, I mean, if Kahneman himself has trouble getting past biases, and if statistics professors have trouble noticing when statistical principles apply to everyday situations, biases seem pretty sticky.

[00:30:07]

Well, yeah, again, I certainly don't doubt that biases are sticky. I just don't see any way around it other than making people aware of them and having them practice exercises, basically. So, now, you mentioned Kahneman; let's go back to one of his most famous contributions, this distinction between System One and System Two modes of thinking, where System One thinking is sort of automatic and based on heuristics, and presumably subconscious. Whether those heuristics evolved adaptively or not...

[00:30:45]

...I think we should leave entirely out of the question, because it doesn't really matter and it will get us into all sorts of trouble with evolutionary psychological explanations. The fact of the matter is that it's indisputable that we do think subconsciously by heuristics, so by shortcuts. And the difference between that and System Two, which is, on the other hand, sort of controlled, rule-governed, and in fact has a different brain underpinning, basically.

[00:31:14]

So you move from massive, fast, parallel, subconscious processing of information, in the case of heuristics, to slower but more deliberate thinking, rational thinking, that is actually mediated by essentially conscious processes. OK, so the whole idea, therefore, it seems to me, or at least a big part of the idea, is to try to teach people to move from System One to System Two modes of thinking.

[00:31:48]

Is that correct?

[00:31:50]

Yes. So that alone would be great: getting people trained in skillfully using System Two processes, but also getting them to recognize when System Two is actually appropriate, because it's not always appropriate. I mean, if you don't have a lot of time, or if you've developed a really strong intuition just based on repeated experience, then System One is going to do pretty well. So, yeah, System Two is often appropriate when we don't realize it, but not always.

[00:32:21]

But then the next step, which would be desirable and which we're working on, is getting some of those System Two deliberative reasoning habits to become automatic, the way that it's automatic when you notice yourself distorting evidence in a way that's causing you to be anxious or depressed and instinctively ask yourself, just reflexively, do I actually have good reason to believe that? I don't know how much research there is on that yet, except in the field of cognitive behavioral therapy.

[00:32:56]

But at the very least, it's an excellent starting point to train people in the deliberative reasoning skills and get them to recognize when they apply. Right. But let me just briefly clarify that the System One / System Two distinction actually came from Keith Stanovich, who's a cognitive psychologist at the University of Toronto and the author of a couple of books that I've used as my Rationally Speaking picks in the past, actually; his work is wonderful. That's good. But it was popularized by Kahneman.

[00:33:21]

Right.

[00:33:22]

So now, you mentioned cognitive behavioral therapy a couple of times. And when we talked about this in the past, just because I'd like to make the connection as broad as possible, I sometimes mentioned that cognitive behavioral therapy is in some sense the applied version, in modern settings, of Aristotle's ideas about virtue ethics. Right. So the thought is that we have certain habits that come naturally. He was talking about virtues and moral habits, but the same obviously applies to thinking habits.

[00:34:02]

We have certain things that come naturally, a certain predisposition to do certain things rather than others, and if we want to improve on those predispositions, what we need to do is literally practice, practice, practice. Aristotle's idea was that practicing virtues is initially going to be difficult, because it doesn't necessarily come naturally, except for a few individuals. But the more you practice, it's like going to the gym, the easier it becomes, up to the point at which, ideally, it does exactly what you were saying a minute ago, which is that it becomes automatic.

[00:34:38]

Now, there is research that does show that that is the case, certainly in the applied setting of cognitive behavioral therapy. But more broadly, there is research, research on expertise, for instance, that shows that novices initially do have the wrong intuitions about how to proceed in a certain domain, let's say playing chess, and then through mindful practice, by paying attention, very slowly, so basically engaging System Two kinds of thinking, they learn how to do it better and better.

[00:35:11]

And then at some point they make the leap of essentially shifting the newly acquired skills back to a System One level, so it becomes automatic. In some sense it's like learning how to drive a car. At first you really have to pay a lot of attention; then, fortunately, it becomes automatic, because otherwise you would have to make a lot of effort just to drive down to the grocery store.

[00:35:36]

Yeah, so that seems to be a general human ability, this switching between the two systems, pretty much no matter what the task is, whether it's a cognitive thing in terms of rational applications, or probabilistic thinking, or even moral behavior, or certainly mechanical tasks. We seem to have this ability to switch between the two systems, and when we become better and better at something, we automate the process.

[00:36:10]

Yep. I mean, that's one of the main reasons why I'm optimistic about a lot of rational decision-making habits becoming automatic over time, in addition to anecdotal evidence from myself and a bunch of other people I know who have been really keen on rationality for the last few years, in whom I have observed things becoming automatic. So, yes, that's why we're also collecting a lot of data. We're running a randomized controlled trial. We admitted twice as many people to our last series of workshops as we actually had space for.

[00:36:46]

And we got them all ahead of time to take a battery of reasoning tests and also an in-depth survey of various metrics of life success and satisfaction, like: how many times in the last month, or two weeks, or whatever, have you made decisions that you later regretted? How often do you find yourself procrastinating? How satisfied are you with your professional life or your love life? And so on. So we collected all this data, and then we essentially flipped a coin and admitted half of them to our workshops, and half of them did not get admitted.

[00:37:18]

And then we're following up with them at, I think, six months and then twelve months to see whether the change in reasoning ability and/or life success is greater in the group that came than in the group that didn't. And I'm sure that a lot of the stuff that we're trying is going to turn out, in retrospect, not to have worked that well, or certainly not to have worked optimally compared to other things we could have done. So the point of collecting the data is not just to contribute to the body of knowledge about debiasing, but also for our own benefit, so that we can iteratively improve our classes.
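As a rough illustration of the waitlist-controlled design Julia describes, here is a minimal Python sketch; the "baseline" and "followup" fields are hypothetical stand-ins for whatever reasoning tests and life-outcome surveys were actually used, and this is not CFAR's analysis code.

```python
import random
from statistics import mean

def waitlist_rct(applicants, seed=0):
    """Toy sketch of a waitlist-controlled workshop trial.

    `applicants` is a list of dicts with hypothetical keys 'baseline' and
    'followup' (scores before and roughly 6-12 months after the workshop).
    Returns the difference in average improvement: admitted minus waitlisted.
    """
    rng = random.Random(seed)
    admitted, waitlisted = [], []
    for person in applicants:                        # "essentially flipped a coin"
        (admitted if rng.random() < 0.5 else waitlisted).append(person)

    def avg_change(group):
        return mean(p["followup"] - p["baseline"] for p in group)

    return avg_change(admitted) - avg_change(waitlisted)   # > 0 favors the workshop
```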

[00:37:52]

Now, we have only a few minutes left, but let me bring up one more issue and get your thoughts about it. One of the articles that I read talked about the potential barriers to debiasing. Sure. Of which there are several. One of them is, of course, the bias blind spot, meaning that people are going to say, I'm not biased, what are you talking about? So that's one thing. The other one is that people have to be convinced that what they are learning in terms of debiasing is actually relevant to their personal welfare, to the things that actually matter to them, which I guess you're trying to address by moving to as practical applications as possible.

[00:38:37]

The other problem that these authors bring up is that currently the research simply is not capable of telling us whether debiasing programs need to be repeated or sustained over time. That's an interesting question, you know: do you need booster shots for your debiasing? Another one is cultural differences. So perhaps certain programs might work with certain types of individuals, belonging to certain cultures or genders, and not necessarily others, because there obviously is variation in the human population for these kinds of things.

[00:39:13]

There are others, but what are your thoughts on these ones, or on how you guys are going to take them into consideration?

[00:39:20]

So I'll start with the first one, the bias blind spot: people don't think they're actually biased. And there are a couple of responses to that. One is that, at least in the near term, we're selecting for people who are already interested in and receptive to the idea of improving their decision-making skills. So we're not going out and trying to convince people who are really resistant that they should learn about rationality. It would be great if we could achieve that eventually.

[00:39:47]

But it's not the low-hanging fruit. And then the other response is that there are actually a lot of fun exercises or demonstrations that you can do to reveal to people that they're biased. Like, you may have at some point played the overconfidence game, where you ask people to estimate various quantities, all sorts of questions like the height of Mount Everest or how many platinum records this or that artist had, and so on. And you ask them to give a 90 percent confidence interval for their estimate.

[00:40:17]

So, you know, a lower bound and an upper bound within which they are 90 percent confident the true answer falls. And again and again, just about every time this experiment is performed, only about 50 percent of the answers are actually within people's 90 percent confidence intervals. So this is a really stark and unambiguous demonstration of an overconfidence bias. There's a bunch of other things you can do to show people that they actually have biases.
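As a concrete illustration of how that calibration exercise can be scored, here is a minimal Python sketch (the sample answers are hypothetical): well-calibrated 90 percent intervals should contain the true value about 90 percent of the time, and the overconfidence Julia describes shows up as hit rates closer to 50 percent.

```python
def interval_hit_rate(answers):
    """Fraction of 90% confidence intervals that actually contain the true value.

    `answers` is a list of (low, high, true_value) tuples from the quiz.
    Well-calibrated respondents land near 0.90; overconfident ones land much lower.
    """
    hits = sum(1 for low, high, truth in answers if low <= truth <= high)
    return hits / len(answers)

# Hypothetical respondent: three (low, high, true_value) triples.
sample = [(8000, 8500, 8849), (10, 30, 23), (120, 200, 150)]
print(f"hit rate: {interval_hit_rate(sample):.0%}")  # 67% here, below the 90% target
```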

[00:40:44]

And then the second thing you brought up was... the additional one... actually, what was the third one you just read?

[00:40:51]

One was the cultural one. Well, actually, you know, the other was whether there's a need for sustained programs, repeated programs, or booster shots.

[00:41:01]

One of the ways that we're addressing this question is through the development of rationality apps or games, like for the iPhone or Android. So, I was talking about overconfidence, and I was also talking earlier about the importance of practicing skills on the kinds of issues that you want to be good at, issues in your real life. So this prediction game asks people to make predictions about things in their life, like upcoming issues that are going to get resolved one way or the other, like, is my boss going to say yes when I ask for a raise, or even little things like...

[00:41:35]

how much is the total for the groceries I'm carrying going to be? And people are prompted to attach a confidence level to their estimates, and then asked at the requisite time how it actually turned out, so people can get a sense over time of how overconfident or underconfident they are in various domains of their life. So that's one example. But then the other thing I would say to the booster shot question is that one of the reasons that we're doing these workshops in person is not just to teach people about biases, but to create a community, a network of people who are excited about rationality and motivated to try to improve their own rationality.
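Here is a toy sketch, in Python, of the kind of bookkeeping such a prediction app might do; the class, method names, and example predictions are hypothetical and not taken from the actual app.

```python
from collections import defaultdict

class PredictionLog:
    """Toy sketch of per-domain calibration tracking (hypothetical, not the real app)."""

    def __init__(self):
        self._records = defaultdict(list)   # life domain -> [(stated confidence, came_true), ...]

    def record(self, domain, confidence, came_true):
        """Log one prediction once its outcome is known, e.g. ('work', 0.9, False)."""
        self._records[domain].append((confidence, came_true))

    def calibration_by_domain(self):
        """Average stated confidence vs. actual hit rate for each domain of life."""
        report = {}
        for domain, preds in self._records.items():
            avg_confidence = sum(c for c, _ in preds) / len(preds)
            hit_rate = sum(1 for _, ok in preds if ok) / len(preds)
            report[domain] = (avg_confidence, hit_rate)  # a large gap signals miscalibration
        return report

log = PredictionLog()
log.record("work", 0.9, came_true=False)     # "my boss will say yes when I ask for a raise"
log.record("errands", 0.6, came_true=True)   # "these groceries will total under forty dollars"
print(log.calibration_by_domain())
```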

[00:42:16]

Because one of the biggest effects, in terms of actually changing the way people think and behave, is the effect of the community that they're in. And I've seen this personally, myself, to a large degree: I find myself surrounded by people who display strong rationality habits and who say things like, oh, that's a good argument, I hadn't realized X, Y or Z, I'm changing my mind now. Or people who just reflexively ask, let's see, what evidence could we look for that would help us settle that question?

[00:42:50]

And so that repeatedly keeps this stuff on my radar screen and also provides these nice role models for me to aspire to.

[00:42:56]

So that's one of the things that we at CFAR are trying to create in addition to the curricula. Although, as you said, that means that the audience is going to be largely self-selected. I mean, you're talking about communities of people who are already interested, which is fine. I think that one of the things that concerns me more broadly is, how do you bring this kind of stuff to the general population? Because, frankly, you know, it's nice to improve the rationality of people who already think rationality is important, but the next question is how to convince almost everybody else that rationality should even be on the plate.

[00:43:34]

But that's a discussion for another time. Yeah.

[00:43:37]

I mean, I think once the data start coming in and we have a sense of what the most useful rationality techniques are in terms of improving people's own personal lives, that'll be increasingly motivating for people who aren't interested in rationality just inherently, but still want to have better careers or love lives or health, or to get rich, and so on. I mean, there's a huge demand for self-help, despite the fact that self-help is not at all empirically validated or grounded in theory. So that was part of the reasoning behind framing our classes as being about helping you achieve your own personal goals.

[00:44:15]

Although, also, one of the reasons that we're doing this is that I think making people more rational will have positive effects on society and the world as a whole. Yeah, I completely agree with you that it would be wonderful if we could make the world rational, and not just the self-selected part of it, but that's exactly... anyway.

[00:44:36]

Anyway, so we're over time now, I got too excited talking about it, but since we're out of time, let's move on now to the Rationally Speaking picks.

[00:45:03]

Welcome back. Every episode, Julia and I pick a couple of our favorite books, movies, websites, or whatever tickles our fancy. Let's start, as usual, with Julia's pick.

[00:45:12]

Thanks. So I'm going to cheat very slightly and give two picks, one of which is the website for my organization, the Center for Applied Rationality. It's appliedrationality.org. You can find out about all the cool things that we're doing, and you can also sign up to be notified of upcoming rationality workshops. And then my other pick is the blog of Dan Ariely, who is the researcher who wrote Predictably Irrational. It's a really enjoyable blog, not just because he's a great writer with a really friendly and conversational voice.

[00:45:41]

And he talks about a lot of interesting research, but he also ties it into everyday life issues. One of my favorite posts from him was about how the experience of checking your email is perfectly calibrated to be addictive, the same way roulette machines at casinos are perfectly calibrated to be addictive. We'll link to that post as well. But I recommend reading a lot of his work; it's really enjoyable.

[00:46:08]

OK, my pick, on the other hand, is a paper that came out recently in Nature magazine, and it's called "Measuring the Evolution of Contemporary Western Popular Music," by Joan Serrà and several other collaborators. And what I found interesting was that they applied quantitative methodologies to the evolution of music from the 1950s, I think, until modern times. And they found some interesting things, one of which, for instance, let me read you from the abstract of the paper.

[00:46:43]

Many of the patterns and metrics that they measured have been consistently stable for a period of over fifty years. In other words, music hasn't really changed that much over the last fifty years. But they find important changes, or trends, related to the restriction of pitch transitions, the homogenization of the timbral palette, and growing loudness levels. In other words, modern music is less variable and louder than it used to be decades ago. That's pretty much the bottom line.

[00:47:13]

Now, some commentators have interpreted this to show, for instance, that, well, there's one of the graphs in the article that shows a peak in a measure called beta, which is a measure of diversity, of variation, basically, in the way that people make music. That one peaked in the 1960s. And so some commentators have been interpreting this, somewhat tongue in cheek possibly, as showing that music was actually really better, demonstrably, scientifically better, in the 60s than it is now.

[00:47:48]

But, of course, a different interpretation is that music was more variable then, which means there was also more crap produced in the 1960s, and then things sort of converged to a particular aesthetic standard, whatever that aesthetic standard, of course, might be.

[00:48:01]

So anyway, it's an interesting article, because it makes you think about what happens when you apply quantitative measures to otherwise entirely historical or aesthetic questions, such as, what is the quality of these pieces of music?

[00:48:17]

Cool. I'm glad you added that addendum about alternative interpretations, because otherwise you're going to make even more insufferable the people who are always talking about how everything was better when they were children. Exactly. Thank you for avoiding that pitfall. We are now out of time. So this concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.

[00:48:49]

The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, "Truth," by Todd Rundgren, is used by permission. Thank you for listening.