
Today's episode of Rationally Speaking is sponsored by GiveWell. GiveWell takes a data-driven approach to identifying charities where your donation can make a big impact. GiveWell spends thousands of hours every year vetting and analyzing nonprofits so that it can produce a list of charity recommendations that are backed by rigorous evidence. The list is free and available to everyone online. The New York Times has referred to GiveWell as, quote, "the spreadsheet method of giving." GiveWell's recommendations are for donors who are interested in having a high altruistic return on investment in their giving.


Its current recommended charities fight malaria, treat intestinal parasites, provide vitamin A supplements and give cash to very poor people. Check them out at GiveWell.org.


Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and my guest today is Jason Collins. Jason is an economist based in Sydney. He's the leader of the data science team for a financial services regulator, and he blogs at Jason Collins blog, which is one of my new favorite blogs, with very in-depth and interesting reviews of books in all sorts of areas of economics, and commentary on different aspects of the important debates in behavioral economics, evolutionary economics and related fields.


So I'm excited to have you on Rationally Speaking, finally.

Really pleased to join you.


A big fan of the podcast.


Oh, yeah. So first of all, one thing I wanted to ask you is whether you consider yourself a behavioral economist. Not that it matters that much, it's just a label. But, you know, I'm curious whether the criticisms you're likely going to make of behavioral economics in this podcast are coming from an insider or an outsider. It just kind of changes the tone.


I think I'd call myself an insider, because deep down I am a big fan of behavioral economics. It's just a case of, I think that behavioral economics, or, as I prefer to call it a lot of the time, behavioral science, could be so much more.


Why don't we discuss that distinction before we dive in? Why do you prefer the term behavioral science instead of behavioral economics?


Well, I think in some cases behavioral economics is the right term. Where we're bringing psychology into the analysis of economic problems, economic decision making, that's probably the right term there. But the label behavioral economics actually gets applied to a much broader area that includes a lot of just pure social psychology and a lot of other decision making fields. And I think, to give credit to people working in those fields, they're not economists. They're typically not working on purely economic problems.


And we really should give it a label that captures the fact that they're adding interesting inputs to how we think about human decision making.


Is the label behavioral economics just a historical accident, because it started with people applying this stuff to economic models? Or does it persist because it's somehow more respectable or something like that?


I think a bit of both there. Because at the beginning, if you go back to the work of Kahneman and Tversky, they really were setting themselves up against the work of economists. So the foundations of it really come from an economic angle. Now it's probably more a case of a marketing term. So when I'm talking to people and trying to get their interest or attention, quite often I'll use the term behavioral economics. You're going to get a sign of recognition straight away, versus the blank looks you get when you say behavioral science.


So, OK, one of your somewhat recent lectures was titled "Please, Not Another Bias." And in it you referred to the Wikipedia page on cognitive biases, and you said there are not one hundred and sixty five biases. Can you explain why you say that?


Sure. It comes back to, in some ways, the foundation of behavioral economics. So the initial work of Kahneman and Tversky, and in fact some of their predecessors: they looked at some of the axioms of neoclassical economics, particularly around the idea of people maximizing expected utility. So where you take, say, a lottery, and you're choosing whether or not to accept that lottery, expected utility theory gives us a set of axioms around the person's choice: they'll weight the outcomes according to their utility function, then look at the probabilities, do a bit of a sum and get the answer.
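The "bit of a sum" Jason describes can be sketched in a few lines of Python. The log utility function, the lottery, and the dollar amounts here are invented for illustration, not taken from the episode:

```python
import math

def utility(wealth):
    # An illustrative concave utility function: extra dollars matter
    # less as wealth grows, which makes the agent risk averse.
    return math.log(wealth)

def expected_utility(lottery):
    # lottery: list of (probability, wealth) pairs.
    # Weight each outcome's utility by its probability and sum.
    return sum(p * utility(w) for p, w in lottery)

# Choose between $75 for sure and a 50/50 gamble on $50 or $100.
sure_thing = [(1.0, 75)]
gamble = [(0.5, 50), (0.5, 100)]

# Both options have the same expected value ($75), but the concave
# utility function makes the sure thing more attractive.
print(expected_utility(sure_thing) > expected_utility(gamble))  # True
```

Expected utility theory says a person whose preferences satisfy the axioms chooses as if maximizing this sum; Kahneman and Tversky's experiments showed systematic departures from that prediction.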


And Kahneman and Tversky's work was pulling that apart and basically showing the first holes in that model, saying, look, people don't actually decide like this when we give them choices under uncertainty. There are systematic deviations from this expected utility model. And that's a great start. And what's really happened over the last 40-odd years is that we've seen a growing number of deviations from that model.


So an endless array of points where people simply don't conform to the way this economic strawman says they should decide.


And so now we have this kind of a mess. And you look at that page, I counted about one hundred and sixty-odd biases on it. I think it's even grown since then, as people simply go, here's another way in which people don't conform. But the problem with it is that we're not really in a world where the economic picture is the accurate one. So we're just creating more and more deviations from the wrong model.


And in that talk, I used an example from astronomy. So go back to fifteen hundred, I think I've got the right year there, and we had this model of the Earth at the centre of the solar system, with the sun and the planets moving around the Earth. And that was the model. But then, of course, you start to observe all these deviations from that model. So we see that Venus only appears in the morning or the evening sky.


Yeah, and a lot of the other planets, such as Jupiter, will track across the sky, but they won't just keep going as you'd expect; they'll reverse direction. So people started creating what were called epicycles, different patterns of movement of these planets, and creating an incredibly complex model of the solar system. Then, of course, Copernicus comes along and points out that the sun is actually at the center of the solar system, and the Earth and the other planets orbit around the Sun.


And suddenly you've got a much simpler model. And I think that in some ways the world of behavioral economics is a little bit like that pre-Copernican world, where we're spending all this time talking about deviations from what turns out to be the wrong model.


But haven't there been attempts to propose alternate models? Like, would prospect theory, for which Kahneman won the Nobel Prize, would that count as an alternate model?


Well, it's starting to move there, but prospect theory is probably a case in point, where effectively the starting point is expected utility theory and then they add in some different elements. So the utility function, instead of just being, I guess, largely smooth, diminishing as wealth gets larger, has reference dependence and loss aversion in it. And instead of weighting directly by the probabilities, there's a function transforming the probabilities.
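Those two modifications can be sketched roughly as follows. The functional forms and parameter values are the commonly cited estimates from Tversky and Kahneman's 1992 paper, used here purely as an illustration rather than anything discussed in the episode:

```python
def value(x, alpha=0.88, lam=2.25):
    # Value is defined over gains and losses relative to a reference
    # point, and losses loom about lam times larger than gains.
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    # Decision weights transform probabilities: small probabilities
    # are overweighted, moderate-to-large ones underweighted.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Loss aversion: a $100 loss hurts more than a $100 gain pleases.
print(abs(value(-100)) > value(100))  # True

# A 1% chance gets a decision weight well above 0.01.
print(weight(0.01) > 0.01)  # True
```

A gamble's prospect value is then the sum of `weight(p) * value(x)` over its outcomes, which is structurally the same sum as expected utility, just with both ingredients bent.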


But in the end, it's built on that neoclassical model, and you end up with a model that is no more realistic, no more of an actual statement of how human decision making works. It's a descriptive model rather than a model that says, this is the way that humans decide. Which leaves you with what Gerd Gigerenzer and friends have called, sort of, "as if" behavioral economics.


So in the same way that in neoclassical economics you've got economists talking about humans behaving "as if," rather than asking, what's the real decision making function here? So prospect theory, you're right, is kind of starting to pull together a few things. But you've still got a lot of other cognitive biases that are outside of prospect theory, and even prospect theory itself is this cobbled-together thing, which in fact has a lot of the same flaws that the neoclassical economic approach does.


All right. So the unifying theory in the case of epicycles was heliocentrism, and elliptical orbits instead of circular orbits. What is the unifying theory in the case of cognitive biases?


Well, if you had asked me this question several years ago, I would have jumped straight into, I suppose, an argument that evolutionary biology would be that unifying theory. I still think there's a grain of truth in it. In fact, I think any unifying theory will have to have an evolutionary component. I suppose what I'm a little bit less sure about now is just how clean that model ultimately looks. Look at, say, evolutionary psychology.


It has the modular theory of the mind, where all these different modules of the brain can be triggered in different circumstances.


And that's not a particularly clean way of developing a new model. So perhaps we may end up, in some ways, with that rational actor model that we have from economics just being our benchmark for a very long time.


I mean, to be fair, human psychology and behaviour and societies are much messier systems than astronomy.

So, yeah, exactly.


We have a high standard to which to hold ourselves.


Yeah, it is in some ways a tall order to call for a really clean, unified model. I guess it's just that we love clean, mathematically beautiful models, and perhaps we're looking for something that actually isn't there.


When I asked about the unifying theory a moment ago, you said evolutionary biology. But is there a way to state it as more of a model, instead of a topic like human decision making? Maybe that's a tall order, but is there something like: humans make decisions following heuristics that maximized the genetic fitness of their ancestors, or something?


Yeah, I think that's probably right. So basically, you know, humans pursue a set of proximate objectives that, in the environment in which they evolved, would have met that ultimate objective of increasing their fitness, their survival and reproduction. So the part which evolutionary biology would add would be around what the objectives are that people are pursuing.


That's one of the things in economics: we typically talk about maximizing a basket of consumption or something of that nature. Evolutionary biology points out, well, what is it? It's going to be things that lead to survival and reproduction. And then, of course, that might lead us to go, OK, well, there's threat avoidance, there's disease avoidance, desires for certain foods and so on. And you start to build from there.


But even then, that's only a part of it, because I suppose that gives you an idea of the objectives. But then what's next? How do we then think about the shape of, using the term for lack of a better word, the utility function? How then do people make choices over those different options?


Yes. I actually came into this podcast ready to disagree with you about how much the evolutionary lens adds, but it's possible that we won't disagree with each other as much as I expected. Let me throw out a couple of examples anyway and see what you think of them. So one cognitive bias that didn't seem to me to fit the evolutionary framework is the conjunction fallacy. So the fact that, in some cases, we assign higher probability to "A and B" than we do to "A" alone. The classic demonstration of this being, you know, someone says: imagine a person named Linda, she's left wing, she attended a lot of protests, et cetera.


And then they ask, what is the probability that Linda is a bank teller? And also, what is the probability that Linda is a bank teller and a feminist? And people tend to think the latter, the bank teller and feminist, is more likely than the former. Which is not possible, because being a bank teller and a feminist is a subset of being a bank teller, so it can't have higher probability. That kind of bias, and there are others I'd put in the same category.


It doesn't seem related to humans maximizing their genetic fitness. It just seems like, you know, we're bad at reasoning about probabilities. I could give other examples, but how does that kind of bias fit an evolutionary model?
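The impossibility Julia points to is just set inclusion: every "bank teller and feminist" is a bank teller. A toy simulation (with made-up trait frequencies) shows the conjunction can never come out more probable, whatever numbers you plug in:

```python
import random

random.seed(0)

# Simulate a population; each person either is or isn't a bank teller,
# and either is or isn't a feminist. The frequencies are invented.
population = [(random.random() < 0.05, random.random() < 0.30)
              for _ in range(100_000)]

p_teller = sum(t for t, f in population) / len(population)
p_both = sum(t and f for t, f in population) / len(population)

# "Teller and feminist" is a subset of "teller", so its probability
# can't exceed it, no matter how the traits are distributed.
print(p_both <= p_teller)  # True
```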


Well, I think you're right. On one hand, sometimes the conclusion is that we may just be bad at some things which we never encountered in our evolutionary past. But I think a different way to think about this is instead to ask, well, what are the sort of cognitive tools that evolution might have shaped?


What sort of heuristics might it have given us? And this is, in fact, territory where Kahneman and Tversky have had a lot of debates with Gerd Gigerenzer, particularly around that problem, the Linda problem. And it might be a case of, you know, the heuristic may simply be to assume that all the information you're given is relevant. Now, tying that back to how that might have evolved, there are a few steps that need to be filled in there.


But if you have someone who assumes that all the information they're given is relevant, then you can start to say, OK, well, here's where people might start to make this leap and make this mistake. I have to say, there's this set of papers from 1994 where Kahneman and Tversky and Gigerenzer actually debate this point back and forth, and it's a really quite entertaining debate. And I actually land a little bit with Kahneman and Tversky on the fact that there is really something here that's tough to reconcile.


So as you get into each one, I think it's not a case of there being an evolutionary explanation sitting there ready to be picked up. But I think that's probably a really good point of exploration: thinking about, well, actually, what are these rules? How do they then apply in this new environment we're exposing them to? And then, given that, what sort of errors might we expect to see?


Yeah, you know, I keep noticing this weird feature of debates over whether humans are irrational, or to what extent cognitive biases are really biases. And the debate between Kahneman and Gigerenzer is a great example of that. Where one side, you know, behavioral economics or cognitive science, will say, look, people's choices fail to maximize their own utility, so they are irrational. And on the other side, the evolutionary side will chime in and say, ah, but people are making choices with heuristics that maximized the genetic fitness of our ancestors, or that got the right answer in the environments we evolved in.


And so they are rational. And as far as I can tell, those two perspectives are totally consistent with each other. Like, one side is saying our choices are suboptimal in the modern world, or for us as individuals. And the other side is saying our choices were optimal for our genes in the ancestral environment. But these two sides seem to think they disagree with each other. This whole debate feels to me like two sides talking past each other, because they're using different definitions of rationality, or they're arguing about how much we should wag our finger at humans or something, which doesn't seem like the most important question.


What's your take?

I'm with you.


It just seems to be one of those debates where they really do agree with each other. Looking at Gigerenzer's work, in fact, so much of it is around how people fail to understand probabilities. And then of course he proposes alternative ways of presenting those choices to people, typically using frequencies instead of numerical probabilities, and you get better decision making. And that's a perfect example of a nudge, which is exactly the sort of thing that he'll often rail against.
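Gigerenzer's frequency point can be made concrete with the classic medical screening example; the numbers below are the standard textbook illustration, not figures from the episode. The same Bayesian calculation is done once with probabilities and once with natural frequencies, and the frequency version is the one people tend to get right:

```python
# Probability format, which most people find hard: 1% base rate,
# 90% sensitivity, 9% false positive rate.
base_rate, sensitivity, false_pos = 0.01, 0.90, 0.09
p_positive = base_rate * sensitivity + (1 - base_rate) * false_pos
p_disease_given_positive = base_rate * sensitivity / p_positive

# Natural frequency format, which most people find easy:
# "Of 1,000 people, 10 have the disease and 9 of them test positive.
#  Of the 990 healthy people, about 89 also test positive."
sick_positives, healthy_positives = 9, 89
freq_answer = sick_positives / (sick_positives + healthy_positives)

# Both routes land on roughly the same answer: about 9 percent.
print(round(p_disease_given_positive, 2), round(freq_answer, 2))  # 0.09 0.09
```

Same arithmetic both times; only the representation changes, which is Gigerenzer's point about presentation rather than irrationality.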


So for me, it really is a case of an argument more around framing than real substance. Although, again, going back to those 1994 papers, there are probably some points where they state that they disagree. So Gigerenzer would probably fight more around where exactly that line is. I mean, he'd say that more of these decisions are consistent or error-free than perhaps others would say.


And in fact, I should say the big bit where I have a lot of sympathy for Gigerenzer, and in fact I'm a big fan of his work, is this idea that ultimately a lot of our decisions don't conform to this model of rational decision making. So they often involve excluding information, or simplifying difficult decisions. But his point being that they're designed to operate in the real world; they're not designed to conform with these tenets that we've decided are how people should decide.


And so when you start asking, OK, well, let's put someone in an uncertain, unstable environment with constrained computational power, what's a good decision making strategy? Suddenly, I guess, biased approaches have a lot of value. And this is the analysis and work of people such as Tom Griffiths, and I'm a big fan of a lot of his work, which says, well, actually, if you look at computer science and the way they solve problems, you're actually seeing the same pattern of adopting biased solutions.


But they're things that work well in the real world, for the problems we're facing and the resources we have at hand to solve them.


OK, well, one key point that I think we sort of skipped over was: what are some examples of cognitive biases that you think are better explained by the evolutionary lens, by the lens of humans trying to maximize their genetic fitness, than by the standard behavioral economics lens?

Maybe one is just the availability heuristic, where there's, of course, the idea that people judge probability based simply on how available recent examples are in their mind.


And in a world where you've got media trumpeting every small disaster and the like, it's a fairly flawed tool. If there's been a recent terrorist incident, people will judge the probability of terrorist incidents to be much higher than they actually are, or shark attacks or anything like that. And this is, I suppose, the sort of thing where in the modern world it backfires. But then go back into an uncertain environment, where you're faced with the same problem.


And in fact, you've only got a small sample of these events. And there you end up with a scenario where the availability heuristic actually becomes an optimal approach. In fact, I recall Tom Griffiths, when he spoke to you in one of your previous podcasts, actually talked about a paper where they worked this out; I forget the name of the co-author who actually did that work. But the idea is that in certain environments, that's the way to go.


It's not a case of, like, really trying to get a representative sample. It's accepting that you're only going to get a drip feed of certain examples, and you're going to have to make judgments based on them.


Right, right. You know, often when I read evolutionary psychologists or evolutionary economists talking about biases secretly being rational, they're talking about biases that serve kind of a signaling purpose. Like, people are overconfident, or they overestimate whatever good traits they have. Yes. Or the way they reach conclusions about political or economic or other ideological facts is biased by their preconceptions or their tribal affiliation or something like that. And the evolutionary psychologists say, actually, this is rational, because our goal is not to reach the truth in these cases.


It's to signal various things to other people: about our genetic fitness, or about our loyalty to the tribe, or things like that. Would you also agree with that explanation? And would you call those behaviors rational for the individual, or just rational for maximizing our ancestors' genetic fitness?


Yeah, that's probably the core question for a lot of these ideas. It's one thing to say, OK, there's a behavior that at some point in our evolutionary past maximized whatever objectives we were trying to achieve. But how does that actually line up now? Because the reality is that most of us don't walk around seeking to maximize the number of genetic offspring that we have. We actually have other objectives. And yet that's what a lot of these proximate objectives we pursue are about.


But I think you're right, the whole idea of, I guess, costly signalling theory is actually one of the really promising areas, asking, well, why are people behaving in this way? And suddenly it pulls you into areas where you go, OK, here's some evolutionary insight as to what's going on. So costly signalling is around pointing out that you're higher quality, relative to other people, and the neoclassical model doesn't have that relativity.


You start to see that through all the behavioral literature. So it starts to give a foundation for, well, why do you want to be relatively better? Well, you're competing for scarce mates, of which there's a constrained supply, or for other resources which are similarly scarce. Nowadays it's scarce property, scarce schools and the like. You enter into a real relative competition, and you need to either signal your quality to win them, or alternatively win a relative competition to get access to them.


Is there any other kind of mismatch between the evolutionary environment and the world we live in today that might make these evolved features of our cognition less advantageous for us now as individuals than they were for our genes?

Well, I actually really like the argument that Geoffrey Miller makes in his book Spent, which supports this idea that in a modern consumer society, we're picking up all of these different ways of signaling quality which are actually pretty crappy ways of signaling our quality.


So you go buy an iPhone or something like that, and it's a pretty conformist step. It's not signaling a lot of wealth or status, and yet a lot of people grab them thinking exactly that. You know, when the purchase is taken to signal things such as intelligence or creativity or how caring we are, there are probably far better ways of doing it than engaging in some consumer spending. Whether it's storytelling or, you know, making objects or engaging person to person, there are probably better, more effective signals we could be using, rather than, I guess, looking for the latest little present that we can buy.


Yeah, it's a classic case of marketers and the like looking for every sort of angle they can exploit to get us to buy their products. Right. And we're, of course, not particularly well adapted to that degree of onslaught. The same thing happens in food, of course: most food nowadays has been increasingly developed and tweaked towards something that's hugely appetizing for us. And a lot of people aren't particularly good at defending themselves against that temptation.


Right. You wrote an interesting blog post just a couple of months ago about cognitive biases, and you argued that naming biases, like the availability bias or loss aversion or whatnot, gives us the illusion of having explained the corresponding behaviors, but actually we haven't explained anything. Can you elaborate on that?


Yes, naming something does give you a little bit of knowledge. So naming, say, loss aversion does at least give you the bit of knowledge that people may weight losses more than they weight gains. But just knowing the name of it is a pretty weak or thin form of knowledge.


When you think about all the other questions and points you might want to know about it: why does loss aversion exist? What contexts is it most powerful in? How does it operate? What are the heuristics we use that ultimately lead to this loss aversion being shown? It's being able to answer a lot of questions like that that gets you to a point where you can go, OK, well, what contexts are we actually talking about?


And then, how can we design things so that people don't fall for it, if it's a problem in their decision making process? I'd even go as far as saying that sometimes naming something can actually cloak the reality of what's going on underneath. The one I think about a lot is overconfidence. And in fact, you had a great conversation with Don Moore on this very point previously, where you can break it down into at least three different conceptions of overconfidence. Overplacement, the idea that you're better than others. Overestimation.


So thinking you're better than you actually are. And overprecision, the idea that you're, I guess, more accurate than you are, so you might give a narrower 90 percent confidence interval than you really should. Then you think about those three, and it's not a case of people being pervasively overplacing, overestimating and overprecise. We tend to overestimate on hard, difficult tasks, but we tend to underestimate when it's easy. And conversely, we tend to overplace on the easy tasks and underplace on the harder ones.


So you actually have this pattern where, if you use the word overconfidence, you're actually getting both over- and under-confidence, depending on exactly what the task is. And it's only by getting past the name, thinking about the different ways we can describe this, and then in turn getting to that next level of questions about what contexts it actually applies in, that we get to a useful understanding, as opposed to just a label we can throw at poor decisions.


Right. In fact, I'm curious: to what extent do you think people are actually mistaken about the amount of explanatory power these cognitive bias names have? Like, on the one hand, maybe they're just saying, you know, the term loss aversion is pointing at a pattern that exists in the world, and so we can make predictions now that we know that this pattern is there. We're not claiming to have an explanation for why humans are set up such that they are loss averse.


We're just pointing out the pattern. That would be the non-mistaken world. And the more mistaken world would be one in which people are just kind of confused. Sort of like saying, well, the reason this ingredient makes you sleepy is because it possesses a soporific quality, where soporific just means a thing that makes you sleepy. So it has the ring of an explanation, but it doesn't really explain anything. And I've heard some convincing arguments that aspects of neuroscience are like this: they'll point to a part of the brain that lights up when someone's doing a certain kind of cognitive behaviour, and they'll say, oh, you know, the reason we do this behavior is because this part of the brain is active, but they're not really explaining anything. Which of those situations do you think we're in with respect to cognitive biases?


A bit of both. So, on loss aversion, quite often you'll hear people say, loss aversion? Well, it's a function of prospect theory. But in turn, prospect theory is just a cobbling together of a set of biases such as loss aversion. So there's a circularity there.


So people fall for that. But on that first point, around simply being able to label it, say OK, here's the general pattern, and make predictions: I think people overestimate the extent to which you can do that. So across the whole literature on loss aversion, and the endowment effect, which often gets wrapped into it, there's actually a lot of, I guess, context dependence, or at least results where you don't see that pattern.


And we're at a point now where there have recently been some papers, and I think this debate has been going for a while now, asking, you know, just how pervasive is loss aversion, really?


And you see this again and again in different pieces of the literature. Confirmation bias: I was recently reading, I guess, an argument there that it's not quite as pervasive and useful a concept as people think.


So I think it probably is a problem there.

Yeah. I mean, another reason to think that there might not be one hundred and sixty five cognitive biases is just that some of them might not be real. Like, they might be the result of p-hacking or publication bias or some other methodological problem. So I'm asking you to speculate here: if you had to guess, which cognitive biases do you think are most likely to be real, and which are most likely to not be real?


Well, in some ways, it comes back to my initial point.


In some ways, a lot of these biases don't exist because we're using the wrong reference point. So it's not so much a case of saying the phenomena aren't real, but more of asking: if you thought about them in a different way, would you still describe them that way? So, you know, for instance, I think the availability heuristic, representativeness, in fact most of the early work of Kahneman and Tversky, is pretty robust; most of it is highly replicated.


But ultimately, if you adopt a different model of decision making, and say our benchmark is no longer the model of rationality that came from economics, our benchmark is real-world decision making, then suddenly you may not be thinking about the availability heuristic anymore. You might think about it as some different cognitive tool, which may or may not work in other environments. So I think some of it simply won't survive a more theoretical reframing. And on that Wikipedia list, I think there are just a lot that are going to disappear.


Not so much a case of p-hacking, but a case of having thousands of researchers around the world running endless experiments and turning up neat things. And all it takes is a single paper or two, and you've got something you can throw onto the Wikipedia page yourself. So a lot of it is going to fall away from that angle.


The funny thing with a lot of them is that they even point in different directions to each other. I recently had someone talking about status quo bias and action bias. I don't think action bias is actually on the Wikipedia page, but it gets talked about a lot: this urge that people feel to take action. But of course, you've also got status quo bias, which is, you know, people wanting to stick to the status quo.


Well, which is which? In which environments does each actually apply? Is one of them operating in some contexts and the other in other contexts? Or, in the end, have we just got two results, each of which presents as the opposite effect?


Right. It's like having two opposing idioms that together encompass the whole space, like "opposites attract" and "birds of a feather flock together." And you can pull out whichever one explains whatever you see.


Indeed, indeed. You see a poor decision from a CEO, and if they chose to do something, well, they were overconfident; if they didn't make the call they should have, well, that was loss aversion. There's a bias for every situation.


Interesting. Shifting focus a little bit now: you had an article just earlier this month in which you expressed concern about whether the pendulum has swung too far in the direction of behavioral economics, or behavioral science, in policymaking, as opposed to economics. What is the problem you're concerned about there?


It's simply a case of focus and investment of energy. Right now, behavioral economics is a pretty sexy area, and has been for a few years. We're seeing a real proliferation of behavioural teams, generally branded as "behavioral economics" or "behavioural insights" teams, those being the main names given to them. But at the same time, a lot of the organisations that are really investing in that capability either don't have, or have much smaller, economics capabilities.


And I suppose my complaint, if you want to call it a complaint, was really around the question of: with the sorts of problems these organisations are facing, how many of them are actually a pure economic problem rather than a behavioral problem? Are we leaving a really important part of the toolkit on the bench? I should say it's always been a little broader than that: I think for these sorts of problems we should be bringing in highly multidisciplinary teams to try and solve them.


So in many cases, it won't be just behavioral economics or economics; it might be from other areas. It could be from various areas of psychology we haven't exploited yet; it might be anthropology; it might be other fields. Now, to put a bit of flesh on why I think it's a problem: many of the issues these behavioural teams are getting thrown at are fundamentally economic problems.


So on one hand, it's quite often the case that they're trying to deal with what are effectively incentive problems. People are, say, making poor decisions in a business, and yet the real reason they're making those poor decisions is basically because they're being paid to make them. They're selling products to customers they shouldn't be selling to. But why are they doing that? Because they're paid to sell more products.


And you've got behavioural teams being sent in to try and, in some ways, push against that. They're sometimes not equipped, or not given license, to ask: well, what's the real problem we have here? If we're going to get to a good solution, what should we really do? We see a bit of a similar pattern, I think, in policymaking today, where maybe it's partly a question of political feasibility, or partly a question of people having the confidence and courage to put forward options.


And there, the behavioral option is the easy one. So putting some notes, or some comparisons with their neighbors, onto people's power bills in the hope they might reduce power usage is a lot more palatable and easy than asking: okay, what could a price on carbon do to get that reduction, or an increase in the tax?


So quite often, I think, the more powerful solution just isn't getting a look in, because the behavioural-insights-style solutions are both sexier and also easier.


It is a dangerous combo.


Indeed, indeed. And maybe even here it's a case of availability. There are endless presentations now around their successes, and we shouldn't play those down: the way people can tweak a letter to increase tax collections, or the like. Really cool ideas. But the less sexy sort of basic economics just doesn't get the same degree of play nowadays.


Would all of these kinds of behavioural-insights-style interventions fall under the heading of nudges, or is that just a subset of what they're doing? Well, I think that's partly the issue: a lot of teams, including some I see, view their remit as designing nudges rather than bringing a broader toolkit to the table.


So there was a debate involving George Loewenstein, and I'm really forgetting who co-authored the paper with him, debating Richard Thaler and asking this question about the policy implications of behavioral economics simply being too narrow and focused on nudges. And Thaler comes back and goes: oh, well, look, this is, I guess, a ridiculous complaint; in the book, he and Sunstein never said that nudges were a panacea, the only application. But I think when it comes to practical application, quite often nudges are seen as the panacea, or the whole toolkit that teams can pull on.


I see. You mentioned that one alternate angle you'd considered for this article was the question of whether behavioral interventions that look impressive in isolation are less so if we consider the system-wide effects. What did you mean by that? What would that article have said?


Well, there are two layers to this, and one layer people know about but haven't really looked at closely. That's, I suppose, asking: how does the intervention affect someone's behavior at a broad level? So think about putting comparisons with people's neighbors on their power bills to see if they're going to reduce energy usage.


The experiments are generally run by looking at what happens to the power bills of those who receive the comparison versus those who didn't. But the bigger question, and really the objective, is: how do we reduce overall energy usage? So after these people reduce their power usage on one front, are they then, say, morally licensed, or simply more flush with cash, such that they then do other activities that increase their energy usage?


That's the first layer.


I think a lot of people in behavioral economics and behavioral science have seen that; it's probably still a somewhat open question. But then there's the next layer: what's the actual effect on people's mental health, or their happiness, of being subject to these comparisons?


So in all these experiments, they'll send out these letters, and a five to ten percent shift in behavior is counted a real success; that is, a really small proportion of the population changes behavior. But what about those who don't change their behavior? Are they better off or worse off because of those comparisons? And you get into some areas, such as debt collection or tax collection, where people may have really tough constraints and reasons why they haven't paid.


Does this social comparison, essentially saying, "by the way, you're a deadbeat who isn't paying your taxes," affect their mental health? One academic at a recent presentation suggested this is the sort of thing that could cause serious mental health issues: constantly comparing people to others when they may not have the ability to respond and change their position.


So that's a critique of nudging still within a utilitarian framework: what are the consequences for people's well-being?


Then there's a different kind of critique that steps outside the utilitarian framework and says: do we have a right to nudge people in this way, even if it's very likely going to make them better off? Are we depriving them of some autonomy? And I think a lot of lay people, not just ethical philosophers, feel a bit of discomfort around the idea of nudges being in this kind of gray area: manipulating them subliminally, or forcing their hand in a way that's not visible to them.


Do you have any sympathy for that concern?


I do at a general level.


So there's a lot to be said for the idea of autonomy, where people are deliberating and coming to decisions under their own power rather than being, I guess, manipulated towards them, even if they retain the freedom to move away. So at a general level, it does concern me.


I think what's interesting is when you get down to each individual nudge. Cass Sunstein makes this argument in response to some of these claims: okay, but when you go nudge by nudge, a lot of them don't seem as problematic as that overarching angle would suggest. So I do see why people feel uncomfortable; I feel it. But at the same time, with a lot of nudges, at the end I go: okay, it's probably not that big a problem.


And there are various reasons for that. Sometimes you can't avoid having a frame, or an ordering, or a presentation, whatever it may be. So you're always going to be subject to some form of influence.


So in those cases we go: okay, it's probably not so bad that someone chooses a non-random ordering, say. It's not as coercive or, I guess, autonomy-destroying as the idea of deliberately manipulating people towards something that, in a more neutral frame, they wouldn't have chosen beforehand.


I mean, are there any kinds of nudges that you would have qualms about?


So the ones I have the biggest qualms about are probably around the use of opt-in and opt-out choices, because defaults are probably one of the most powerful forms of nudging.


And in fact, when you ask what's really going to stand up to the test of time, the idea that people don't tend to change a default, whichever way the opt-in or opt-out choice is designed, is one of the more persistent findings. But there, of course, you really do have this case where people may be placed into an option which, if they were engaged and asked to rationally think about it, they wouldn't choose.


So think about organ donation. They have this system in, I guess, a lot of Europe where you're opted in as an organ donor, and being opted in means that if you're a citizen of that country and you haven't gone and lodged a form at the relevant department, you are considered an organ donor. And to me that is fairly pernicious: in some ways it's having people labeled as something they've probably never given any thought to.


In fact, even Richard Thaler has said that, as far as he's concerned, there's a far better option here: active choice, where people are made to make a choice at certain points in time.


Yeah. I mean, that seems like even more autonomy than the default, where it's opt-out or opt-in but people aren't even aware that there's a choice. That seems like it would strictly improve people's autonomy and also probably increase the number of organ donors. So why not do that? Indeed.


Indeed. And also, coming back to my comments on social norms: I think social norms are an interesting case, because on one hand there's this question of possible negative effects on some people that we're not currently measuring. But there are also lots of interesting questions here about how, effectively, you're trying to manipulate people not for their own good, even though they might get a bit of benefit. It really is a case of the government wanting a certain result and trying to shift people one way or the other.


And then there's that line of whether this is for someone's own good "as judged by themselves"; that's, I guess, the marker that Thaler and Sunstein set down for a good nudge.


But in application, is this really for the good of the government? Are they really the interested party here, the ones trying to get the result? Right.


I think in one of your most recent blog posts, you referred to someone else's sort of theoretical critique of nudging. I think it was Sugden; I might be mispronouncing his name. He has a critique, which I didn't fully understand, maybe you can explain it better, around the idea that nudging assumes these latent preferences that people have: these true preferences which the nudges are helping them realize, and which they otherwise would not be able to realize because they have this shell of irrationality around those latent true preferences.


And he argues that that's not really a good way to model people's psychology. Can you explain it better than I can?


Yeah, well, in some ways his complaint comes back to this idea we talked about a bit earlier: economists and behavioral economists are really reluctant to let go of this rational, I guess expected-utility, model of humans.


And so with most of the theoretical approaches in behavioral economics, when they ask what we should do, they end up in a world where they effectively assume that inside we actually have this rational person who has the full set of preferences across all the choices, but with a psychological shell around it, a lack of attention, lack of computational power, lack of willpower, that leads to those inner preferences not being realised.


And so a lot of nudging is framed as an attempt to allow people to realise those preferences.


And so Sugden's critique of this is really the idea that these latent preferences basically don't exist; there is no internal rational agent coming up with them. And he uses an example: suppose someone is going into a cafeteria, again drawing on Thaler and Sunstein's work, where the ordering of the food can affect their choices.


So you put the cake at the front and they'll buy more cake; put the salad at the front and they'll buy more salad. And he says: let's imagine someone going in there who actually doesn't have a preference between cake and salad at large. They're indifferent, but they simply feel this urge to eat whatever is put in front of them.


And he goes: okay, now let's imagine another person just like them, but who has none of these constraints. They have no willpower constraints, no constraints on their self-control.


They have full computational power; they can calculate anything. Let's suppose that person goes in with the same preferences. Since they don't have any initial instinct of "I want cake" or "I want salad," what's actually going to happen when they walk into that cafeteria is that they're going to have the same feeling as the original person. So it's no failing of their rationality that leads them to go:


"I'm going to eat the cake at the front." It's simply the fact that they don't have this well-formed preference beforehand.


And that example sounds a little bit nit-picky, because I got to the end of it myself and went: is this really important?


Like, in the end, is this really a big problem?


And the more I think about it, the more sympathetic I am to it, because although it's a trivial example, it points to the fact that we probably just don't have really well-formed preferences internally; there is no psychological mechanism by which they form. So it's clearly a flaw in the model to try to create this idea that we make decisions as if we're a rational person inside a psychological shell. That's clearly not the case.


And there are probably a lot of more serious decisions where we're not going to have this nice, stable internal preference. There's another paper where he talks about someone going in to get surgery, again debating Cass Sunstein. He says: okay, let's imagine this person is in there and gets presented the surgery options in a gain frame. So they tell him there's a ninety-five percent chance of survival.


And suppose it is the right choice. So rather than saying there's a five percent chance of death, they give him the ninety-five percent chance of survival. He goes: yes, I'm happy, I'm going to have the surgery. But then later that day he gets shown the loss frame, a different framing, and suddenly it's: oh, I'm not happy about this anymore.


And which of those is actually the true preference? I think it's probably the case that there isn't some true preference you can tease out of this person. Literally every time they come to this choice, they're probably influenced by the framing of it. So as to whether this person wants the surgery or not: there's no truly rational choice inside that person, and you'll never be able to really pin it down.


And so in some ways we can't say we're nudging "as judged by themselves," by this inner rational person; we're going to have to find some other way of justifying the nudges that we apply. I see.


So if we can't justify it by saying it's just realising their own true, deep-down preferences, then we would have to justify it by saying we're maximizing their well-being, or something like that. Yeah, yeah.


Effectively, we'll have to adopt a somewhat more paternalistic sort of reasoning behind what we do.


So: we believe this is in their best interest, based on the fact that most people want to, well, maximize their chances of survival, or whatever it might be.


Yeah. I wonder if there's some kind of thing you could put on your official driver's license, or whatever government ID, that's like: "I consent to being subject to these kinds of nudges." And then governments and companies would be allowed to nudge you, since you sort of opted in to nudging in general.


Yeah, I agree, because I think for me, one of the easiest ways to overcome a lot of these problems is simply through the idea of consent.


Though I thought the problem might be that if you ask for consent in any individual case, it might ruin the effect of the nudge; hence something like a broad consent.


Yeah. Yeah, possibly. Although there is a bit of evidence that telling people about a nudge doesn't actually diminish its effect as much as you might think. So there is research on that point, which is hopeful. But you're probably right, and perhaps some of this is going to be consenting at a fairly broad level. It could be the case, perhaps with technology now, that your Apple device could say to you: okay, do you want to sign up?


"We're going to nudge you towards purchasing our products, or better music, or whatever it might be. Do you agree to us trying to work out what your best interests are and guiding you towards them? Or would you rather have a service where we haven't tried to make that judgment?"


Right. Although I guess they could always make that question opt-out instead of opt-in, and then they would be nudging you into accepting their nudges. Indeed, an aggressive one.


So, you know, in a recent episode with Chris Auld, we had kind of a parallel conversation, except we were discussing critiques of economics in general, as opposed to behavioral economics and behavioral science. In that episode, we started out complaining about bad critiques of economics and then later transitioned to: well, are there any good critiques? In this episode, we've been talking about behavioral science, and we've been talking mostly about good critiques, critiques that you hold or that you sympathize with: behavioral insights teams are so trendy in government and have neglected standard economic approaches in favor of these sexy and easy nudging approaches; behavioral economics has been cataloging biases without attempting to explain them; things like that. Are there any, in your opinion, common bad critiques of behavioral economics, areas where it's been strawmanned or misunderstood?


You know, I think a lot of the critiques, actually, the original critiques coming out of economics itself, are probably not particularly good.


And I think of the attempt to extend the defense of, I guess, rational economics that you find in the book by David Levine, "Is Behavioral Economics Doomed?" On one hand, I take a lot out of that book because, again, relating to my earlier point, there is actually a lot of power in that rational-actor model at times. But at so many points you have to concede defeat and really say there's something going on here that the model doesn't match.


The example of costless opt-in and opt-out choices is a great one: at that point, you really have to say, okay, we've got the proof in the pudding in the changes in retirement savings through the Save More Tomorrow plan. You'd think they'd concede that there's something there, but they don't want to let go.


So they just won't concede that it can't possibly be consistent with a rational-actor model, you're saying. Indeed. Yeah, indeed.


And even Gerd Gigerenzer goes there at times. Back in those nineteen ninety-four papers debating Kahneman and Tversky, in the back and forth, he goes: well, look, it can be explained this way. And then Kahneman and Tversky come back and go: actually, we looked at that a few times and it didn't quite work out that way. And he just didn't want to let go.


And I think ultimately you have to accept that there are clearly some deviations, clearly some problems. The interesting point of debate, though, is where exactly the line should be drawn and how exactly we should think about those problems. That's a far more fruitful area to discuss.


Yeah. Well, that's probably a good place to wrap up, Jason. As you know, if you're a fan of the podcast, I like to ask my guests at the end of the episode to nominate a resource, a book or blog or article, that they have some substantial disagreement with, but that they nevertheless recommend, in the sense of "this is worth engaging with and thinking about." What would you nominate?


Yes, I'm going to nominate Angela Duckworth's work on grit. And there's actually a broader group whose work I've really been engaging with on this point; I could have named James Heckman or Carol Dweck. But it's really around this idea of: one, to what extent do these traits, in Duckworth's case grit, affect success, or how much do they really affect success? And in turn, how malleable are they? So I probably have a couple of points where I really, I guess, grapple with her work: around the question of how important grit really is for success (is it just conscientiousness rebadged, or is there something genuine here?), and similarly, to what extent we can change someone's grit through training. Looking through a lot of the literature,


I struggle to see the power of these interventions. But at the same time, I find the way she engages with her critics really productive. I quite like it when she responds to critiques in news articles, or, in fact, when she was pressed on it in an episode of EconTalk where she was interviewed. Sometimes I think she's a real model of how to deal with critique, and I find her work in general challenging.


A lot of my pre-existing biases come partly from the evolutionary angle of some of my previous thinking around how malleable people are: can we really change people and their outcomes in a substantial way, particularly through some fairly, I guess, smallish and costless interventions? Can that really make a big difference?


Right. Interesting. That's the perfect recommendation for me in particular, because I get so excited when I find people who are good at responding to criticisms of their work in a fair and nuanced way. That's like a red flag for me, sorry, a red flag in the sense of waving a red flag in front of a bull, to go check out her responses to critics. Maybe we can link to that episode of EconTalk with her.


If you have any other suggestions for examples of Angela Duckworth handling criticism or disagreement, you can pass them on to me and we'll link to those on the podcast site as well, and then maybe to one of her books on grit. Indeed. OK, cool. So we'll link to those, and to your excellent blog, and to several of the papers that came up in conversation, like the 1994 Kahneman and Gigerenzer debate. And I think that's it. Jason, thank you so much for coming on the show.


This was an interesting conversation.


It's been a real pleasure. Thanks.


This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.