[00:00:00]

Today's episode of Rationally Speaking is sponsored by GiveWell. They're dedicated to finding outstanding charities and publishing their full analysis to help donors decide where to give. They do rigorous research to quantify how much good a given charity does. For example, how many lives does it save, or how much does it reduce poverty, per dollar donated? You can read all about their research or just check out their short list of top recommended evidence-based charities to maximize the amount of good that your donations can do.

[00:00:28]

It's free and available to everyone online. Check them out at GiveWell.org.

[00:00:47]

Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and with me is today's guest, Douglas Hubbard. Doug is a consultant and author of several books, including one of my favorites, How to Measure Anything: Finding the Value of Intangibles in Business. Doug, welcome to the show. Thanks for having me, Julia. So one of the reasons I think that I've ended up recommending your book to many people over the years is that it's this neat hybrid between being, on the one hand, a very practical, sort of advice-driven book, and that's sort of how it's packaged.

[00:01:30]

But on the other hand, underlying all that advice is this idea that I think is actually very important and quite deep once you get it, which is that you can measure anything. And this is an idea that is kind of counterintuitive and that, as you describe in the book, a lot of people sort of instinctively resist. And one of the things I'm hoping to focus on today with you is: why is that? Why do we resist the idea that we can measure things and quantify things, quantify our uncertainty? And what are some of the ways to do that?

[00:02:06]

So, let's jump in. What are some examples of the kinds of things that people tend to say, oh, that's impossible to quantify, to put a number on, or to measure, that you would disagree with?

[00:02:20]

Sure. Well, I can give you some real examples we've actually done. People have asked us to measure the value of information governance, or drought resilience in the Horn of Africa, or the impact of dams on the Mekong River, or the effect of some new environmental policy, or how much fuel the Marines are going to use on the battlefield, forecast 60 days in advance. Now, some of those seem like they have better units of measure. I mean, obviously, fuel we can measure in gallons.

[00:02:57]

Yeah, that one seems, as an outsider, like, oh, sure, you should be able to measure that, right? Yeah, it's hard, I mean, there's a lot of uncertainty to it. But things like drought resilience or collaboration or governance, those are much more ambiguous terms. And those are the sorts of things people really think of when they say that's immeasurable, not just hard to measure, but immeasurable. And there are several examples like that.

[00:03:25]

Even sometimes they will include risk in that, because it seems ambiguous to them. Right. Even though there are whole industries around measuring risk. But, yes, there are quite a few different examples there.

[00:03:39]

Just regarding the risk question, you had this funny anecdote in the book about someone, I guess it was an actuary, and you were talking to him about risk. And he insisted that some risk couldn't be quantified. And you were like, isn't that what you literally do for your career? Right.

[00:03:56]

No, actually, I remember it was someone high up in the IT organization, not quite the CIO, but a direct report, I think, who said IT is impossible to measure because it is risky and there's no way to measure risk. And I told him, I said, you work for an insurance company!

[00:04:16]

OK, that's one tiny bit more forgivable, given that he wasn't an actuary.

[00:04:23]

But it's interesting, because a lot of my original clients 20 years ago were insurance companies, and you would think that maybe they'd be ahead of it in other areas in terms of measuring value and effectiveness and measuring risk and so forth. But in fact, no, they were coming at it just as new as anybody else.

[00:04:49]

One of the themes that I'm picking up on is that there are these two big reasons why people feel like some risk or some quantity can't be measured. One is just uncertainty, an unwillingness to, like, make a forecast about something that isn't certain. And the other is ambiguity, right, where governance is this kind of nebulous thing, and so we can't put a number on a nebulous thing. Does that seem right? Yeah.

[00:05:18]

And so I talk about three reasons why things ever seem to be immeasurable, and all three are illusions. I call them C, O, M: concept, object, and method. So if you want a mnemonic, you can just think of "dot com." Concept has to do with the definition of measurement itself. People might misunderstand that measurement is merely a reduction in uncertainty, quantitatively expressed, based on observations. It's not an exact number. So sometimes somebody will say, I can't put a number on that.

[00:05:52]

Well, that's not what measurement means in the scientific sense, and it's not what it really means in the practical decision-making sense. It may mean that in accounting; that's just about the only area where it means that. Even in engineering, they use words like tolerance to represent room for uncertainty. What does tolerance mean here?

[00:06:13]

Oh, tolerance is the way an engineer describes uncertainty. So for the tolerance on the diameter of a bolt, they might say it has to be one inch in diameter with a tolerance of, say, 0.001 inches. Right. That means that's how much variation there can be in a manufacturing process or something. Right. That's what they call tolerance. And so that's room for uncertainty, for variation, even in what seems like an engineering context.

[00:06:45]

But for most people, certainly all scientists, and in practical decision making where you make decisions under uncertainty, measurement really means a quantitatively expressed reduction in uncertainty based on observation. So not necessarily an elimination of uncertainty, rarely an elimination. If that were a prerequisite, then most things in physics couldn't be measured. So, like, if I'm trying to estimate some property of a population, like the average height or something, and I take a sample of 10 people from the population and measure their height, then I have a better guess about the average height of the whole population, like the whole state, than before.

[00:07:28]

But still, I can't be certain about what the average height is. Yeah, that's right.

[00:07:32]

Until you do a complete census, you're not going to know what the actual average height is. And it has to be an instantaneous census, because some of them are kids and they're growing while you're doing the sampling. So, yeah, unless you're talking about an instantaneous census, you won't know the exact average, but you can estimate it better. In fact, you might need surprisingly few samples to drastically reduce uncertainty. In fact, that's the point of one of the other items in this list.
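[Editor's note: to make the point about small samples concrete, here is a minimal sketch, not from the episode, of how a sample of just 10 heights narrows the plausible range for a population average, using a t-based 90% confidence interval. The height values are made up purely for illustration.]

```python
import math
import statistics

# Hypothetical sample of 10 adult heights in centimeters (made-up numbers).
sample = [162, 175, 158, 170, 181, 167, 173, 165, 178, 169]

n = len(sample)
mean = statistics.mean(sample)
sd = statistics.stdev(sample)          # sample standard deviation
std_err = sd / math.sqrt(n)            # standard error of the mean

# t multiplier for a 90% interval with n - 1 = 9 degrees of freedom (from a t table).
t_90 = 1.833

lower, upper = mean - t_90 * std_err, mean + t_90 * std_err
print(f"Sample mean: {mean:.1f} cm")
print(f"90% confidence interval for the population mean: {lower:.1f} to {upper:.1f} cm")
```

Even 10 observations pull the estimate from "somewhere between very short and very tall" down to a range a few centimeters wide, which is the "reduction in uncertainty" being described here.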

[00:07:58]

The second item in the list is the object of measurement. The object of measurement has to do with defining the thing that you're trying to measure.

[00:08:05]

That's where the ambiguity comes in.

[00:08:07]

That's where the ambiguity comes in. So name something, anything, that seems impossible to measure. What do you think it is?

[00:08:15]

How about quality of life? Quality of life?

[00:08:19]

Fantastic. So give me examples of what you see when you see better quality of life. You must have seen it vary, right? Yeah, yeah, you've seen variations in quality of life, so what did you see when you saw those variations? I guess I see people's moods varying, like people will seem happier or more enthusiastic or energetic sometimes than others. Specific expressions.

[00:08:56]

Right. The frequency of those specific expressions, in other words. Yeah, like you could sample.

[00:09:01]

If I could just watch people, you know, on a surveillance camera over, you know, several days of their life, I could theoretically count the number of times they smile or something. Sure. And that would be, like, an indirect way to get at their quality of life. But surely there would be some people who are happy but just never smile.

[00:09:21]

Yeah, sure. Well, how would you have known that? How would you know that some people are happier than others and just don't smile?

[00:09:30]

Oh, well, I mean, there are people I know well. Like, I have a friend who I thought hated me for years, literally, because he just never smiled when we hung out. And yet somehow, inexplicably, he kept wanting to hang out with me. And eventually friends of his were like, you realize you need to smile when you like people, right? And he was like, oh. So he learned to smile. So you could have an exception.

[00:09:50]

But that's, like, one end of a spectrum where I think there is a fair amount of variation. But if you know someone well, they can talk to you about how much they enjoy their life or not. Sure. Yeah.

[00:10:03]

I mean, one indicator of quality of life is what people are willing to say. Right. That's not zero information. Even though you can imagine situations where someone might have a reason to be dishonest, generally, if somebody is telling you that their quality of life is terrible, you should probably take their word for it. It's probably more likely that they're not happy with it. Right.

[00:10:28]

That's true. That is more direct. It's funny, when I think about ways to measure things, I think my instinct is maybe to get too clever and try to find clever indirect measurements, instead of... sometimes you just want to ask people. Those aren't bad. Looking for the frequency of specific expressions is not a bad thing; that's just coming up with the right sampling method. But I think often when people talk about quality of life, they also mean other indicators that should lead to the perception of better quality of life, like low crime rates and decent income, longevity, things like this.

[00:11:09]

So all of those are indicators of something else. And often what we're asking people to do is unpack all of these things that they had under this one big, ambiguous umbrella. All right. So when people say, I want to measure innovation, or I want to measure collaboration, I ask them for examples of it. What do you see when you see more of it? They definitely identify multiple things. That's the source of the ambiguity: they meant multiple things to begin with.

[00:11:36]

If they only ever meant exactly one thing, they probably would have used a different word, I guess. And then finally, the method of measurement. That's the one you alluded to earlier, understanding how sampling works. People are often surprised what inferences you can make from relatively small samples when you do the math. Three weeks ago, I was at a symposium, the Symposium on Statistical Inference, organized by the ASA, the American Statistical Association. I was one of the organizing committee members.

[00:12:07]

I can tell you that they're dealing with a variety of misconceptions about statistics, even among published scientists, who get certain things wrong when they do their work. They're consistently misinterpreting certain key concepts. Like, well, have you ever heard the phrase, and I'm not saying that published scientists will say some of these things, but you've heard people say this before, have you heard "statistically significant sample size"? Are you saying that? I'm not sure I've actually heard that phrase, but I guess they must mean a sample...

[00:12:44]

A sample size such that their results would be statistically significant, just based on the sample size? That's a great point. I realize from your web page that you've got a degree in statistics from Columbia.

[00:12:54]

So I'm not exactly your target scientist.

[00:12:58]

So you've probably heard something like this. Most people do use the phrase; they'll object to a measurement saying that's not a statistically significant sample size. Well, there is no such thing, and I explain that to people. I say, there is no universal magic number of samples that you have to get to, where if you're one short of it, you can make no inferences at all, and once you reach it, all of a sudden you can start making inferences. There is no such number.

[00:13:27]

Right, you can't tell by that alone. So I explain statistical significance to them, and how you actually compute it, and how it depends on more than just the sample size. And then I explain that it probably doesn't even really mean what you think it means, and you want something else anyway, because it doesn't tell you whether or not you learned anything. Right, you could have a lot of uncertainty reduction and not have statistical significance, and you could have no uncertainty reduction and have statistical significance.
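[Editor's note: as an illustration of the point that significance depends on more than sample size, here is a small sketch, not from the episode, comparing two hypothetical studies with the same sample size and the same observed effect but different variability, using an ordinary z-test. All numbers are invented.]

```python
import math

def two_sided_p_from_z(z: float) -> float:
    """Two-sided p-value for a z statistic, via the normal CDF."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def z_statistic(effect: float, sd: float, n: int) -> float:
    """z statistic for an observed mean difference `effect` with known sd."""
    return effect / (sd / math.sqrt(n))

# Same sample size, same observed effect, different spread in the data.
n = 100
for sd in (5.0, 50.0):
    z = z_statistic(effect=2.0, sd=sd, n=n)
    print(f"n={n}, sd={sd:>4}: z={z:.2f}, p={two_sided_p_from_z(z):.4f}")
```

The first case is highly significant and the second is nowhere close, with identical sample sizes, which is why "statistically significant sample size" on its own is not a meaningful phrase.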

[00:13:56]

So those three reasons together are the reasons why people might think that something is immeasurable. They're all three illusions; they always were. I'm not sure if people come to the conclusion that things are immeasurable because they believe those things in advance, or if they constructed those beliefs as kind of a defense mechanism for not being able to measure things.

[00:14:19]

Oh, interesting. I'd come up with a list of possible reasons why people resist measurement, but that hadn't been on it. Like, compensating for inability, or perceived inability, wasn't on my list. That actually turns out to be a big one.

[00:14:35]

We did a survey for my fourth book. My fourth book was a spin-off of my first book; it was called How to Measure Anything in Cybersecurity Risk. Our plan is to do a few spin-off books like that, like how to measure anything in health care, how to measure anything in project management, etc. So we wrote that one, it came out a year ago, and we did a survey, my co-author and I. It was the first book I co-authored.

[00:14:58]

So I had to get a real cybersecurity person on there, not just a third-party quant guy who's trying to give his opinion about cybersecurity risk. He's a real cybersecurity risk guy. And Richard Knight conducted this survey; it had one hundred and seventy-three participants from across many parts of the field of cybersecurity. It was a rather large survey, and in it were questions regarding opinions and attitudes toward quantitative methods and risk assessment in cybersecurity.

[00:15:34]

And then there were also ten questions that had to do with statistical literacy. What we found, and this may not be surprising, is that the people with higher statistical literacy tended to be much more accepting of quantitative methods and much more excited about the use of them, and the ones who were much more resistant and skeptical tended to score much lower in statistical literacy. But it was actually more specific than that. On all of the statistical literacy questions, one of the choices was "I don't know."

[00:16:03]

And the people who said "I don't know" a lot weren't necessarily the ones that were resisting the use of quantitative methods. It was the ones who thought they did know and were wrong. So it's not just the lack of knowledge; it's profound misconceptions about statistics that keep people from actually using it. In fact, there's been a lot of research on this. Daniel Kahneman, you know him, he won the Nobel Prize in economics in 2002, I think it was.

[00:16:34]

And he was a psychologist. He said he never took an economics course in his life, by the way, despite having won the Nobel Prize in economics.

[00:16:43]

Yeah, exactly. But I interviewed him for my second book, so I was talking to him and we talked about a lot of different areas of research that he was working in. But one of the papers that I studied up on in advance of my interview with him was one about inferences from samples, even by trained scientists as well as naive subjects. He called them naive subjects, non-scientists, I guess. And his point was that there are fundamental misconceptions about sampling methods, random sampling, and that everybody gets this wrong, and it has a big impact on actual research.

[00:17:23]

And so he surveyed a bunch of published scientists, and people who would go on to publish, students and so forth. And the fact is, there are profound, persistent misconceptions about how sampling actually works and what it tells us. And what people do is they kind of remember some things and they'll throw out phrases like, that's not statistically significant, and they didn't really do any math to make that claim. It's not like they did a calculation and then concluded that. It was just a fancy way to say, I disbelieve that result.

[00:17:56]

Yeah. I disbelieve it, and I'm telling you that I vaguely remember some statistics.

[00:18:01]

Right, those are the two things being conveyed. Depending on who they're talking to, they will also convey that they didn't remember any of it correctly. But they'll also say something like, well, correlation is not evidence of causation. Right. And I'll say, well, actually, that's not quite true. Correlation isn't proof of it, but I can show you a Bayesian proof that says it is evidence of it. Yeah. And I show that in the third edition of my first book, a Bayesian proof for it.
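[Editor's note: the shape of the Bayesian argument alluded to here can be sketched with a toy calculation, not Hubbard's own, with all probabilities invented for illustration: if correlation is more likely when a causal link exists than when it doesn't, then observing a correlation must raise the probability of causation, even though it falls well short of proof.]

```python
# Toy Bayesian update: does observing a correlation raise P(causation)?
# All numbers are invented purely to illustrate the structure of the argument.

p_causation = 0.10                  # prior probability of a causal link
p_corr_given_causation = 0.90       # causation usually produces correlation
p_corr_given_no_causation = 0.20    # correlation can also arise by chance or confounding

# Total probability of observing a correlation.
p_corr = (p_corr_given_causation * p_causation
          + p_corr_given_no_causation * (1 - p_causation))

# Bayes' rule.
posterior = p_corr_given_causation * p_causation / p_corr

print(f"Prior P(causation):          {p_causation:.2f}")
print(f"Posterior after correlation: {posterior:.2f}")  # ~0.33: higher, but not proof
```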

[00:18:32]

So, I mean, things like that, people are just winging it all the time. They'll say, well, you know, there's this potential bias in the survey, and because this bias exists, that means that no inference can be made. I recently came across someone who was talking about my calibration training, where we train people to subjectively assess probabilities, and said, well, if you haven't broken this down into age groups, into how well people do by age group, you can't make any extrapolations from this.

[00:18:59]

I said, well, we don't ask people their age when we test them. I'm telling you the results of the tests as they are produced. This population tends to represent the population of people that make up my clients, and that's who we're forecasting for. I said, are you saying that if there is some variation, even randomly distributed variation, in the population, then unless we account for all possible variations, you can't make inferences? He said, no, you can't.

[00:19:27]

I said, well, then all science is wrong. I said, no controlled experiment in the world controls for every varying factor. He said, you misunderstand how it works. I said, no, you just have this wrong.

[00:19:42]

That's a particularly strong example. But I think you could say the same thing about the use of really any experiment that wasn't done under the exact same conditions.

[00:19:54]

Yeah, I mean, it is true that sometimes you have to be very careful with experiments and apply the exact same conditions to get the same outcomes.

[00:20:04]

But there's always a little bit of inference or extrapolation you're doing when you generalize the results of any one experiment. And sometimes it's a very reasonable extrapolation, where you should expect the results to carry over, and sometimes it's not a very reasonable extrapolation. And it actually seems a little difficult to me to put a principle on when it's fair to extrapolate from a study.

[00:20:25]

I think the problem that people run into, though, is that they hear or read about situations like that, where it was very difficult to replicate something. This actually happened once with two labs: one lab was trying to replicate the results of some life sciences study from another lab, and one of them used a different stirring method for a solution than the other, and that actually changed the result.

[00:20:53]

But people conclude from that that, therefore, unless you do all these things perfectly, which is extremely difficult to pull off, you can make no inference whatsoever from observations. Well, that's not how you live your life. What are you talking about? Of course you live your life making inferences from observations. If you can't make inferences using the scientific method and statistical inference, well, then how are you doing it with just your life observations? Because you're doing that with selective recall.

[00:21:25]

Yeah. And flawed inferences. Right.

[00:21:27]

So I was once teaching a class on exactly this, calibration. It was just sort of estimating, trying to quantify your own uncertainty, put a probability on your beliefs or your predictions. And someone in the class just kept insisting that you can't know what the right probability is. And I kept trying to get him into the mindset of how he actually makes decisions in real life. And I'd be like, well, you know, let's say you buy a sandwich and you eat the sandwich.

[00:21:58]

If you eat it, that implies that you probably put a very low probability on its being poisoned. And his response was, no, no, no, I'm not worried about it being poisoned, but there's no way to know the probability of it being poisoned. The fact that he made that distinction suggested to me, and I've seen stuff like this many times, he's just one example, that people have this difference.

[00:22:21]

They have this compartment that they put anything quantitative in, where there's this super high standard and you're not allowed to make any estimate unless it's completely rock solid, whereas in your day-to-day life, you just do whatever seems sensible to you. And that's just a difference in magisteria or something.

[00:22:42]

Well, see, the way to refute that is empirically, with studies that actually show that if you tracked all the times that, let's say, meteorologists said that precipitation was 80 percent likely, there actually was precipitation about 80 percent of the time. And of all the times they said it was 90 percent likely, they were right about 90 percent of the time. So, yeah. There was another researcher, Paul Meehl, that I cite a lot in my research.

[00:23:13]

Since the nineteen fifties, he was gathering studies and meta-studies comparing human subject matter experts in a variety of fields to relatively simple statistical models. And these statistical models were consistently outperforming the human experts in all these fields: prognoses of liver disease, outcomes of sporting events, which small businesses were more likely to fail, et cetera. In all these areas, the humans weren't doing as well as the statistical models. So for the claim that we don't know an exact probability and therefore we can't put a probability on something, I say, well, if that were true, how come probabilistic models do better than you?

[00:23:56]

You're holding up an incorrect standard, right? As you said, they put a different standard on anything quantitative than they do on their own subjective decision making. The fact is, their subjective decision making is routinely outperformed by statistical models in areas where they would have insisted that only a human could possibly understand the problem. I did a project once in the movie industry, forecasting the box office receipts for new movies. So in other words, you have a movie script, you have a description of a movie project or a proposal for it, and you're going to investors.

[00:24:31]

Right. And these investors want to make good bets, and they have to look at a lot of proposed movie projects. They read the script, sometimes they know who the actors are or the director, et cetera, and they have to make a decision. Well, the people who do this are called script readers. They read the script on behalf of the investors and they make an overall appraisal, an assessment of the viability of this movie project.

[00:24:59]

Well, they were convinced that there was no way you could quantify their sophisticated judgment process, where they consider, according to their words, hundreds of variables in a holistic, artistic network. They really had a very fancy, highfalutin image of their mental processes, far beyond what most people would think they're capable of doing. I did an analysis, a regression analysis, on the last three hundred and ten movie projects where they made an assessment and somebody eventually made the movie, though maybe not that particular group of investors.

[00:25:38]

When you look at those three hundred and ten movies, comparing actuals to original estimates, the correlation was zero.

[00:25:46]

Oh, wow. So you and I picking the industry average every time would have done just as well as the experts, who were convinced, I'm telling you, they were convinced, that they were considering all these fantastic interactions of hundreds of variables in their heads, and these subjective artistic things. I came up with a really crappy model, probably the worst regression model I've made, that had a correlation of zero point three. So that's the difference: zero point three versus zero.

[00:26:16]

And of course, that's worth millions of dollars a year. By the way, for the individual who says you can't put probabilities on things, I know I can set up bar betting games that I'll play with him over and over again until he runs out of money. I'm happy to do that.

[00:26:34]

Introduce me to him. I will, because you can do that until they run out of money or admit that they fundamentally misunderstood something. There is, though, a fundamental issue with the word probability, because even statisticians don't agree on it, as you know. But most people, for practical decision making, need to treat probability as a state of the observer. It's not an external thing that you're measuring; it's not a state of nature. It's your state.

[00:27:05]

Yeah, you had this great quote in the book, I don't have it on hand, it was something like: if the topic is your own uncertainty, then you are the world expert. Yeah. And it's actually like, when you're trying to put a probability on something, the thing you're measuring is your own uncertainty.

[00:27:22]

That's right. And, you know, actually, when you take people through calibration training, I don't hear those objections anymore. They see themselves putting probabilities on things and then going back and seeing how often they're right. And after the training, they see that when they say they're 90 percent confident, they actually have a 90 percent chance of being right.
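[Editor's note: checking calibration the way described here is mostly bookkeeping. A minimal sketch, with a made-up forecast log: group past forecasts by the stated confidence and compare it to how often those predictions actually came true.]

```python
from collections import defaultdict

# Made-up forecast log: (stated probability, whether the event actually happened).
forecasts = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.7, True), (0.7, False), (0.7, True), (0.7, True), (0.7, False),
    (0.5, True), (0.5, False), (0.5, False), (0.5, True),
]

buckets = defaultdict(list)
for stated_prob, happened in forecasts:
    buckets[stated_prob].append(happened)

# A well-calibrated forecaster's hit rate in each bucket tracks the stated probability.
for stated_prob in sorted(buckets, reverse=True):
    outcomes = buckets[stated_prob]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Said {stated_prob:.0%}: right {hit_rate:.0%} of the time ({len(outcomes)} forecasts)")
```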

[00:27:41]

Yeah, interesting. Your point about one of the causes of resistance to the idea of quantifying uncertainty or making estimates being that people think the uncertainty is in the world and not in their own perception, it reminds me of something called the mind projection fallacy, which I think was a term coined by a physicist named Jaynes. It's a phenomenon where someone will say, like, broccoli is gross, instead of, I dislike broccoli. And I think, like...

[00:28:16]

To some extent, with broccoli or other things that people understand are subjective, that's just a figure of speech, "broccoli is gross." But often it's not. Like, they might say that painting is beautiful, or that person is beautiful, and they might actually stand by the claim that, no, the beauty is a property of the painting. And my reaction was, oh, that's what's happening with uncertainty.

[00:28:41]

I think there is that misunderstanding, that miscommunication about the concept. Edwin T. Jaynes, who you're referring to, was a quantum physicist who was also a pretty devout advocate of Bayesian approaches to things. And of course, there are a lot of physicists who really take a strong Bayesian approach. There's actually a whole group of them now who say Bayesian approaches to understanding probability are actually fundamental to physics itself. It's called QBism.

[00:29:14]

If you look it up, QBism, I've never heard of that. It's spelled with a capital Q, QB-ism, I think, isn't it? Right. So not the painting style. Well, I think that's true.

[00:29:27]

I mean, if you believe that probability is the way that Fisher described it, well, then your classmate was probably right, because the way that Ronald Fisher described it, it is a mathematical abstraction with no possible application in the real world. The way he described probability, it's a purely random, perfectly repeatable process over an infinite number of trials. It's really an idealized frequency, is what it is.

[00:29:58]

But then I guess... I just want to have a word that people, like that student of mine with the sandwich, can just use to mean the thing that describes how I would behave in those situations under uncertainty. And this was a student of mine, by the way, not a classmate. Oh, a student. OK, all right. Well, I guess I kind of go further, because when I'm training people, I say, you possess some profound misconceptions. Wow, you're much more blunt than me.

[00:30:31]

I just kept saying, oh, that's an interesting take. Let me pose another thought experiment. Oh, yeah.

[00:30:35]

I just say, no, you have to imagine that you have some fundamental misconceptions. We should all be willing to accept that about things. Right. So it's not really unreasonable.

[00:30:46]

Yes. Yeah, yeah.

[00:30:48]

It's not too much to ask of someone. And as it should be, one of the epiphanies that students come across is that they had profound misconceptions going into college. Right. So that person has profound misconceptions, and the best way to prove it is to set up these bar betting games and just play them over and over. There are a number of tests you can put together where somebody insists that you can't put a probability on something. Yeah, there are scoring methods, called proper scoring methods, for evaluating how well people put probabilities on things. I describe a couple of them in my second book.

[00:31:36]

So one of the proper scoring methods, I'll just describe it mathematically: you take the difference between the probability that someone put on something and the truth value of it, one if it happened, zero if it didn't. You take the difference and square it, and you add those up over time, and you want to try to keep that score low for a given number of forecasts that someone's making. And the fact is that some people will perform much better than others at that.
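[Editor's note: the scoring rule described here is essentially the Brier score, one standard proper scoring rule. A minimal sketch, with invented forecasts, of the calculation being walked through:]

```python
def brier_score(forecasts):
    """Mean squared difference between stated probabilities and outcomes (1 or 0).
    Lower is better: 0 is perfect, 0.25 is what always saying 50% earns."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Invented example: two forecasters predicting the same five events.
events = [1, 0, 1, 1, 0]               # what actually happened
confident_and_right = [(0.9, 1), (0.1, 0), (0.8, 1), (0.9, 1), (0.2, 0)]
hedging_at_fifty = [(0.5, e) for e in events]

print(f"Well-calibrated forecaster: {brier_score(confident_and_right):.3f}")
print(f"Always says 50-50:          {brier_score(hedging_at_fifty):.3f}")
```

The point of a proper scoring rule is that, over many forecasts, it rewards people who put better probabilities on things, which is why consistent differences in these scores are hard to square with the claim that probabilities can't be assigned at all.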

[00:32:00]

So clearly, if you can't put a probability on something at all, what's his explanation for the fact that some people are repeatedly better at putting odds on things? Why would that be? That wouldn't make any sense. Yeah. And so, in fact, you can make betting games out of that, where the people who are better at putting odds on things actually make more money. And this person would apparently be indifferent between someone putting a 10 percent probability on something or an 80 percent probability.

[00:32:35]

They would apparently have no preference at all. And you could set up indifference bets along those lines and actually demonstrate it. At that point it's either, hey, I'm losing money, if I keep playing this game over and over I must misunderstand something, or, I'm going to keep playing because I'm sure I'm right. Right.

[00:32:50]

So either I would change his mind or I would get a lot of money. And either way, I win.

[00:32:54]

Yeah, because at least you don't want that person to be in charge of a lot of resources.

[00:32:58]

Right. To be very pragmatic about it, I would rather have someone who understands these concepts allocating resources. So, yeah, that's true. And people hold onto this like you're talking about their religion or something. I mean, telling them, no, that's not what probability means, you've had it wrong this whole time? No, sorry.

[00:33:19]

So just to round out our list of why people are resistant to the idea of quantifying uncertainty: just now we talked about people sort of having a misconception of what probability means, and thinking it's an objective property of the world that we can't ever know.

[00:33:39]

Sorry, say that again? I missed it.

[00:33:41]

Yeah, no, you're correct. But that is the Bayesian view. Ronald Fisher, the frequentist, actually said it is an objective feature of the universe. Oh, I see.

[00:33:53]

So in that case, if we are charitable and assume that these people are sort of stalwart frequentists and they're resisting my use of the word probability, then the resistance is them being unwilling to commit, yeah, in terms of what odds they'd accept.

[00:34:10]

Yeah, that's right. You can explain, well, you might be closer to being right if you were a frequentist, but you're definitely wrong if you're a Bayesian. But actually there are cases where you're wrong even if you're a frequentist, so you're wrong either way in that case. So, I mean, if a person said of a coin flip, there's no way to put a probability on it, they misunderstand even the frequentist definition of it.

[00:34:32]

Yeah. And even for some one-off event where there's not a meaningful frequency, like, you know, the probability that Russia invades Poland or whatever, even then, they should be willing to use the concept of: over the long run, out of the times that I put a probability of ninety percent on things, do ninety percent of those things actually come true? Right.

[00:34:58]

Yeah, exactly. You can set up a game where you say, look, would you rather spin a dial that gives you a 50 percent chance of winning a thousand dollars and a 50 percent chance you win nothing, or win a thousand dollars if your prediction turns out to be right? If they treat those as equivalent, and if they really believe in this position, they would consistently have no preference between the two.

[00:35:31]

Regardless of where you set the payoff on the dial. You can make the dial an 80 percent chance of paying off, or a 20 percent chance; it wouldn't matter, they would consistently be indifferent. And as soon as they start making preference choices, you say, well, apparently you believe that the probability is less than X and more than this other thing. So you kind of get them in a trap there. I'll file that one away for future use.
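[Editor's note: this "spin the dial" comparison can be turned into a simple elicitation procedure: keep adjusting the dial's win probability until the person is indifferent between the dial and betting on their prediction, and the crossover point is the probability they effectively assign. A minimal sketch, my own, with a simulated respondent standing in for the real person.]

```python
def elicit_probability(prefers_dial, tolerance=0.01):
    """Binary-search the dial's win chance until the respondent is indifferent.

    `prefers_dial(p)` should return True if the respondent would rather spin a
    dial with win probability p than bet on their own prediction coming true.
    """
    low, high = 0.0, 1.0
    while high - low > tolerance:
        mid = (low + high) / 2
        if prefers_dial(mid):
            high = mid   # dial at `mid` is already attractive: their belief is below mid
        else:
            low = mid    # they'd still rather bet on the prediction: belief is above mid
    return (low + high) / 2

# Simulated respondent who, deep down, acts as if the prediction is 70% likely.
hidden_belief = 0.70

def respondent(dial_p):
    return dial_p > hidden_belief

print(f"Implied probability: {elicit_probability(respondent):.2f}")  # ~0.70
```

The preference choices alone pin down the probability, which is the "trap" being described: behaving consistently in the betting game amounts to assigning a probability, whatever the person says about probabilities being unknowable.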

[00:35:55]

I'm sure I'm going to use it. OK, so...

[00:35:58]

Well, anyway, you were saying you were going through our list. Yeah.

[00:36:00]

Yeah. So we talked about the confusion over the notion of probability itself. We talked about people kind of worrying they're going to be held accountable, and so not wanting to make any estimate because it's not perfect. We talked about people compensating for their own inability to form estimates by claiming that it's not possible to form estimates. And one that we didn't quite talk about, but that seems important to me in at least some cases, is, well, maybe tell me what you think about this.

[00:36:35]

There are some cases where we are implicitly putting a value on something that we want to be able to say is immeasurable or invaluable, like the value of human life. So we might say there's no limit to the value of human life, it's sort of immeasurable or infinite. But in fact, from our behavior, just like with the betting example or the eating-the-sandwich example, we don't actually believe that. Because if we did, we would, you know, set the speed limit to 15 miles per hour or something that couldn't possibly kill people, or would be very unlikely to kill people.

[00:37:07]

And we don't do that because there's a tradeoff there. It, you know, makes our lives slower and less efficient and gives us less autonomy. Yeah. And we weigh all those things against, you know, the risk of death to some degree. And so we set the speed limit somewhere in between no speed limit and 15 miles per hour; we put it at like 50 or 60 or something like that. And that's like an implicit sign of how we value human life.

[00:37:30]

That's a measurement, but it's implicit and we don't have to talk about it. Whereas once you start asking people to put a number on human life, suddenly we're violating this sacred taboo.

[00:37:40]

Yeah, you know, actually, there are two problems with that position, I suppose. One is the belief that if the answer were infinity, that's not a measurement. No, actually, that's a possible answer to a measurement.

[00:37:52]

Right. It would just imply some weird things.

[00:37:54]

Sure. But you are correct, there actually is a whole school of thought around this. It's called the VSL, or value of a statistical life. It was at the Harvard Center for Risk Management, I believe; I talked about it in one of the books. It's a value that's used by several government agencies, any government agency that has responsibility for human health and safety. I've used it in models that included things like human health and safety, among other economic consequences, et cetera.

[00:38:22]

When you look at how people actually spend their own time and money to just slightly reduce their chance of death each year... yeah, like right now there are medical tests you could choose to take that might have some remote chance of detecting a condition that, if you intervened now, would save your life. Right. But you choose not to do it because, like me, you know, you don't think it's worth your time and money. Right. Or you could have spent more money on a safer car, or you could put up a fourth smoke detector in your home.

[00:38:55]

Sure. Or you could just drive less, or drive much slower: start earlier in the morning to get to work and drive slower. Right. Or take a big pay cut so you can commute less. So these are all things that people are only willing to do so much of, even to reduce their own risk of death. And the VSL, the value of a statistical life, a series of surveys shows that most people behave as if they value their lives

[00:39:25]

at somewhere around 11 million dollars or so. We usually put a range of two million to 20 million on it, and of course it varies from person to person, but averaged across many people, it looks like it's about 11 million dollars.
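[Editor's note: the arithmetic behind a VSL figure is simple. A minimal worked sketch with my own illustrative numbers, not from the surveys cited here: divide what someone will pay for a small reduction in their annual risk of death by the size of that reduction.]

```python
# Illustrative only: if someone will pay about $1,100 for a safety measure that
# reduces their chance of dying this year by 1 in 10,000, the implied value of
# a statistical life is the payment divided by the risk reduction.
willingness_to_pay = 1_100          # dollars (made-up figure)
risk_reduction = 1 / 10_000         # absolute reduction in annual probability of death

implied_vsl = willingness_to_pay / risk_reduction
print(f"Implied value of a statistical life: ${implied_vsl:,.0f}")  # $11,000,000
```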

[00:39:38]

It feels vulgar to put a number on it. Yeah, it does. It feels vulgar until you realize that people have to make practical decisions about the allocation of limited resources. Because we could all save more lives right now by doubling our taxes; we could pay twice as much in taxes and fund more basic research on fighting disease, et cetera. Right. And people are only willing to do so much of that. So they've already behaved in a way that puts a limit on the value of human life.

[00:40:13]

They do not behave in a way that indicates that they believe life is priceless; they've priced it, right? As soon as someone says life is priceless, they immediately become hypocritical by virtue of their daily activities.

[00:40:29]

Yeah, I suspect our ideal, if we could choose and not have to acknowledge that we were choosing, would be to act in one way and espouse values that contradict that. Yeah, because that's what actually serves our goals.

[00:40:44]

I think the problem is that somehow people have this negative association with just quantifying things to begin with.

[00:40:50]

But that's a whole other category that we actually didn't talk about.

[00:40:55]

Right. I mean, think about it: this friend of yours, who it turned out liked you all along, you know, he just wasn't a smiler.

[00:41:05]

Yes. Suppose he wrote a poem about how much he cared for you. Right.

[00:41:11]

It would be really, really surreal to have him deliver that heartfelt poem with a complete deadpan, frowning face.

[00:41:19]

But if somebody was seeking your affection and they wrote this poem about you, you wouldn't say, you can't reduce my life to words. How dare you reduce me to words, to the English language?

[00:41:35]

Well, that seems odd, but we say things like that when we talk about reducing people to numbers. Right. We never say we've reduced someone to language or words, but we say we've reduced someone to a statistic. It's a descriptive thing.

[00:41:54]

I guess if we felt as passionately about numbers as we feel about words, then we'd be fine with reducing someone to numbers. I think it's not quite a fair comparison, because words have so much more emotional valence for us. So it feels less vulgar or trivializing to, quote, reduce someone to words.

[00:42:14]

Yeah, I mean, I think that is an odd difference. You know, in a way, we abstract our environment all the time. We reduce things to words. We reduce profound experiences to words. We reduce them to pictures. We reduce them to our emotions; our emotions are abstractions. We reduce complex situations to much more primitive emotions all the time. So, you know, quantifying things is a really interesting human...

[00:42:48]

I don't know if we can call it a human invention, but it's a method of looking at the world that seems to be rare among species. Right. Language may not be that rare among species; there are some communication methods. But math does seem to be really rare among species. Maybe we should think of this as an intrinsically human thing. This is a fundamentally human thing that we can do. That's a nice reframing.

[00:43:14]

This is what makes us... Yeah, yeah.

[00:43:18]

This is one of the things that makes us human. So it's not reducing someone to a number; if anything, you're elevating someone to a number.

[00:43:33]

Right.

[00:43:33]

So some poets are better than other poets, and likewise some quantitative models are better than others. So, yeah. I think it's partly a defense mechanism, because maybe people are insecure about their abilities to do some of these things. And I take a tongue-in-cheek, cynical approach, I suppose, sometimes, to teaching people quantitative methods. I'll say, well, actually, it's kind of fortunate for some of us that so many of you have these profound misconceptions, because my clients are making lots of money, because they're outperforming others who have these misconceptions.

[00:44:15]

So I'm kind of glad that a lot of you can't do this. On the other hand, to be serious for a moment, I know that the choices they make, the public policies they support, the products they consume, and the actions they take have external consequences. They do affect me, actually. So, to be serious about it, it does matter to all of us that other people get this stuff straight.

[00:44:48]

The fact is, you know, this is culturally different, too. If you look at the John Allen Paulos book, Innumeracy, which has been out for, it's getting close to 30 years now, he talked about how this is almost a bit unique to certain Western cultures, and especially the United States. You don't hear these objections to things being quantified in, say, India or China. It's less common there. It's perceived differently. It's perceived as a natural human expression.

[00:45:19]

Right. And here somebody will say, well, I'm more of a people person, I'm not a numbers person, as if they were mutually exclusive. Right. But in India, and this is John Allen Paulos's book saying this, in India that might be perceived more as saying, I'm a people person, not a literate person.

[00:45:38]

Right. And people might be likely to react to that statement similarly to the way they would react to a statement about someone, you know, saying they're not literate.

[00:45:49]

Yeah, "I'm not a numbers person" becomes "I can't read, I'm a people person." That sounds incongruous.

[00:46:00]

Not something you would brag about at a cocktail party.

[00:46:02]

Yeah, exactly. There are these misconceptions, and apparently there is a cultural aspect to it. Hopefully we can overcome this, because, as you know, our STEM performance compared to other developed nations is not good. We have to do better at math. So, was there anything else, any other questions you've got?

[00:46:26]

Yeah, I'll let you go. But before you do, I just wanted to invite you to give the Rationally Speaking pick of the episode. This is a book or a paper or a blog or anything that has influenced your thinking in some way. What would your pick be?

[00:46:41]

Well, you know, actually, the book I mentioned earlier, Innumeracy by John Allen Paulos, I think the first edition was early nineties, late eighties or something, that's really important. It talks about mathematical illiteracy in Western cultures, in America in particular, and how that hurts us. But also another one: the book where I first heard about calibration, calibrated probability assessment, was called Decision Traps, by Russo and Schoemaker. Oh, nice. And that was also about 30 years ago or something.

[00:47:19]

The first edition. Has it stood the test of time? Well, yeah.

[00:47:23]

I mean, I think it first introduced me to this concept of calibration, before I came across Daniel Kahneman's work. It was shortly after graduate school for me. I was still in management consulting at Coopers & Lybrand, and I was coming across these books about this, because my experience in management consulting at Coopers & Lybrand was really some of my early inspiration for my current work. I would work with clients, and I was the guy doing the more quantitative models, only because I had a little bit more stats and quant background than a couple of my peers.

[00:47:57]

That just barely tipped the needle in my favor, so I got that work. But I would run into clients once in a while who said that something was immeasurable. And at first I would just take their word for it. I mean, I didn't know, I was brand new to this stuff. But later on, I would hear people say that in regard to things I knew I had just measured at another client. And so I said, well, they can't always be right.

[00:48:22]

So I started doubting whether it was ever right. And then I started keeping notes, and that's why I wrote my first book ten years ago. I was really writing it for a few years before that, and keeping notes on it for many years prior to that. But it was really based on this series of interactions I've had with people about why they felt certain things were immeasurable, and even debates I've seen among other people at the management level about why things were immeasurable. And I would see these fundamental philosophical misconceptions being invoked, you know, that something is immeasurable because it would be offensive if it were measurable.

[00:49:00]

Right. You know, things like that. There's a whole series of bizarre arguments like that. My staff and I have joked that we should make an app that lists all of the standard objections and arguments and has standardized refutations of them, because they're almost scripted now. We feel like we've got this down: somebody raises a particular weird objection we've heard before, and, well, that's number 72, that one. So anyway, both those books actually kind of helped set me off in this direction while I was reading them at Coopers & Lybrand. Nice.

[00:49:41]

Well, we'll link to both Innumeracy and Decision Traps, as well as to your own book, How to Measure Anything, which I read years ago. And just as a last word on the book and this whole body of work: I think one of the things that I really like about it is that it presents this sort of constructive side of rationality and skepticism and critical thinking. Where, on the one hand, I think people are really used to the idea that rationalists or skeptics or scientists or self-professed critical thinkers keep telling them, you don't know as much as you think you know.

[00:50:21]

You're overconfident, you have all these unjustified beliefs, et cetera. And that is true; there's a lot of that. But then there's the flip side that doesn't quite get as much play, that's a little more uplifting, which is that you also know more than you think you know. In the sense that, even though we have this instinct to say, well, I have no idea how to measure that, or we can't possibly estimate that, that's immeasurable, et cetera,

[00:50:51]

actually, no, you can reduce your uncertainty significantly just using some basic tools of introspection, or thinking tools, or thought experiments. And that's kind of a cool counterbalancing message.

[00:51:09]

Sure. Yeah. No, I think there are ways to demonstrate this practically. I think in a way people come to these conclusions because they're not really dealing with real decisions where those values are made explicit. They keep them as abstractions, and about an abstraction you can have lots of beliefs. Totally. So as long as something like probability seems like such an abstraction to people and it's not actually informing repeated bets... once it does, that's when it starts to get real.

[00:51:39]

Yeah. I think that's a really good way to sum it up. And it's weird how, I'm doing a terrible job of wrapping up the episode here, but it's weird how decisions that are actually real decisions can feel abstract. Like, you can end up thinking about them in abstract mode and end up concluding sort of false or absurd things that you wouldn't conclude if you were really thinking about the decision in concrete terms.

[00:52:09]

That's right. Ask yourself, how would I actually bet on this, or how would I actually behave if the chips were down?

[00:52:13]

You know, when push comes to shove. Exactly. As long as it's at arm's length, you can have lots of weird beliefs, I suppose, right? Yeah. Well, you know, thanks for your time. Thank you.

[00:52:26]

It's been great having you on the show. Thanks for having me. And thanks for ending the episode, since I was failing to do that. But there's no more, right?

[00:52:32]

OK, well, we got it. That's always a good thing. So, um, all right. Well, thank you. Thanks a lot for your time.

[00:52:39]

This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.