
Today's episode of Rationally Speaking is sponsored by GiveWell, a nonprofit dedicated to finding outstanding charities and publishing their full analysis to help donors decide where to give. They do rigorous research to quantify how much good a given charity does: how many lives it saves, or how much it reduces poverty, per dollar donated. You can read all about their research, or just check out their short list of top recommended evidence-based charities to maximize the amount of good that your donations can do.


It's free and available to everyone online. Check them out at givewell.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef.


And with me is today's guest, Amanda Askell. Amanda is in the final year of her philosophy Ph.D. at NYU, where her research focuses on all the ways in which incorporating infinities into our decision theory and our moral theory causes problems. Amanda, welcome to the show. Thank you for having me. So I've been meaning to invite you on the show for a while now, but the immediate impetus was a recent episode with Will MacAskill, where we were talking about normative uncertainty, among other things.


And we ventured briefly into the territory of Pascal's Wager, which very briefly is the philosophical argument put forth by Blaise Pascal in which he said people should really try to believe in God, because if you're wrong and there is no God, then, oh, well, that's fine. But if there is a God, then believing in him is going to be much better for you than not believing in him. And so the sort of expected utility of belief is much higher.
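The expected utility arithmetic being gestured at here can be made concrete in a few lines. All the numbers below are made-up placeholders — Pascal's argument only needs the observation that any non-zero credence multiplied by an infinite payoff is still infinite, so the infinite term dominates everything finite:

```python
import math

def expected_utility(outcomes):
    """Sum of probability-weighted payoffs over (probability, payoff) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical credences and finite costs, purely for illustration:
# even a tiny credence in an infinite reward makes belief's expected
# utility infinite, swamping any finite payoff on the other side.
believe = expected_utility([(0.01, math.inf), (0.99, -10)])
disbelieve = expected_utility([(0.01, -1000), (0.99, 0)])

print(believe)      # inf
print(disbelieve)   # -10.0
```

Shrinking the 0.01 credence to any positive number, however small, leaves `believe` infinite — which is exactly why the wager is hard to escape by pleading low probability.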


And this is one of those arguments where I find there's this non-monotonic relationship between how much philosophical exposure people have and how seriously they take the argument, in that a lot of people without a ton of education are like, yeah, that makes sense, you should believe, because higher payoff. And then a lot of people who are well educated are very dismissive of Pascal's Wager. They're like, oh, that's ridiculous, and they have one or two objections to it.


And then philosophers are like, well, actually, it's harder to dismiss than you might think. I like things with that shape of people's belief in them. Yeah. And I think I personally went through that with Pascal's Wager, where when I initially heard it, I was just like, this is absurd, of course that's wrong, and gave a couple of standard objections to it. And then I think I did the appropriate thing, which is to test those objections and see whether they hold up.


Yeah. So I think it's important that if you're going to raise objections to an argument like that, you want to ask, well, what's the best defense that I can make against these objections? And when I started to do that, I was like, actually, I'm really not convinced by any of these objections. At the very least, we should be taking this argument quite seriously. Interesting. Yeah. In fact, to finish my thought from earlier: Will's pick, the recommendation he gave at the end of the episode, was a post that you wrote about the most common objections to Pascal's Wager and, in a hundred words or less, why they fail or how they fail, which is great.


So maybe I'll just tell you what I think. My views on Pascal's Wager have become a little more nuanced, or uncertain, the more I've thought about it. But I can tell you what I always historically thought was the knock-down objection to Pascal's Wager, which is: well, sure, we're currently thinking about this possibility of a God who will give us infinite utility in heaven if we believe in him, and maybe give us infinite disutility in hell if we disbelieve in him.


And if we're just thinking about that God, then Pascal's Wager makes sense. But what if there are other gods? What if there's another God who will put you in an even better heaven, or an even worse hell, if you believe in him and disbelieve in the first God? And you can posit any number of possible gods with different preferences about how you believe. And so, if you're going to start positing gods like that, it's not at all clear what you should end up believing or disbelieving.


So it's interesting. Yes, this is the kind of many gods objection to Pascal's Wager. And one question I usually have for people who raise this is the following. Let's just assume, for the sake of argument, that there's only one kind of heaven — things get a bit more complicated if we think there are lots of different kinds of heaven — and that's our infinite outcome. And then I say to you, well, would you prefer a greater chance of getting that outcome or a lower chance of getting that outcome? Greater.


And so then Pascal's Wager starts to look a lot like a kind of standard decision problem. So suppose you said to me, I have this great cup of coffee for you, and the way that you can get this cup of coffee is to give me two dollars. And then I might be like, oh, but there are other ways that I can get that coffee: I can steal it from you, I can do these other things. And standard decision theory says, well, if this is your desired outcome, you should do the thing that's most likely to lead to this desired outcome. And then the question is, why isn't the same true with Pascal's Wager? So if you say to me, well, there are lots of different gods that you could believe in, then I say to you, OK, so is there one that you think is more likely to lead to this outcome that you want?


And if so, it seems kind of sensible to think that you should do whatever pleases that God. And if that's the case, then we've already bought Pascal's Wager at this stage; we're just debating about which action to undertake. And that's my worry with the many gods objection: it comes after you've already kind of bought the argument.


I think implicit in my version, at least, of the many gods argument was that we don't have any way of assigning a higher probability to one God than to another hypothetical God. Like, there are some gods that happen to have been proposed by humanity throughout its history, but we could certainly imagine other gods, and we can't rule out the possibility that they exist, for some definition of God. At least, it's not like there's one God that I'm more confident exists, such that we should just be optimizing for heaven given that belief.


Yes, I think it's interesting, because then the thought is something like, well, I just have a kind of equitable distribution over all of these gods. And if you're quite responsive to evidence, and you think that we can have evidence in favor of one God over another — say, the testimony of someone you really trust — it's really easy to break that kind of equiprobability. Suppose that you just read a really good article by a scientist that you think is very credible, and they say, oh, I have evidence that this God exists.


You may not find that in any way compelling, and it may make almost no difference to you, but even a tiny difference is doing a lot of work.


Exactly. It's just going to break the symmetry, this equiprobability. I think that actually people don't assign equal probability to different gods, because some of them just have more plausible properties than others. It's just that when we get to really small probabilities, they almost seem the same. Yeah, they start to look very similar. OK, let me actually revise my statement.


I think the thing that made it all feel like a wash was not actually that I think all possible gods have the same probability, but rather that you can always construct another hypothetical God such that, even if it has a really low probability, it has an even higher payoff for believing in him. Yes. Or it. Yes. So this is where things get kind of complicated, because this is the kind of different-heavens point. Oh, right.


And it is interesting. So the question might be: suppose that the Judeo-Christian God says, hey — let's assume that we add this to the Bible or something — heaven is a series of one-util days, just to be clear that this is what heaven is like. And then another kind of religion comes along, and they say, hey, we know that you think we have a quarter of the chance of being true compared to the Judeo-Christian religion.


But in our heaven, it's five-util days that you get for an infinite period of time. I'm seeing a competition for market share. Yeah. And so then it does become more complicated, because the natural thing to say — and I think it can be tricky to get this out formally — is that you care about both the probability of the reward and also how good the infinite reward is. And again, we're kind of already at the point of accepting the wager.
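One way to make that trade-off concrete — purely as an illustration, since nothing in the conversation fixes how infinite reward streams should be ranked, and the credences here are invented — is to compare the expected utility per day of each offer. The totals are both infinite, so a per-day rate is one candidate tie-breaker:

```python
# Hypothetical credences matching the example: the rival religion is
# a quarter as likely as the Judeo-Christian one, but promises five
# utils per heavenly day instead of one. Both totals are infinite,
# so compare the probability-weighted rate per day instead.
judeo_christian = {"credence": 0.20, "utils_per_day": 1}
rival = {"credence": 0.05, "utils_per_day": 5}

def expected_rate(religion):
    """Credence-weighted utils per day of heaven."""
    return religion["credence"] * religion["utils_per_day"]

print(expected_rate(judeo_christian))  # 0.2
print(expected_rate(rival))            # 0.25 — lower credence, better infinity
```

On this (assumed) rule, the lower-credence religion wins, which is exactly the "already accepted the wager, now just picking the action" situation described above.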


It's easy to say this is a reductio, but it's actually not a reductio; it's more like, oh, that's a difficult decision problem. So by reductio here, you mean it's easy to say, well, that invalidates the whole concept of Pascal's Wager. Yes — we see this kind of difficulty and we're like, oh, so therefore Pascal's Wager doesn't work. But it's unclear how we could make that step. You know, we can say, oh, I have all of these credences across different gods.


And let's say that, in the case I just gave, you prefer the thing that you have a lower credence is true but that offers you a higher heaven. We can imagine just constructing a decision theory that would yield that result. If that's the case, then the problem we're facing is just one of deciding what the best thing to do is. And we haven't yet got to that point. So I think what people are foreseeing is that counterintuitive things are going to happen if we go down this road, and when counterintuitive things happen, I'm going to just reject whatever it is that generated Pascal's Wager. So even if it's a fairly plausible decision rule that says prefer higher probabilities of infinite outcomes to lower probabilities, and prefer better infinities to less good infinities, I'm going to ultimately reject one of those plausible-looking premises if, when I look at the outcome this has for me decision-wise, it makes me chase infinities for my whole life. That's the kind of reductio that people are moving towards there.


But in and of itself, it's not an objection to the wager, if that makes sense. Let me see if I understand what you're saying. Are you saying that we run into the same problem for all sorts of things that don't involve gods, or infinite utilities, or small probabilities? Well, not the small probabilities — I think those are still important. But for example, you could say, well, I want to go to a nice restaurant, and the most plausible way to do that seems to be, I don't know, think of a restaurant I like and go there.


But it's always possible that by doing that, there's a tiny probability that I'll run into an accident. Or maybe there's some tiny probability that if I sit here and wish to be at a nice restaurant, there's a genie who just responds to wishes made about nice restaurants on this particular day or something. And I can construct arbitrarily low-probability but higher-value scenarios. I know this is kind of a mangled example, because the restaurant itself isn't high enough utility to justify not doing the plausible thing.


But I could imagine arbitrarily bad, low-probability outcomes and arbitrarily good, low-probability outcomes for anything, just as you have with gods. Yeah. So I think the thought is that standard decision theory says nice things, you know: I should help my neighbors, or I should help people in other countries; I should do what I can to create large but finite amounts of utility in the world. And we may be concerned that if we start to take infinities really seriously, then I say, well, how do I get infinite utility in this world?


And suppose that I think things like, well, maybe if I could invent a time machine, or if I could invent a machine that would let me travel much faster than the speed of light — maybe I could have a taxi service where I move people around an infinite universe — you know, we can create kind of outlandish scenarios. And our worry is that the kind of actions that we would take if we became these chasers of infinity would be just intuitively completely wrong.


And they wouldn't be things like helping people. They would be taking these huge risks for some sci-fi kind of scenario that involves helping infinitely many people in the universe. And at that point, we might feel warranted in saying, hey, I don't quite know what went wrong along the way, but I know that something went wrong, because that shouldn't be the conclusion. And, you know, I'm actually really sympathetic to that.


And I think that Pascal's Wager, or this kind of Pascalian thinking, is only really going to be plausible if it doesn't lead to those kinds of absurd conclusions. If it ends up being, hey, you should take infinities into account if you happen to be in a scenario where it would change the ordering of what you would do, but that's very rare — I'm less worried about that. So I'm less worried about a theory that says care about infinities, but most of what you should do is kind of what you're already doing, than I am about a theory that says try and invent a time machine.


What kind of theory would say you should care about infinities at all, but that shouldn't change most of what you're doing? Like, don't infinities, if you take them into account, just swamp everything?


They do, they do swamp. And so in some ways it feels like it would be a little bit too lucky — this is maybe a worry that I have. It would be too lucky if the world were such that the actions that happened to create the most finite value — if we're in a finite world and we can only ever create finite value — line up neatly and perfectly with the actions that create infinite value.


So you might think something like this: most religions agree that being a good person is just much more likely to get you into heaven than not. So being a good person is a fairly consistent action across the religious hypotheses that may carry infinite expected value, and luckily, being a good person also has these huge finite benefits to the people around you. So it's like, oh, well, that's fine.


I'm kind of happy with an outcome that just says, by the way, you've got much more reason to be a good person than you thought you did. I'm not super worried about that, because I was already happy with the idea that I should be a good person. But we may worry that it also tells you to do things that you do think are kind of strange. So in the case of religions, maybe, for example, it means that even if you think these religions are just incredibly unlikely to be correct, you should nonetheless kind of devote yourself and put your life into becoming a believer in that religion, or you should raise your children in that religion.


And I think when we get into actions like that, which may even be harmful — you can imagine a case where it would be very psychologically distressing for your children or something — it becomes a lot harder to accept that you should take this finite hit in just some tiny hope of getting the kind of infinite benefits. Yeah.


So I think my true rejection of Pascal's Wager-type arguments is the thing that you were talking about: if you find yourself chasing infinities, as you put it, and doing things that intuitively seem crazy and not the best thing to do at all, just because your decision theory is taking into account infinities, then something's gone wrong somewhere. Yeah, that to me feels like strong evidence that there is a solution to Pascal's Wager.


But it isn't in itself a solution. We still have to figure out what went wrong.


Yeah, and I think this has been the thing that kind of troubles me. When you get an argument where you're like, OK, every one of these premises seemed plausible, but the conclusion just seems very implausible, one possible reaction is to say actually the conclusion isn't as implausible as you thought, because the actions you should undertake are things like being a good person.


Another one is just to say, well, I know that something went wrong, because the negation of that conclusion is more probable than the premises that I put into the argument. And when that's the case, I think it's important to note that you're left in a kind of difficult situation, because it's not enough to just say, well, I know something went wrong somewhere — farewell, argument. The thing to say is: I know something went wrong somewhere, but I now need to identify what it was, because one possibility is that actually I was just wrong about the conclusion being super implausible, or that I find some reason to doubt a premise I thought was correct.


And I should actually take into account really small probabilities of extremely large amounts of value. That's the kind of situation I'm in with Pascal's Wager, and I think with infinite decision theory generally. It's a kind of point of tension for me internally, where I'm like: what has gone wrong? And am I going to have to give up on something really fundamental about what I think decision theory should be about, or ethics should be about?




So one other class of solution that seems plausible to me is just to say that this idea that we should be maximizing our expected utility — maybe that's the problem. There's something wrong with that idea. Yeah. And I find that rejection of expected utility maximization pretty plausible in other cases. Like, I think it's fine to be risk averse for high-stakes one-shot deals. Like, if I'm looking at a decision that affects my whole life.


Yeah. I would much rather have a sure bet of a pretty good life than, you know, a very small chance of an amazing life and a large chance of a bad life. Yeah. So I'm happy to reject expected utility maximization there. Couldn't we also therefore say, well, maybe I just don't care infinitely much about infinite good outcomes? Yeah.


And I think this is in some ways a case that can split, for me at least, the ethical theories from the decision theory, because I don't find it that plausible that people have bounded utility functions, personally — that over the course of my entire life, it's not the case that I can get infinite benefits, and there's just some upper bound on utility. We find that plausible in the case of dollars, because there's only so much I can do with dollars.


So maybe we think that if you give me infinitely many dollars, at some point these things just become useless to me, and so I stop valuing them. But in the case of actual pleasure, I'm like, no: if my life could potentially go on forever and I could keep having great days, that has unbounded value. But some people are going to reject that.


Some people are going to say that in the personal case we can have, or even do have, bounded utility functions. It gets tough.
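A bounded utility function of the sort being debated can be sketched like this. The curve and constants are purely illustrative assumptions, not anything from the conversation: each extra happy day adds less value than the last, and the total can never exceed a fixed ceiling, no matter how many days you pile on.

```python
import math

U_MAX = 100.0  # arbitrary, assumed ceiling on lifetime value

def bounded_utility(happy_days, k=0.001):
    """Utility rises with each happy day but asymptotes to U_MAX."""
    return U_MAX * (1 - math.exp(-k * happy_days))

for n in [100, 1_000, 1_000_000, 10**12]:
    print(n, round(bounded_utility(n), 2))
# Marginal value shrinks: a trillion happy days ends up barely better
# than a million, and no lifetime is ever worth more than U_MAX.
```

This is the shape that blocks the wager in the personal case: multiply any probability by a payoff capped at `U_MAX` and the result stays finite, so infinite heavens stop swamping the calculation.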


Actually, one worry, maybe in response to my own point: when I look at an expanse of infinite happy days and I'm like, well, I don't care about that infinitely much, I worry that I only say that because the end is far away. But if I were close to the end of my finite happy days, then I would continue to care about getting more happy days. Yeah, sort of like how people say, oh, I don't really care about living to 80 versus 90.


Yeah. But when you're 79, you care a lot about living to 80. Yeah. I mean, on a personal note, the thing that often strikes me — and maybe this is where you start to transition into the ethical realm — is that it feels a little bit immoral to wrong your future self. I feel like we really wrong our future selves in ways that seem to me quite unjustified.


So I've thought about this in the context of a completely different topic, which is promise making, where you make promises on behalf of your future self. People sometimes talk about how you can ruin the life of your future self by doing things like failing to invest in a pension. I've thought about this with things like: suppose I promise to dedicate a year of my life to the fire department in my local area when I'm 40, and I make that promise when I'm 18. I'm like, should you be allowed to make that kind of promise? I mean, technically you're of age, but it feels like the reason we should be suspicious of that is that people just don't care as much as they should about their future selves. We kind of think of you as having this control, as if there's nothing unacceptable that you can do to your future self, because it's you.


Right. But I'm not sure that's true, because in that case the 18-year-old really just isn't caring enough about this person who, to them, currently seems like a very distant person. And so I do worry that when we're like, yeah, I'll take this great finite life, we're just trading off our future selves, and in doing so we wrong them. And this becomes, to me, a stronger argument when it comes to other people.


So in a case where we might think that personally we have bounded utility over the course of our life — bounded, just to clarify for listeners and make sure I'm right, would mean: I care a lot about getting, say, a hundred happy days, and as you increase the number of happy days that you're offering me, from a hundred to a thousand to a million to a trillion and so on, the value that I put on those things does go up, but it doesn't go up proportionally.


So the extra value that I place on each additional increase starts to go down, and importantly, it tends towards a kind of upper bound — some finite amount — so that any given lifetime's value can only be finite. So sometimes when we do temporal discounting and we think of our future selves, the idea is that we're just giving less and less value to each day that's further away in time. So if you give me a dollar today, that's great; if you give me a dollar tomorrow, that's slightly less good, and so on and so forth, until if I look at the amount of value I get for all of those dollars, for all of my days, it's always finite. Even if we think this is true of your so-called prudential or self-interested value, to me there's something a bit odd about it with respect to moral value. Because what if other people are experiencing those thousand or a million happy days simultaneously, in parallel, rather than in sequence in one person?
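The claim that discounting keeps total value finite even over infinitely many days is just a geometric series. A quick sketch, with an arbitrary illustrative discount factor:

```python
# With a per-day discount factor d < 1, a dollar's worth of value on
# day t counts for d**t. The infinite sum d**0 + d**1 + d**2 + ...
# converges to 1 / (1 - d), so total discounted value stays finite
# no matter how many days the stream runs.
d = 0.9  # assumed discount factor, for illustration only

partial = sum(d**t for t in range(1000))  # 1000 days already ≈ the limit
closed_form = 1 / (1 - d)

print(partial)      # ≈ 10.0
print(closed_form)  # ≈ 10.0
```

So under exponential discounting, even an infinite sequence of equally good days is worth at most `1 / (1 - d)` of the first day — which is exactly the move the conversation finds odd when applied to other people's days rather than your own future ones.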


I mean, that's possible. It's strange, because in the case where there could be infinitely many people, or where lives go on for infinite periods of time, you'd have to discount across spacetime in that case. And then it's like, well, why would I care? I think it was Parfit who said something like: pain in three days' time — pain in the future — is going to feel just as bad as pain does now.


And so there seems to be something already a bit odd, ethically, about a prudential theory that cares less about pain in the future. But there's something extra odd about us caring about this. If we're just disinterested, and we're just caring about people and what's of value to them, why then would I care less about these future people than these present people? Before we leave Pascal's Wager, I just wanted to ask: what's your current best guess about what we should do in cases of Pascal's Wager?


Like, I assume you haven't chosen to believe in God because of this theory — or that you wouldn't choose to believe in God if you could push a button and make yourself believe in God. I've wondered that before. That's a tough one, because, like most people, I find it very difficult to control my beliefs and make them anything other than responsive to the evidence. But I do also think that at the moment, the reason why I might not press the button — the case would be made very difficult if you said to me, you can never press the button again, you get your one chance.


Yeah. Then I'm like, actually, I break out in a cold sweat, and it would actually be kind of tough. Whereas the reasons against it are just this kind of uncertainty that I mentioned before, where all of this just feels like something might have gone wrong. But the things that people have come up with, like the many gods objection or bounded utility functions, don't quite convince me as the thing that's going wrong.


And so, surely, there is something that has gone wrong.


But the fact that you haven't yet been able to put your finger on it gives you pause. Yeah, at the moment I'm basically just stuck with a kind of impossibility theorem. Right. I know everything that I can't have, and I don't quite know which thing I should get rid of.


At least one of these things must be wrong. And that does mean that, from a kind of visceral point of view — even the original religious wager; the things that you consider in Pascal's Wager don't have to be religious, but I find it kind of psychologically compelling sometimes. You know, I remember first reading about, or understanding, the concept of hell, and I was just like, this is actually extremely psychologically distressing — that this is even a possible state. I agree.


I first heard about it when I was — I grew up in a non-religious family, but I remember playing on the playground with another seven-year-old who told me about hell. And I couldn't sleep for like a week. And I couldn't understand how other people went on with their lives believing that this was a thing — not even if they thought they were going to hell, but just knowing that anyone could go to hell.


How can you just live normally? I was appalled. Yeah.


And yeah, I think it means that I have more sympathy for people who evangelize. Oh, yeah. Because I'm like, if you literally think that other people are going to hell if they don't believe this, then it seems almost immoral not to evangelize to people. Yeah, I agree. And so those considerations do give me pause, and I'm like, well, if a belief — even if I didn't think it was evidentially based, and I could predict that it wouldn't have a huge influence on my life as a whole — it would put me in a very difficult situation.


There's actually one other Pascal's Wager-related thing that I wanted to ask you. The reason it came up in the episode with Will is that he was talking about giving some weight to moral theories that you disagree with but think might be true. Yeah. Like, maybe you put a 10 percent chance on them being true or something, and that should affect your decision making: you should act in ways that wouldn't be catastrophic on the off chance, on the 10 percent chance, that that moral theory was true.


And a number of commenters afterwards said, well, that's basically Pascal's Wager. And the response to that — which I can't remember whether Will gave or not, but the response I would give — is that a 10 percent chance is substantial. You make decisions all the time on a 10 percent chance of something. Like, I take precautions even though I think there's a way less than 10 percent chance that I'm going to get robbed. I still take precautions because it's not completely negligible.


And so I guess I'm wondering — this kind of question comes up when people are talking about working to reduce global catastrophic risk or existential risk. An accusation is: well, this is like Pascal's Wager. You can just say the stakes are so high — if you're talking about risks from technology or natural disasters that could end humanity — that even if the probability is incredibly low, the bet forces you to devote all your resources and energy to it.


And so, like, something's gone wrong there. And other people come back and say, well, it's not that low; I don't think this is a Pascal's Wager situation. And so I'm wondering, do you think it makes sense to have a cutoff, where if a risk is above one percent, according to my best guess, then it's worth taking into account, and if it's below one percent, we should treat it as zero or something?


I think this is what people naturally do, and it strikes me as quite arbitrary. Yeah. And I think we tend to round down to zero in a way that doesn't make sense. So I can give my thoughts on the relation between those two things. On the one hand, it's worth noting that Pascal's Wager really does involve infinities. And the strange thing about infinities is that all you need to get them off the ground, doing all of the work that they do, is a non-zero probability.


And that's super easy to get. Non-zero probabilities don't even have to be consistent with the laws of physics, because you don't have certainty in the laws of physics. So I can start to just invoke magic, and that's enough to get a non-zero probability. In cases where it's just really large, finite amounts of value, I think that we're kind of inclined to round down to zero too quickly.


And my suspicion — and maybe this isn't correct — is that the thing we don't like comes in especially when we're uncertain about our estimates. So sometimes I want to say, well, imagine that there was just a button, and if that button is pressed, everyone who currently exists is going to die instantly. And it's extremely unlikely to be pressed — say the button is only going to be pressed if someone falls on it, and it's in quite a safe place.


And I'm like, oh, I've actually found a way of creating a barrier around the button, to make it even less likely that it can be pressed. I feel good. Yeah, that feels good. And all I've done is take what was already a small probability and make it a little bit smaller. So when the value is high enough and the probabilities are concrete, we're kind of happy with that idea of just multiplying in that case.


So maybe the rule shouldn't just be: if it seems less than one percent, then it's not worth it, we should round down to zero. Maybe the rule should be: if it's less than one percent and uncertain. So it's like, I'm very unsure how to estimate this risk, but if you force me to pick a midpoint of my distribution, it'd be one percent or something. Those are the things that might be worth rounding down.


Whereas if, you know, there was a natural disaster that happened one in a million years or something, but it was a reliable thing, every year there's a one-in-a-million chance, and we could reduce that, then that would be worth doing.


Yeah. And I think, I mean, I've recently started to think about an area that I'm calling the moral value of information. And I don't think we should embrace this as a decision rule, but rather as a debunking explanation for our intuitions in these cases. So when we're really uncertain about what the actual probability is: what would we do in that kind of natural environment, as humans, when we're uncertain about that environment?


Oh, sorry, are you thinking of everyday life, or of the environment where our decision procedures evolved? Because sometimes it can be good to try and imagine that world, when we're worried about whether our procedures are rational. So, you know, sometimes I think about this as: you're really uncertain, you have no idea really what probability to assign to a given outcome.


So say you're in a completely new environment and you just don't know what the predators in your environment are like. You don't know what probability to assign to there being tigers in your local area.


One thing you probably shouldn't do is act either as if there are definitely tigers, or as if there are no tigers. The thing you're going to end up doing is trying to get more information about your local environment. So you're going to want to try and constrain those probabilities, because when you're really uncertain, it's extremely valuable to get information about what your environment is like, rather than just acting on these really non-robust probability estimates.


Like, "Maybe there's a 10 percent chance that there are tigers, so let's act as if there's a 10 percent chance and go hunting as we would if that were the case." Instead, you should try and figure out more, because by sending someone out to find out what your environment is like, you're going to get a lot more value than anything else you can do. And I think that in cases like global catastrophic risks, maybe people are worried about this being kind of a waste of time, in part because they look at these things and say: you can't put a good number on that.


You can't give me a good probability estimate there. And they're inclined to throw up their hands and do nothing, or kind of wait. And that makes sense if what you want to do is wait to get more information. But then you want to say to people: right, but two things, I guess. One is that a lot of the people working on global catastrophic risks are actually just trying to get more information about them.


You know, they agree that information is super valuable here. And the other is that sometimes there's a very large cost to delay. Sometimes you can't delay and wait for more information to come in; you have to act under the assumption that something might be going badly wrong. So I don't know how convincing this is as a kind of debunking explanation for why people are inclined to just say: pretend it's zero.


I think what they're really saying is something like: let's just wait and see. And I suspect you can give an answer to that, which is: the only way to get more information here, and to make these probabilities more concrete, is to actually work in this area. And so that's actually more reason to work in this area than you might think.


I wonder if part of the intuition behind rejecting worries about small but important risks is that it's the kind of thing where you're likely to get scammed, in some sense. Not that people are deliberately deceiving you, but that this is the kind of argument that it's really easy to rope people in with. And so it's not necessarily appropriate to use standard decision procedures; we should have more cautious or more skeptical decision procedures when people are telling us things that are commonly scams.


Yeah. And also cases for which there's no immediate feedback, you know. So if I offer you a small probability of a really high reward... I mean, just take a standard lottery case. I buy a lottery ticket and I lose every time. That's consistent with everyone losing the lottery and no one actually getting a payout.


And so you may worry that in this kind of case, someone can always say to you, if they're pitching a small-probability venture: this venture is almost certainly going to fail, but if it succeeds, we're going to be bigger than Google. And if it fails, that's totally consistent with what they said, even if it fails every time.


And you might think this means you're having to just use something like your intuition about how plausible it is, or you're having to actually analyze their proposal.


So maybe the incentives to exaggerate, or even unconsciously deceive other people, are greater when it's not going to become quickly apparent that you were exaggerating or deceiving them. Yeah, yeah. So if it's consistent with everything you see that you will never get a payoff from me, then maybe you're worried, because this gives me a strong incentive to say: oh yeah, when the payoff comes, it'll be really great.


And this is a little bit like the kind of Pascal's mugging cases, where someone comes along and says to me: I know you think it's really unlikely that I'm going to give you a great payoff, but how unlikely do you think it is? Well, I think it's maybe one in a thousand, or one in ten thousand. And they say: oh, well, I'll offer you ten times that.


So you should, you know, I'll give you ten thousand dollars if you just give me a dollar today. And then you're like, hang on. Right.


I think one thing worth mentioning in relation to that, though, is that this stuff is kind of complicated. So when someone makes you that offer, and this is just the kind of Bostrom-style Pascal's mugging case, they say to me: how likely do you think it is that I will keep a promise to you? And I say, like, one in a thousand.


And then they say: well, if you give me a dollar today, I'll give you 5000 dollars tomorrow. So in expectation you're better off, an expected payoff of five dollars against a one-dollar cost. So you should do it. And they can increase the offer depending on how skeptical I am of their claim. But there's a complex set of factors at work here. As the offer that someone makes me becomes higher, the chance that I think they can actually provide it gets smaller.
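The arithmetic behind that offer can be sketched in a few lines of Python; the numbers are the ones from the conversation, and the function name is just for illustration:

```python
# Expected gain from accepting the mugger's offer: hand over $1 today
# against a promised $5000 tomorrow, at a credence of 1/1000 that the
# promise will be kept.

def expected_gain(stake, promised_payoff, credence_kept):
    """Expected monetary gain from accepting the offer."""
    return credence_kept * promised_payoff - stake

print(expected_gain(stake=1, promised_payoff=5000, credence_kept=1 / 1000))
# -> 4.0  (a $5 expected payoff minus the $1 cost, so naive
#          expected-value reasoning says: accept)
```

The mugger's trick is then to raise `promised_payoff` as fast as your stated credence falls, keeping this number positive.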


You know, if someone is offering to just completely change the universe for me, then I'm like, I just really don't think you can do that. Right. And they also have a larger incentive to deceive. That's true.


But that only really feels like it applies in cases where it's an individual person promising that they have resources they can give you. If the claims are more about the state of the universe and what's going to happen in the universe, then it's less clear to me that large stakes are less probable.


Maybe they are, I don't know.


I think it's important, sometimes, to distinguish between what I call Pascal's muggings and Pascal's trades. In the standard Pascal's mugging case, my credence that they're going to give me the reward is really quite closely tied to the reward they're offering. Yeah. For the reasons I gave: it's not by magic that it's connected to the outcome, it's that they've got a higher incentive to deceive.


And I just don't think they can actually provide it, so my credence drops off really quickly. Yeah. It's a bit like the difference between opening up a book and finding a voucher for a free metal bookmark, and you're like, awesome; and opening up your book and finding a voucher for a million dollars, and you throw it in the trash. Right. But there are lots of cases where you feel like this offer being made to me completely makes sense.
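The mugging/trade distinction can be illustrated with a toy model. The particular credence function here (credence falling off as one over the reward squared) is purely an illustrative assumption, not a claim about anyone's actual credences; the point is just that when credence diminishes more steeply than the reward grows, scaling up the offer shrinks its expected value:

```python
# Toy Pascal's mugging: model credence as k / reward**2 (capped at 1),
# so credence falls off faster than the promised reward grows.

def expected_value(reward, k=1.0):
    credence = min(1.0, k / reward**2)
    return credence * reward  # equals k / reward once the cap stops binding

for reward in [10, 1_000, 1_000_000]:
    print(reward, expected_value(reward))
# Larger promised rewards yield *smaller* expected value:
# -> 10 0.1
# -> 1000 0.001
# -> 1000000 1e-06
```

In a Pascal's trade, by contrast, the credence is fixed by independent evidence rather than by the size of the offer, so the expected value can stay positive.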


So there is a sense in which, if the mugger comes up to me and says: hey, what's your credence that I'm going to keep a promise? If you give me a dollar today, I'm going to give you 5000 tomorrow. I'm like, yeah, it really seems like you just made that number up. And then they're like: no, here's a bunch of evidence that I'm actually a multimillionaire.


Plus, I'm super lost and I really need this dollar right now. And they just start to pile on the evidence, and suddenly it could easily reach the threshold. So even if I think it's quite unlikely, someone could in fact meet it. And the optimistic reading of this case is that there's nothing going wrong. Your intuition was totally correct in the case of being mugged or scammed.


But the thing that was making it a mugging or a scam wasn't just a low probability of some really high-valued outcome. It was that the probability was low and kept getting lower. It was never, in expectation, the thing that you should do; you always had enough credence that you were being scammed that this was never a valuable trade for you. So it wasn't the structure of it. It was just something about the underlying credences that made it rational for you not to accept the mugging, but that may make it rational to accept some cases of small probabilities of high payoffs.


So, OK, is the rule you're proposing that you should take into account the probability that you're being scammed, which in some cases will be related to the amount of payout the person is promising, though not in all cases? And in addition to that, you should be taking into account the evidence about whether this is real. Sorry: the probability doesn't just depend on the payoff structure; it also depends on evidence about how plausible this is.


Yeah, yeah. So it's essentially to say that maybe standard expected utility theory just does fine with these cases. In the case where a stranger comes up to you and offers you this really ludicrously high payoff, or where you get a voucher in your book that says a million dollars is yours, I'm just like: this bookstore owner probably doesn't have a million dollars.


They have no incentive to give this to me, and there's almost certainly some kind of catch here, or it's just lying to get my attention. And so the probability you assign to getting the million dollars might be so low that it's not even worth doing anything other than throwing it in the trash. But that's consistent with standard expected utility theory. So one rule might be to reject expected utility theory and say: hey, just assign these things something like probability zero, or treat them as if they're probability zero.


The other thing to say, and the thing I'm more sympathetic to, is: no, expected utility theory will tell you not to accept the offer of the Pascal's mugger, because your credence that they're going to give you the reward diminishes more steeply than the reward they're offering grows. This doesn't mean that all cases of small-probability, high-value outcomes are ones you should ignore. Just take a case where I say to you: hey, would you like a ticket to a lottery that has a 0.005 percent chance of getting you 20 million dollars?


If you just rounded that down to zero, you'd be indifferent between having this ticket and not. But you're not indifferent. You'd probably rather have the ticket.
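To make the contrast concrete, using the same numbers as the lottery example, rounding the small probability down to zero discards a substantial expected value:

```python
# Lottery ticket: a 0.005 percent chance of $20 million.
p_win = 0.005 / 100        # 0.005 percent, written as a probability
payoff = 20_000_000

print(p_win * payoff)      # -> 1000.0, the ticket's expected value
print(0 * payoff)          # -> 0, what "rounding down to zero" implies
```

So treating the two tickets as interchangeable means ignoring a $1000 difference in expectation.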


OK, so that's kind of support for the rule I came up with a few minutes ago, where I was like: if it's a very low probability of a high outcome, but it's very certain, yes, very well defined, then you should take it into account. Yep. But otherwise not. Yeah.


Yeah. And again, I just think this is our reaction to ambiguity or uncertainty about probabilities. That's interesting. We almost want to ignore them, and I don't see any rational reason to do that. So suppose I said to you: I really don't know what chance this ticket has of getting you a million dollars; it's really low, it could be anywhere between zero and 0.005 percent.


It still seems the dominant strategy for you is to take the ticket. It becomes more complex when we have trade-offs to make, but we seem really averse to treating these cases where we're uncertain about the probabilities by taking the expectation and just acting on it. I think ultimately we can give an explanation for that in terms of the fact that the rational thing to do in those circumstances is usually to seek out more information, but sometimes you just can't.


You know, sometimes the cost of trying to wait and get more information is too high. Like if you can only take the ticket now or never. Maybe there's some kind of trade-off there, and maybe the cost of waiting is actually too high in that case. But I'm inclined to think we can explain why people don't like acting on these things.


But, you know, it's not clear that they're right to. Yeah, right. Yeah, yeah. Before we wrap up, I wanted to invite you to recommend a pick for the episode: a book or blog or journal article or something that influenced your thinking in some way. What would your pick be? So Lauwers and Vallentyne have this paper called "Infinite Utilitarianism: More Is Always Better?" Do they have a thesis they're defending, or is it just a sort of survey of different problems, in that paper?


They're really looking at these principles, these kind of extensions of Pareto. So: if you take Pareto to be pretty clear, and you take certain other principles to be pretty clear in this area, then what sort of rules do you end up endorsing? And I like that sort of train of thought in that paper. So, recommended. Cool. Excellent. Well, Amanda, thank you so much for coming on the show.


It's been a pleasure. Thank you. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.