[00:00:00]

Today's episode of Rationally Speaking is sponsored by GiveWell, a nonprofit dedicated to finding outstanding charities and publishing their full analysis to help donors decide where to give. They do rigorous research to quantify how much good a given charity does: how many lives does it save, or how much does it reduce poverty, per dollar donated. You can read all about their research, or just check out their short list of top recommended evidence-based charities, to maximize the amount of good that your donations can do.

[00:00:25]

It's free and available to everyone online. Check them out at GiveWell.org. I also want to let you all know about this year's Northeast Conference on Science and Skepticism, being held in New York City June 29th through July 2nd. I'll be there taping a live podcast, and there will be lots of other great guests, including the Skeptics' Guide to the Universe, my former co-host Massimo Pigliucci, the Amazing James Randi, and keynote speaker Mike Massimino, former NASA astronaut.

[00:00:51]

Get your tickets at necss.org. That's N-E-C-S-S dot org.

[00:01:16]

Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and with me today is my good friend and today's guest, Will MacAskill. Will is probably most famous worldwide for helping create and popularize the effective altruism movement, which I've talked about in several past episodes of the podcast. He co-founded and is the CEO of the Centre for Effective Altruism, and he is the author of the book Doing Good Better.

[00:01:49]

But today we're not going to be talking about effective altruism, at least not primarily. We're going to be talking about Will's work in philosophy. Oh, I didn't mention in my list of Will's achievements that when he became a tenured professor of philosophy at Oxford, he was the youngest person in the world to be a tenured professor of philosophy. And so we're going to be talking about Will's main research focus in philosophy, which is moral uncertainty.

[00:02:17]

And this is a topic that's on my list, at least, of philosophical questions that I think are meaningful and real and important to actual real-life decision making, that are unresolved, but that I feel like we might potentially be able to make progress on. It's not a long list of topics, but moral uncertainty is on it. So Will, welcome to the show. Thanks so much for having me on. I can't believe I haven't had you on sooner.

[00:02:47]

You've been like a desired guest from very early on. But as I was telling you earlier, I kept having other people on to talk about effective altruism, and then I'd think, oh, well, I can't invite Will now, I have to wait a little while, I just had Toby Ord on, or Peter Singer on. So this episode has been a long time coming. Well, I'm thrilled to be on. It's probably one of the very few podcasts where we can have a conversation about moral uncertainty without me having to explain what the PlayPump is, maybe.

[00:03:15]

So Will, do you want to just explain... well, which do you prefer, moral uncertainty or normative uncertainty? I think let's stick with moral uncertainty. Normative uncertainty would cover a wider range, including decision-theoretic uncertainty and uncertainty about rationality, epistemological uncertainty. But those get even more tricky and contentious as philosophical issues than moral uncertainty, which also has the clearest practical implications. So, regarding moral uncertainty: what is it? Why is this a needed concept? So in order to explain moral uncertainty, it's easiest to start by thinking about empirical uncertainty, or uncertainty about matters of fact.

[00:03:58]

So in ordinary life, when we're deciding what we ought to do, we don't just think, oh, this is what I believe will happen. We tend to think, perhaps not consciously: there are a variety of things that might occur, some are more likely than others, and this is how high-stakes things would be if this or that were the case. And then you take an action in light of all of that uncertainty that you have.

[00:04:23]

If, for example, you are speeding around a blind corner, we would normally think that's the wrong thing to do, that's an immoral action. And the reason we think it's wrong is not that, if you speed around a blind corner, you're probably going to hit someone. Instead, we think it's wrong because there's some chance that you hit someone, and if you do, then it's very bad; it's a very bad action, or a very bad outcome. And decision theorists formalize this using a notion called expected utility theory, where in order to make a decision, what you should do is look at all the possible outcomes.

[00:05:02]

So in this case, that would just be two. One is that you speed around the corner and just get to your destination faster. The second is that you speed around the corner and hit someone. Well, there are two for the purposes of a simple illustration. OK, yeah, of course, there are loads of other things that could happen. You could jump out of the car, you could accidentally hit the next Hitler. Yeah, good.

[00:05:24]

Yeah, that's fine. Yeah.

[00:05:25]

So, lots of possibilities. But let's keep the simple example. There are two possible outcomes: speed and get to the destination faster, or hit someone, let's say kill them. And then there are values assigned to those outcomes as well. So if you speed and get to your destination faster, that's a mildly good thing, you've saved a little bit of time travelling. And let's, just for the purpose of the example, give that a number; say that's worth plus one or something.

[00:05:55]

I mean, it's just a little bit good. But then if you speed round the corner and hit someone, that's really bad. So maybe that's like minus a million; it's a million times worse than getting to your destination a little bit faster. Good. So then the next step, once you've assigned the values to the different possible outcomes, is to think: what's the likelihood of each of those outcomes? And let's say there's a ninety-nine percent chance that you wouldn't hit anyone, but a one percent chance that you would.

[00:06:23]

The idea of maximizing expected value, or maximizing expected utility, is that you take the sum of the products of the values of the outcomes and the probabilities of those outcomes. So for one action, you speed: that's a ninety-nine percent chance of plus one, but it's also a one percent chance of minus a million. And let's just say the value of going at normal speed is zero. Now, if you take 99 percent times plus one, plus one percent times minus a million, that number is clearly less than zero.
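As a rough illustration of the arithmetic Will is walking through here, this is a minimal sketch in Python, using the made-up numbers from the example.

```python
# Expected value = sum over outcomes of (probability * value),
# using the illustrative numbers from the speeding example.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

speed = [
    (0.99, 1),          # 99% chance: arrive a little faster (mildly good, +1)
    (0.01, -1_000_000), # 1% chance: hit someone (catastrophically bad)
]
normal_speed = [(1.0, 0)]  # stipulated value of zero

print(expected_value(speed))         # -9999.01
print(expected_value(normal_speed))  # 0.0, which is higher, so don't speed
```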

[00:07:08]

And that means that not speeding, just driving at normal speed, has higher expected utility, or expected value. And so that's the action that would be recommended. Right.

[00:07:17]

What's interesting is that a lot of people, maybe most people, sort of bristle at the idea that you should be even implicitly doing this calculation of whether the payoff to you of getting somewhere a little faster is worth risking someone's life. And they'll say that no amount of time saved for you is worth risking someone's life. But in fact, they are making that trade-off, they are making that implicit calculation, every time they drive, let alone every time they speed.

[00:07:46]

Even if you drive without speeding, you have a chance of hitting someone. Even if you drive at 20 miles per hour, only going to the local shops, you're still saying: there's some chance I'm going to hit someone. Right. So the idea some people express, that life is of infinite value so we should never take any risk with it, just doesn't make sense. Right. So would it be fair to say you're making this calculation under the utilitarian frame?

[00:08:10]

Importantly, I think not. I think absolutely anyone should be thinking in terms of expected utility; you don't need to be a utilitarian. So maybe I'm using the word very broadly or something. Yeah.

[00:08:25]

So, as I understand it... and I don't think you need the caveat you had, but perhaps that's me being very British... utilitarianism is the view that you always ought to maximize the sum total of wellbeing. And there are three ways you can depart from that. One is that you could value things like art or the natural environment, things that aren't related to people or people's wellbeing. The second is that you could believe in side constraints.

[00:08:59]

So you ought not to kill one person in order to save even five people, or a hundred people. And then the third is that perhaps there are some things that are merely permissible for you to do, even though there's some other course of action that would allow you to do even more good. So if you're just above the poverty line, perhaps you're not required to give even more than you already have done in order to save people who are even poorer than you.

[00:09:28]

You have some realm of the merely permissible. So we've started to get into different moral theories diverging from sort of pure total utilitarianism. But I derailed you a little bit by venting about people's unwillingness to acknowledge that they are, in fact, making trade-offs. You were starting to talk about empirical uncertainty, and how to distinguish moral uncertainty from that. Right, and decision making under empirical uncertainty is extremely well studied, going back all the way to the 50s and 40s, even going back to Blaise Pascal.

[00:10:02]

And so we have a very good formal understanding of decision making under empirical uncertainty. We haven't solved all of the problems; I know you talked about Newcomb's problem in a previous episode, so there are still some underlying philosophical issues. But in general, it's pretty well accepted that the rational thing to do under empirical uncertainty is something like maximizing expected value. But now the question is: if that's the case for empirical uncertainty, uncertainty about what's going to happen, is it also the case for uncertainty about your values, or uncertainty about what's morally the case?

[00:10:39]

So now consider another example. Suppose you're deciding what to order for dinner, and you have one of two options: you can order the steak, or you can order a vegetarian risotto. And you think, probably animals don't really have moral status, so there's no point in worrying about...

[00:11:01]

Their pain, or their death. Exactly, yeah. We shouldn't worry about harming or killing animals, in just the same way as we shouldn't worry about destroying something that isn't sentient. And let's say that's your considered view; maybe you've even thought about it a lot and feel quite committed to it. However, it would seem quite overconfident to be absolutely certain of this, 100 percent certain. In fact, if you were 100 percent certain, that would mean that there are no conditions under which you'd change your mind.

[00:11:30]

And in fact, there are a lot of animal welfare advocates and vegetarians out there, and they have what seem like fairly compelling arguments. So it seems like you should have at least some degree of confidence that they have the right view and you don't. So let's say you put just 10 percent confidence in their view. Well, now this looks kind of similar to the speeding-around-the-blind-corner example, where if you choose the steak, you're doing something that is probably morally permissible.

[00:11:59]

It's probably fine, and let's say there's even a slight benefit, because you enjoy eating meat. But you're taking a risk, because there's a 10 percent chance, given what you believe, that actually animals do have moral status, and it's severely wrong to be incarcerating them in factory farms and then buying and eating their flesh. This is a bit of a confusing example for me, because it's so tied up with empirical uncertainty; I can have uncertainty about whether a chicken is conscious or has the capacity to suffer.
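To make the structure of that reasoning concrete, here is a minimal sketch that treats the two moral views the way the speeding example treated its two outcomes. The credences and values are invented for illustration, and, as the rest of the conversation makes clear, whether you can even assign comparable numbers across moral views is itself one of the hard questions.

```python
# Expected-value reasoning applied to moral uncertainty (illustrative numbers only).

def expected_choiceworthiness(credences, verdicts):
    """Weight each moral view's verdict on an action by your credence in that view."""
    return sum(credences[view] * verdicts[view] for view in credences)

credences = {
    "animals_lack_moral_status": 0.9,  # your 90% credence
    "animals_have_moral_status": 0.1,  # the 10% you give the animal-welfare view
}

# How good or bad is each order, according to each view? (made up for the sketch)
steak   = {"animals_lack_moral_status": 1, "animals_have_moral_status": -1000}
risotto = {"animals_lack_moral_status": 0, "animals_have_moral_status": 0}

print(expected_choiceworthiness(credences, steak))    # 0.9 - 100 = -99.1
print(expected_choiceworthiness(credences, risotto))  # 0.0, so order the risotto
```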

[00:12:32]

And that's an empirical question that might determine my values. So what if, and feel free to reject this, but what if we tweak the situation so that, let's say, I know exactly how conscious chickens are, and the question is just: is it morally wrong to take the life of an animal with that degree of consciousness? Yeah, that's right. So in all of these examples, we can imagine that we just know all of the empirical facts, all the matters of fact.

[00:12:57]

Right. So, yes, we can make believe, say, that we have the chicken right in front of us and we could just wring that chicken's neck, and I just don't know whether it's bad to kill the chicken or not. Yeah, I don't know whether that's a wrong thing to do, even though I know all the facts about chicken consciousness and so on. There's just this further moral fact: given all those empirical facts, is it wrong to act in a certain way?

[00:13:21]

I was complaining a few minutes ago about people's unwillingness to acknowledge empirical uncertainty, at least consciously acknowledge it. But I think with moral uncertainty it's even more so. It's not just that people don't want to acknowledge that there's a chance their moral theory might be wrong; I think they act as if they're positive they're right. Exactly right. And that's why, when I had this idea, seven years ago now, that we should take moral uncertainty into account in the same way we take empirical uncertainty into account...

[00:13:55]

The reason it was so interesting to me was precisely because so few people seem to think in this way. Instead, people have what I call the football supporter approach to moral philosophy. The football supporter? Yeah, like the football fan. Yeah, I'd say football fan. Yeah.

[00:14:13]

It's so adorable that your term for a football fan is "football supporter". Yeah, in my head I'm thinking of soccer, I'm thinking about American football as well, I don't know. Does "supporter" not translate? I think it does, I don't feel like that's a weird thing you were saying. But it might also just be that, between the two of us, we don't have enough sports knowledge to have any idea what a football team supporter is called.

[00:14:43]

Yeah, you would say, oh, there's a stadium full of cheering football supporters, or football team supporters. OK, let's go with "football fan" for now.

[00:14:54]

OK, the football fan model, where you have a team that you support, and you just identify with that team. And that's the way people often think about moral theories. They will say, oh, well, I'm a utilitarian, in the same way as they might say, yeah, I'm a Rangers fan, I'm an Arsenal fan. It's like they have an allegiance to a particular moral view. But that's very different from having a certain degree of belief in it.

[00:15:22]

Right. And that means that when people think about practical moral issues, they tend to say, well, I'm a utilitarian, therefore I believe that you ought to kill one person to save five. Or they will say, well, I'm some sort of natural law theorist, and therefore I don't think that animals have any relevant moral status, it's not wrong to kill them, therefore it's OK for me to eat meat. But that's very different from how we ought to reason, and from how we do reason empirically, because they're not taking into account the possibility that they might be wrong about their moral values.

[00:16:02]

So let's talk about what it means to be wrong about our moral values. If I were to express my empirical uncertainty, that can be cashed out in something concrete. There are different ways to do it, but I might say, well, I'm 50 percent confident this coin will come up heads, and by that I might mean that, out of all the past coin flips of which I'm aware, roughly 50 percent of them came up heads. Or I might say something like, I put...

[00:16:31]

Twenty-five percent probability on Trump being impeached before his term is up. And by that I might mean that I would bet at those odds: I'd be willing to pay you ten dollars if he doesn't get impeached, provided you pay me thirty dollars if he does. That's right. So those statements of uncertainty can be cashed out in some concrete meaning. But what would the equivalent be for moral uncertainty?

[00:16:57]

I think one way to cash out moral uncertainty is in terms of what you might think at the end of a very, very long period of reflection. So suppose you could have a hundred thousand years, and your brain was souped up in all of these ways: you didn't have any gaps in your empirical knowledge, you were able to, you know, mess around with your cognitive architecture to remove your biases, you had an IQ of a thousand. You had all sorts of ways in which you'd been souped up.

[00:17:30]

You could discuss indefinitely with friends, you didn't have any pressing needs that might distort your thinking. So it's, in some sense, a very idealized version of your own self. What would you believe at the end of that process? It seems to me debatable whether that would still be me in any relevant sense, but I'm also not confident that that matters. Maybe the question I really care about is not what I would value, but what a person I would consider an ideal moral reasoner would value...

[00:17:58]

What would that person value, even if that person is so different from my current self that they wouldn't really be me? Does that seem right? Yeah, so this idea is exactly that, a kind of idealized-reasoner view. So moral uncertainty would be my uncertainty about what this idealized version of myself would end up valuing after all of these improvements to reasoning and to knowledge, including having all the empirical facts, I think? Or maybe just having the empirical knowledge I think I have right now.

[00:18:28]

Yeah, I think that's right. For the purpose of these examples of moral uncertainty, as I did in my thesis, we can just assume you know all the relevant empirical facts, because we don't want to get tangled up in those issues at the same time. Right. So that's one way of operationalizing it. It does depend on your metaphysics, though. It might be that you have a very simple subjectivism, according to which what you ought to do is just whatever you want now, in which case moral uncertainty is just uncertainty about what you want now.

[00:19:04]

And I don't think that's a very plausible metaphysical view, but it would have an implication for what it means to be morally uncertain. Or you might have a very robust form of moral realism, according to which your uncertainty really is just uncertainty about what the world is like, such that it's possible that even after this very long period of reflection, you would still be radically wrong. Got it. And there, obviously, you can't explain it in frequentist terms, like you did with flipping a coin, where on average a fair coin will come up heads fifty percent of the time.

[00:19:44]

But nor can you do that for something like: what's the thousandth digit of pi? There, naturally, you would have ten percent credence in each of the digits, but you can't explain that, at least not obviously, in frequentist terms, because either it is the number two or it isn't; it's just that I don't know which. Right, right. OK, so maybe the right thing to ask first is... we've been talking about sort of simple binary cases, like is it wrong to kill the chicken or isn't it?

[00:20:14]

How many different moral theories could one take into account in this sort of uncertainty calculation? I think ultimately the answer is a lot. In the first few years of research on this topic, people focused a lot on very simplified or idealized cases. One is this meat-eating case, where the argument was that, because of the asymmetry of the stakes, it's much worse to eat meat if eating meat is wrong than it is to refrain from eating meat if...

[00:20:48]

Eating meat is permissible. So you ought, in general, to refrain from eating meat. I can see someone complaining that that's kind of like Pascal's Wager: well, you know, maybe God is unlikely, but if he does exist, then we're way better off believing in him, so I'll play it safe. In the animal case, that seems less absurd, but I'm still uneasy about establishing a system of moral decision making based on that principle.

[00:21:12]

People often bring up Pascal's Wager when I mention this argument, but I think the analogy isn't great. I think it's more like: should you wear a seatbelt when you go driving? And most people think yes. They don't say, whoa, there's just a one in a thousand, or one in a hundred thousand, chance that I get into a car crash, and I don't like wearing a seatbelt.

[00:21:32]

I guess the reason it could be a good analogy, I think, is that it's so hard to pin down what the actual probability is of some very burdensome moral theory. If you follow this policy of saying, well, the consequences if that moral theory were true are so great that I should just follow it, then, because it's so subjective, there are going to end up being a ton of actually very bad or very implausible moral theories that you end up kowtowing to.

[00:22:02]

Yeah. So I think there is a question about, well, what credences, what degrees of belief, ought you to have in these different moral views? At least as far as the idealized examples go, take the animals case again: if you imagine you had a thousand years to think about these issues, what's the chance that you would come to believe that animals have moral status? It seems strange if you were to put it at less than one percent.

[00:22:31]

Yeah, OK, so in the animals case it's, yeah, well within the range of plausibility that animals have moral status, and therefore it's not like Pascal's Wager. Yeah, although I'll come back to a way in which there might be an analogy in just a moment. But before then, we can think about certain other idealized cases as well. So animals is one; a second is our obligations to the very poor, to distant strangers.

[00:23:05]

So Peter Singer has argued for a long time that it's just as wrong to refrain from donating three and a half thousand dollars, which is roughly the amount needed to save a life with antimalarial...

[00:23:20]

Bednets. Or, strictly speaking, to do the amount of good equivalent to saving a life, it's more like seven and a half thousand dollars. It's just as bad to refrain from giving that as it is to walk past a child drowning in front of you, which we would all see as clearly, very gravely morally wrong. And the question then is, well, suppose you don't believe Peter Singer's argument; you think there is some morally relevant difference.

[00:23:55]

Distance or psychological salience, you think, really does make a morally relevant difference. The question again is, well, how bad is it to walk past a child drowning in front of you? And secondly, how likely do you think it is that Peter Singer is correct, that those really are morally equivalent? And again, it seems like it would be very overconfident, given the difficulty of ethics, given how much moral change there has been across subsequent generations, given the obvious biases we potentially have in this area, to have lower than, say, 10 percent degree of belief that Peter is actually correct about this. In which case, again, it's like saying: OK, I'm going to walk past the shallow pond in order to protect my new suit, because I'm not sure the child in there is really drowning. It might be, it might not be, but the chance that you would save a child makes it clearly worth the cost to investigate.

[00:25:01]

Right. And some people go even further and argue that there's just no distinction between acts and omissions at all, and therefore that failing to save the child in sub-Saharan Africa whom you could save with seven and a half thousand dollars is actually just as morally wrong as going over there and killing someone. And that's the sort of view, maybe you have somewhat lower credence in it, but again, it means you've got the potential to be doing something very seriously morally wrong.

[00:25:34]

And this isn't all idle speculation. Think back hundreds of years. Think of the way that people treated women, or homosexuals, or people of other races. Think of people who kept slaves. Aristotle spent his entire life trying to think about ethics and how to live a good life, and he never even thought that slavery might be immoral. And so he kept slaves, despite his best intentions. And so we should actually be very open to the idea that we're doing things that are perhaps very seriously morally wrong.

[00:26:08]

If we don't want to be engaged in practices that are gravely morally wrong, in the way that we now look at slavery, then we should be thinking about the other ways we could currently be acting. And there's one final example that I think has, as a psychological matter, caused a lot of animosity towards the idea, which is the application of this to abortion. Some people have made a similar style of argument: let's consider a 20-week abortion.

[00:26:37]

And you might think, yeah, it's probably the case that a fetus at 20 weeks is not a person, and therefore it's permissible to end that pregnancy on the grounds of not wanting to have a child at that particular time in life. We'll put aside cases of rape, or cases that seriously compromise the health of the mother or the child; it's just a case where it's not a great time for the parents to have that child. You might think, well, shouldn't you be concerned that...

[00:27:15]

Lots of views of personhood say that it starts much earlier than 20 weeks. And if those views of personhood are correct, well, then you're doing something as wrong as if you were killing a newborn baby, which everyone, well, with some exceptions such as Peter Singer, but almost everyone regards as a very serious moral wrong. And you might think, well, even if there's only a one in ten chance that you're doing something as wrong as killing a newborn baby, that risk alone would be sufficient to mean you shouldn't do it, even at significant cost to yourself.

[00:27:57]

Mm hmm.

[00:28:00]

I mean, you could even push it farther back, to the point where the fetus is really just a cluster of cells, and there's really no empirical uncertainty about whether it's at all conscious or whether it can feel pain. But there's still significant moral disagreement over whether it has moral status, whether it's wrong to kill that cluster of cells. Exactly, though I would be less confident in the moral theories that say it's wrong there than I am at 20 weeks. And it would, of course, be reasonable to be less confident.

[00:28:31]

Yes, but there is this, you know, Catholic tradition that holds that personhood begins at conception. And this takes me to the way in which I do think this is similar to Pascal's Wager, but it's not for the reasons you might think, in my view. The reason why, when you hear Pascal's Wager, you think something's up here, is that it's kind of like trying to do surgery with a hydrogen bomb: it's a very powerful new intellectual weapon that you've taken up, and then you're trying to do just one very small thing with it.

[00:29:11]

So in Pascal's case, why does Pascal's Wager not immediately work? He says, OK, you ought to go to church, because there's, let's say, a one in a million chance of going to heaven, which is infinitely good, or of avoiding hell. But you might wonder, OK, why should I go to church rather than flip a coin, and if it's heads, I go to church...

[00:29:34]

If it's tails, I don't go, say, because I never wanted to go anyway.

[00:29:41]

So Pascal's argument is that going to heaven is infinitely good; we'll put hell to the side. So any chance of going to heaven has infinite expected value, because we said: look at the probability and multiply by the value. Even if the probability is as low as one in a million, the value is infinite, therefore the expected value is also infinite. And that's going to outweigh, it'll be greater than, any merely finite amount of good you might get by not going to church and instead having a nice lie-in or something.

[00:30:17]

But the issue is that any probability of something infinitely good, any probability at all...

[00:30:23]

Right, any probability times an infinite value gives an infinite expected value. So as far as this argument goes, there's nothing to choose between going to church and flipping a coin and going to church only if it's heads. Yes. Similarly, me drinking a beer has some chance of getting me into heaven, right? And therefore that also has infinite expected value. I guess I did sort of think of a counterargument of this form: well, what if there's another God who will bring you to heaven only if you don't go to the church of the first God?
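Here is a tiny sketch of why the naive expected-value math stops discriminating at this point, using Python's float('inf') to stand in for an infinitely good outcome; the probabilities are of course made up.

```python
# Any positive probability times an infinite value is an infinite expected value,
# so wildly different strategies all come out exactly tied.

INF = float("inf")  # stand-in for "infinitely good" (getting into heaven)

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

go_to_church = [(1e-6, INF), (1 - 1e-6, 0)]       # one-in-a-million shot at heaven
coin_flip    = [(0.5e-6, INF), (1 - 0.5e-6, 0)]   # half the chance, same infinity
drink_a_beer = [(1e-12, INF), (1 - 1e-12, 1)]     # tiny chance, plus a finite beer

for name, action in [("church", go_to_church), ("coin flip", coin_flip), ("beer", drink_a_beer)]:
    print(name, expected_value(action))  # all three print inf
```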

[00:31:00]

And... or you can just construct arbitrarily different levels of infinity... Yeah, you can construct even better heavens, with other required actions in order to get into them.

[00:31:14]

It quickly becomes... I don't know, maybe some people have figured out some way to systematize all that, but it seems uncomfortable. Yeah, so there are some people trying to do this, including a friend of mine, Amanda Askell, who is actually working on it; it's a key part of her PhD. And she has this wonderful blog post, which is ten responses to objections to Pascal's Wager, in a hundred words or less. And that's not, like, serious meta-ethical clickbait?

[00:31:45]

Exactly, yeah, it's a niche audience. But we'll link to that on the podcast website. Yeah. So I actually think Pascal's Wager is a much better argument than people give it credit for. I do think the response I gave is a very serious, and I think devastating, challenge to the argument as stated. But I don't think it's yet a reason to reject Pascal's Wager entirely. I think it just shows that dealing with infinities is currently something that contemporary decision theory is not able to do.

[00:32:15]

Right. And in the context of moral decision making... so I can imagine maybe some versions of utilitarianism having trouble with outcomes that we stipulate have infinite goodness or infinite badness. But it also seems to me like it might come up for moral views of the world that just insist that there's no exchange rate. Like, you just can't lie; lying is just always wrong; there's no amount of good that you can do in the world with a lie that would make the lie OK.

[00:32:46]

Is that kind of like saying that lying has infinite badness?

[00:32:50]

Yeah. So there's another analogy to Pascal's Wager that I'll come back to in a moment. But one thing is that very similar problems do seem to arise when you start thinking about absolutist moral views, where an absolutist moral view holds that there are some good things that just have increasing value, like saving lives, but there are some things that are bad and are never outweighed. So killing is always wrong, no matter what the circumstances; even if a hundred civilian lives were at stake, it would still be wrong to kill.

[00:33:25]

And so then you might think, well, OK, suppose there are a billion lives at stake, the end of the world, but I could kill one person to save those billion lives. Most people would think, on reflection, you ought to do that. But the absolutist view would say, no, it's wrong. Perhaps the way to represent that is that it's infinitely wrong. And if that's so, then no matter how low a probability you have in that view, even a very small one, you would get the conclusion that you ought not to do it under moral uncertainty, if you're maximizing expected value.

[00:34:01]

And it wouldn't just apply to killing, which is one of the more plausible, I still think very implausible, but one of the more plausible, examples of an absolutist constraint. It would apply to, say, lying, or extramarital sex. There are many things where you might imagine some moral view, a view you might have very little confidence in, saying there's an absolute constraint against this action. And if we represent that as assigning infinite negative value to that action, then you should seemingly just never do it, no matter the cost.

[00:34:45]

And there are so very many issues that arise when you start trying to take moral uncertainty into account in the same way as you do empirical uncertainty. And so my research was firstly making the case for why you ought to, and then just going through all the many problems and trying to work through them. In this case, I think one thing we can do is a bit of a partners-in-crime argument, where we can say: even under empirical uncertainty, as long as you think there's some chance of gaining an infinite amount of value, it seems like expected utility theory says you ought to pursue that.

[00:35:19]

And that suggests that when it comes to very, very small probabilities of huge amounts of value, or even infinite amounts of value, something's going wrong with expected utility theory; there's just a bug in the theory. And maybe it comes up a little bit more often under moral uncertainty, but it's just the same sort of bug. So we know we need to iron this out in some way, we don't know exactly how, and that's why it's definitely still a problem, something we want to address, but not...

[00:35:52]

Necessarily one that's distinctive to moral uncertainty. So that's one approach, but there is another approach as well. So far, we've been assuming that different moral views are comparable, in the sense that you can say how much is at stake on one moral view compared to another. That is, if I'm killing one person to save five, there's a meaningful answer to the question of how much is at stake for the utilitarian, who says you ought to kill the one to save five, versus the non-consequentialist.

[00:36:31]

Just think of a non-utilitarian who simply rejects utilitarianism and similar theories, and who says you ought not to kill one to save five. Is there a meaningful answer to the question of which theory has more at stake? It's not at all obvious that we ought to be able to make comparisons between theories. Perhaps it's possible for very similar theories; perhaps I've got one form of utilitarianism that only cares about humans, and another that cares about humans and non-human animals.

[00:37:04]

But when you've got these very different moral views, like absolutism and utilitarianism, perhaps it's just not possible to say that one is higher stakes than another.

[00:37:17]

And "high stakes" here would be like... you know, for the utilitarian, killing one person to save five lives, there's some net utility created, but in the grand scheme of the world it's not a huge deal. Whereas for the anti-utilitarian, if you kill the one person to save five, it's a huge deal; you've just committed a huge moral wrong. So if there were comparability, then you could say, well, we should defer to the anti-utilitarian, because the stakes are higher on their view.

[00:37:51]

That's exactly right. And earlier we were saying that for the animal welfare view, eating meat seems higher stakes than it does for the non-animal-welfare view. Yeah, OK. But there is another possibility, which is that you could think the very idea that one of these is much higher stakes than another is a kind of illusion. These are such different moral views that you shouldn't think we're able to compare the stakes, the units of the stakes, across different moral views.

[00:38:22]

So I think the natural unit... I use the technical term "choice-worthiness", but I think the natural unit is something like degrees of wrongness, where we very naturally say that lying is a bit wrong, but punching someone, that's more wrong, and murdering someone, that's much more wrong again. So I think we do naturally think in terms of quantities of wrongness, even though we might not ever use that term. Because you also might think that...

[00:38:57]

You know, the difference in wrongness between punching someone and killing someone is a larger difference than the difference between telling one sort of lie and telling a slightly worse lie. Again, we seem to be able to make sense of that idea, and that's the unit in this case. But then the question is: is the unit of wrongness the same unit across theories? Can you make sense of that? Or is it like saying the difference in temperature between 20 degrees Celsius and 22 degrees Celsius is the same size as the difference in length between a two-centimeter object and a four-centimeter object?

[00:39:35]

Just a category error, something that's just meaningless. Right. And so one of the things I do is actually explore: if that's the case, you just can't apply expected utility theory. It's like trying to push a square peg into a round hole; it's just not the right formal system for that. And so one of the things I do in my PhD, and in this book that I may well one day finish... Your eight-year... is this one of the, what, three books you've started?

[00:40:06]

Have you finished the others? Well, no, I'm 50 percent... well, I've started this moral uncertainty one, but I haven't finished it. OK, fine. But anyway, I am hoping to finish it in the next few weeks: to develop a formal system that would allow you to make what seem like more reasonable decisions, even in light of theories that are very, very different and therefore incomparable, and even in light of theories that don't give you quantities of wrongness at all.

[00:40:42]

So perhaps this absolutist theory just says, no, there are no degrees of wrongness, you can't make sense of the idea that murder is very, very wrong; there are just two categories of actions, the right actions and the wrong actions. And we don't need to go into the technical details of the proposal, but the key insight is to think of the different moral views in which you have some degree of belief as kind of like a parliament, and you can take a vote among those different moral views. Because when we go to the voting booth, we don't compare...

[00:41:18]

Yeah, we don't compare intensities of people's preferences. Although I actually think you should, but that's a whole other episode; I can rant about current voting systems for hours. And even in countries like Australia, where you rank candidates, you say this is the best, this is my second favorite, this is my third favorite, that's still not comparing preference intensities. It's just giving an ordinal ranking. But then you can put that into a voting system that spits out: this is the best candidate.

[00:41:53]

And in the same way, I can use a weighted voting system. Let's say I have three different moral views. One is this absolutist view, the second is a utilitarian view, and the third, very different again, is a virtue-ethical view. And let's suppose I have just one percent credence in the absolutist view, in which case I would give it a one percent weighted vote: in this parliament of a hundred people inside your head, one person thinks that view is correct. And maybe I have fifty-nine percent credence in utilitarianism, so it gets fifty-nine votes.

[00:42:30]

And then the final view, the virtue-ethical view, would get forty votes. And it turns out, to return to current voting systems, there's a huge amount of research that's been done on different voting systems, their mathematical properties, and which are more or less desirable. And all of the ones you've heard of, the voting systems that are actually used, are among the worst in terms of their mathematical properties. First-past-the-post is about as bad a voting system as you can get, and single transferable vote, or the alternative vote, which is what Australia uses, is probably second worst out of the commonly discussed ones.

[00:43:12]

It's actually so bad that, in some circumstances, to use the US example: if I was initially planning to vote Republican first and Democrat second, and then I changed my mind and switched the Democrat to first, that can cause the Democrat to lose. It violates a property called monotonicity, where having one voter raise their level of preference for a candidate can make that candidate do worse. Wow. OK, but let's step back, then.

[00:43:45]

As I said, I could talk about this for hours. I believe you.
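For the curious, here is a minimal sketch, with made-up ballot counts, of the kind of monotonicity failure Will describes in instant-runoff or alternative-vote systems: raising a candidate on some ballots makes that candidate lose.

```python
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of (count, ranking) pairs; returns the IRV winner."""
    remaining = {c for _, ranking in ballots for c in ranking}
    while True:
        tally = Counter()
        for count, ranking in ballots:
            top = next(c for c in ranking if c in remaining)  # highest surviving choice
            tally[top] += count
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):  # majority of remaining votes wins
            return leader
        remaining.remove(min(tally, key=tally.get))  # eliminate the last-place candidate

# Original election: A wins (C is eliminated and C's votes transfer to A).
before = [(39, "ABC"), (35, "BCA"), (26, "CAB")]
print(instant_runoff(before))  # prints A

# Now 10 of the B>C>A voters move A from last place to first, i.e. they raise A.
after = [(49, "ABC"), (25, "BCA"), (26, "CAB")]
print(instant_runoff(after))   # prints C; raising A caused A to lose
```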

[00:43:51]

So it turns out you can just import all of this great work that has been done by voting theorists and social choice theorists in economics into this domain. In cases where you can't straightforwardly apply expected utility theory, you can treat the different moral views kind of like people in your head, you give each of them some weight depending on how plausible you find the view, and then you take a vote amongst all of them. And that has the implication that you can still make better or worse choices even when you can't make comparisons of strength of wrongness across these different views.
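To make the parliament idea concrete, here is a minimal sketch of one way it could work: a credence-weighted Borda count over each theory's ranking of the options. The theories, credences, and rankings are invented for illustration, and Will's actual proposal is more sophisticated than this. Note that a plain credence-weighted plurality vote would let the 59 percent view win every time; using full rankings is what lets a broadly acceptable compromise come out on top, which is the point Will makes a moment later.

```python
# A "moral parliament" as a credence-weighted Borda count over ranked options.

credences = {"utilitarian": 0.59, "virtue_ethics": 0.40, "absolutist": 0.01}

# Each view's ranking of three options, best first (made up for the sketch).
rankings = {
    "utilitarian":   ["kill_one_save_five", "compromise", "do_nothing"],
    "virtue_ethics": ["compromise", "do_nothing", "kill_one_save_five"],
    "absolutist":    ["do_nothing", "compromise", "kill_one_save_five"],
}

def weighted_borda(credences, rankings):
    scores = {}
    for view, ranking in rankings.items():
        n = len(ranking)
        for position, option in enumerate(ranking):
            # Borda points: n-1 for first place down to 0 for last,
            # weighted by how plausible you find the view.
            scores[option] = scores.get(option, 0) + credences[view] * (n - 1 - position)
    return scores

print(weighted_borda(credences, rankings))
# compromise scores about 1.40, kill_one_save_five 1.18, do_nothing 0.42:
# the compromise wins even though no single view ranks it first.
```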

[00:44:31]

And it also seems to have the effect that you won't get swamped by these low-probability but very high-stakes theories, because what that swamping relied on was a very low-probability theory being able to say: this is just hugely important, infinitely more important. And that's not something a theory can say on this model. So this is very cool. I'm wondering, and I'm sure you've thought about this, but as I'm thinking about it for the first time, I'm wondering if it might be even better to say: OK, let's say I put fifty-nine percent credence on utilitarianism being correct.

[00:45:11]

If I follow this voting system, as I'm understanding it, that means that if there are a hundred situations in which I have to choose how to act, fifty-nine percent is a majority, but even if it were just a plurality, it would win every time. So in a hundred out of a hundred cases, I would do the utilitarian thing. But what if instead I just randomized? OK, so that's actually not the case.

[00:45:35]

It's not the case that it would win a hundred times out of a hundred; that would be the case if you were using first-past-the-post as the voting system in the proposal. But all the best voting systems are ranked voting systems. So let's say I have five options available to me. You look at each theory and ask: well, how do you order them? And the utilitarian would say, saving ten lives is best, saving five lives is second, and so on. It gives you not just "this is my favorite and these ones aren't"; it gives you a ranking. And some theories would only ever sort things into two categories.

[00:46:12]

So, as I said, perhaps the absolutist theory just has the two categories, right and wrong. And the overall voting still works; you just have many tied options. I see. And that would actually mean that, sure, if you had fifty-nine percent credence in utilitarianism, it would win often, but it wouldn't always win. And in fact it can be the case that, because you're ranking these different actions, you choose an action that is not the most preferred according to any moral view.

[00:46:44]

Perhaps it's just second best according to all the moral views in which you've got some degree of belief, even though it's not the top, the most preferred, according to any of them. Interesting. We're almost out of, well, technically over, time. But if I were to ask one more question about this, it would be: why is this not the consensus view among philosophers who have thought about ethics? Is that just because it's still relatively... well, I'm referring partly to your specific take on how to deal with moral uncertainty, but also to the slightly broader claim that we should be figuring out what to do about moral uncertainty at all. Is the reason that philosophers don't all agree that this is an important issue that demands a resolution...

[00:47:33]

Is that just because it's not very well known, or is there some other objection that philosophers have to the kind of reasoning you've been laying out?

[00:47:39]

Yeah. The main alternative view among philosophers, which is one that I struggle to have sympathy for...

[00:47:48]

It doesn't keep me up at night... is the view that under moral uncertainty, you ought to just do what's actually right.

[00:47:56]

So that's kind of begging the question, like... yeah.

[00:47:59]

So people like you and I tend not to find this a compelling answer, because, well, we don't know what's actually right. We don't know.

[00:48:07]

But yeah, I guess... are you sure you're presenting this view charitably? No, this is absolutely the view.

[00:48:17]

So, in the same way, you might think in epistemology that what I ought to do is believe the truth and disbelieve the false, and that I ought to do that whatever my evidence is like. Actually, very few people have this view, but you might have it. And on this view... well, should Aristotle have kept slaves, for example? I think there is a plausible sense in which we want to say, no, he shouldn't have.

[00:48:48]

He acted wrongly in keeping slaves, even though, given what he knew, given his evidence, perhaps he thought it was extremely unlikely that keeping slaves was wrong. Well, then I feel like philosophers, if they're using the word "ought" that way, are just answering a different question than the question of: what should I, a person with imperfect reasoning and moral knowledge, do?

[00:49:11]

So that's my view as well; there are just two different senses of "ought" going on. But then you have this problem of how many senses of "ought" there are, because I've talked about what you ought to do when you're morally unsure what you ought to do. Yeah, but you shouldn't be that sure of my view either.

[00:49:26]

Oh, no, no, you shouldn't be.

[00:49:29]

So I was defending this maximize-expected-value view, modified with voting theory a little bit. But, you know, that's a new view, you shouldn't be super confident in it. So is there another sense of "ought"? Correct, it seems like there is. Yeah. So what you do is then take into account your uncertainty about what you actually ought to do under moral uncertainty. And then I'm sure you can see where this is going to go, because you should be unsure about that too.

[00:49:51]

You take that into account, and you're led into an infinite regress. Do you have an answer to that that can fit in the very short remaining time?

[00:50:00]

Honestly, I don't, actually. I think it's exciting. Yeah, I think there are a few plausible things you can say. I've tried to do work arguing that, you know, you should just go all the way up the hierarchy, so to speak, and what you ought to do is what you ought to do in the limit, at the end of the hierarchy. But I think at some point, when it comes to rationality, you do have to say: this is just what's correct, even though it's not accessible to you, even though there's no way you could have known that this is correct.

[00:50:34]

Yeah, there just has to be some objective element. Yeah.

[00:50:38]

I mean, the infinite regress doesn't seem like a unique problem; there are infinite regresses in all types of knowledge and decision making. Like with induction or something, I find them somewhat troubling. But I guess I consider solving infinite regresses to be this other, separate problem that would be nice to work on. The fact that it appears in lots of these other problems we're trying to solve isn't, I think, a reason to instantly give up on our solutions to those problems.

[00:51:12]

That's right. And the reason this alternative view doesn't keep me up at night is because, you know, I'm really trying to use my life to do as much good as I can, I'm really thinking about what the biggest global priorities are for the world. And in order to answer that... you know, despite two and a half thousand years of Western philosophy, and even more of non-Western philosophy, we don't have the answers to these moral questions yet.

[00:51:41]

And so we need to act, we need to make decisions, in light of our current state of moral ignorance. And so the question is: what's the best decision? And if someone says, oh, well, you should just do what's right, I'm just like, you're not being very helpful.

[00:51:56]

Yes, yes. I feel like the people taking that approach can't possibly have been part of, like, real discussions about decision making, or they would have heard themselves, they would have listened to themselves and thought, oh, I sound like a tool.

[00:52:12]

I was going to name some of the people who take that view... You could have done that earlier, and then I could have changed the way I said that. Too late, we're down this path now.

[00:52:22]

So anyway, before we close, Will, I always invite my guest to introduce the Rationally Speaking pick of the episode. The prompt I often give is just: what's a book or article or website or something that has influenced you? But for you, I'll give a more specific prompt. I would like to hear about a book or website or article or something that you disagree with, at least significantly, but that you still respect. Yeah. So the answer I'll give to this is a book called Anarchy, State, and Utopia, by Robert Nozick.

[00:53:01]

And it is an imperfect book in a lot of ways. But what the book does is really lay out the philosophical grounding for libertarianism, libertarianism as a theory of justice, which says that all there is to justice is: did you acquire your property in a way that was just, meaning you didn't take it from anyone else, and did you transfer it in a way that was just? And I'm not a libertarian; I think the view is incorrect, I think that's fairly clear. But...

[00:53:36]

I think it contains many very, very powerful arguments, and I think it's a moral view that we at least need to reckon with and take seriously. Very apropos. OK, well, thank you so much, Will. Thank you so much for having me on, it's been a pleasure. Well, thanks. I'm glad we finally did this. Yeah, me too. This has been another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.