[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and with me today is our guest, Professor Kenny Easwaran, a professor of philosophy at Texas A&M University, where he specializes in several fields of philosophy, including epistemology, mathematical logic, and the philosophy of math.

[00:00:58]

Kenny, welcome to the show. Hi, how are you? Good, thank you. Thanks so much for joining us. And before we begin the episode, I just want to remind our listeners that there are full transcripts of every episode now posted on our website, rationallyspeakingpodcast.org. So if you prefer to consume your information- and insight-dense podcasts by reading instead of listening, then just go download the transcript there. So today I want to talk about a controversial paradox in philosophy called Newcomb's Problem, or Newcomb's Paradox, and we'll discuss why it's important, how it's shaped the field of decision theory, and maybe what it has to say about other philosophical topics like free will.

[00:01:39]

So, Kenny, to start things off, why don't you just give some context for this paradox? You can explain it in your own words and relate it to the work that you do. Yeah.

[00:01:52]

So traditionally, decision theory is based on an assumption that there are some parts of the world that you control, which we might call your actions, and there are some parts of the world that you don't control, which we might call the state of the world. Traditional decision theory was developed by Leonard Savage and other statisticians in the middle of the 20th century, assuming that these two things are independent and that the outcome is a product of your action plus the state of the world. And it suggests that the way you should decide what to do is by figuring out the probability of any given outcome, given each of your actions, and then doing the one that has the highest expected value, which you can calculate mathematically.

[00:02:34]

So, for example, if I were trying to decide between taking job A and job B, where job B maybe has a more variable salary, so a higher chance of getting no money this year but a small chance of getting a ton of money, I might take job B because the expected value is higher. Something like that.

[00:02:57]

It depends on exactly what the probabilities are and what the possible values of the outcomes are. OK. Right. Yeah. And this arises out of the traditional study of gambling in particular; that's the way this was initially set up. So we think of the dice or the lottery ticket as the state of the world, and your choice of whether or not to play, or how much to bet, is the action. And in that sort of setting, this makes a lot of sense.
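[Transcript note: to make the expected-value calculation concrete, here is a minimal sketch in Python. The jobs, salaries, and probabilities are made-up numbers for illustration, not figures from the episode.]

```python
# Expected value of an option: sum over outcomes of probability * value.
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs whose probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# Hypothetical numbers: job A pays a steady $60k; job B pays nothing
# 40% of the time, but $200k the other 60%.
job_a = [(1.0, 60_000)]
job_b = [(0.4, 0), (0.6, 200_000)]

print(expected_value(job_a) < expected_value(job_b))  # True: B wins on average
```

On these made-up numbers, job B's expected value (about $120,000) is double job A's, even though B sometimes pays nothing, which is exactly the trade-off described above.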

[00:03:27]

But now there are a lot of situations in the actual world where the things that we want to describe as the states of the world aren't clearly independent of your choice of action. So here's a way this initially emerged. In the 1950s, the statistician R.A. Fisher, who in his early life developed a lot of the important groundwork of statistics and biology that shaped much of the 20th century, was actually working for the cigarette companies, and he was arguing that the evidence we had so far, at least in the 1950s, didn't prove that smoking causes cancer.

[00:04:08]

What he said is, for all we know, there are just certain people that tend to like smoking, and this is caused by some sort of biological feature. And it's just a coincidence that the same trait that causes people to like smoking also tends to cause lung cancer, he said.

[00:04:28]

That feels like kind of a reach, doesn't it? Yes. Yes, it does. But his claim is, if he was right, which I'm sure he wasn't, then that would mean that your choice of whether or not to smoke would not directly affect the outcome of your decision, the outcome of getting cancer. And so he suggests, well, in that case, you might as well just smoke if you like smoking, because if you like smoking, you're probably going to get the cancer anyway.

[00:04:58]

So you might as well get a little bit of pleasure along the way. And in that case, it looks pretty silly. But we actually see this sort of thing arising all the time, whenever someone talks about not wanting to confuse correlation with causation. Right. And so I think there are some famous studies of school choice initiatives, for instance, where you find that the students who get enrolled in charter schools do better than students who stay in the public schools. But in many cases, it turns out that the students who try to enroll in charter schools but don't get in do just as well as the ones who actually succeed.

[00:05:34]

And it's just because their parents are motivated, and that's actually the bigger factor, right? Another example of this that I like is that some hospitals have higher death rates, and people often assume this means that the hospitals are worse. But in fact, they're better hospitals, and they're just taking on riskier cases of people who are more likely to die anyway. The hospitals are increasing those patients' chances of surviving, but still, you know, the base rate of death is just higher because those are tougher cases.

[00:06:01]

Right.

[00:06:01]

So in all of these cases, it looks like there are ways in which your action is related to the outcome that don't seem to go through the effects of your choice. The way the philosophers often put this problem, there's a slightly different version, which is a bit more science fiction, which perhaps brings out some of the worries, and the way that this raises challenges to standard decision theory, more clearly. And this is the one that's traditionally called the Newcomb problem.

[00:06:33]

So, yes. Say that you are at a carnival, and you walk into a booth, and you find a strange game being played there. There's a mad scientist who offers you the option of taking just what's in an opaque box, or, she says, you can take that box plus a bonus thousand dollars. However, as you were walking into the tent, her machines were scanning your brain and your body, and her computers predicted what you were going to choose.

[00:06:59]

And if they predicted that you would choose just the box, then she put a million dollars in it. But if they predicted that you would take the box plus the thousand dollars, then she put nothing in. And while you're deliberating and trying to figure out what to do, you learn that her predictions in the past have been fairly reliable. So a majority of the people that she predicted would just take the box just took the box and got the million dollars, and a majority of the people that she predicted would take the thousand dollars.

[00:07:30]

She predicted that right, and they only got the thousand dollars and didn't get the bonus million. So in this case, if we tried to set this up as a traditional decision problem, what you might think is that, well, the state of the world, well, there's either already a million dollars in the box or not. So it looks like your choice is, do I take the box, or do I take the box plus the thousand?

[00:07:54]

And so you just have to figure out, well, what's the probability that the million dollars are there? And whatever that probability is, you're better off taking both, taking the bonus thousand, right?

[00:08:05]

Right. It almost seems like you don't even have to figure out what the probability is of a million dollars being in the closed box, because no matter what it is, you might as well take both. Otherwise you're leaving money on the table. That's right.

[00:08:18]

That's right. And that's the sort of reasoning that the philosophers call causal decision theory. They say, look at your action, see what it's going to cause, assuming the world is already fixed, and then just look at the effects of your action. Meanwhile, there's also a group of philosophers called evidential decision theorists, who say, well, look at this: the people who just take the box, they end up rich. They get a million dollars. The people who take both generally don't.

[00:08:48]

So what you should be doing is look at what's the probability that the million dollars will be there, given that you just take the box, versus what's the probability that the million dollars will be there, given that you try to take both. And so they said you should take into account your choice as part of your evidence for what the value of the action is. And so they say you should just take the one box. And I think a lot of people find it very hard to decide what's the right response in this version of the problem, the Newcomb problem.
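[Transcript note: the two calculations described here can be written side by side. A sketch, not from the episode; the 90% reliability figure is an assumption for illustration.]

```python
MILLION, BONUS = 1_000_000, 1_000
ACCURACY = 0.9  # assumed reliability of the predictor

# Evidential decision theory: condition the state of the world on your choice.
# P(million in box | one-box) = ACCURACY, P(million | two-box) = 1 - ACCURACY.
edt_one_box = ACCURACY * MILLION
edt_two_box = (1 - ACCURACY) * MILLION + BONUS

# Causal decision theory: the box is already filled or not, with some fixed
# probability p; two-boxing adds the bonus on top of the same p * MILLION.
def cdt_values(p_million):
    one_box = p_million * MILLION
    two_box = p_million * MILLION + BONUS
    return one_box, two_box

print(edt_one_box > edt_two_box)   # True: EDT says take one box
one, two = cdt_values(0.5)         # any p gives the same ranking
print(two > one)                   # True: CDT says take both
```

The disagreement is visible in the code: EDT lets your choice change the probability of the million, while CDT holds that probability fixed, so the thousand-dollar bonus always tips CDT toward two-boxing.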

[00:09:26]

Yeah, in my experience, people have a very easy time deciding what the right choice is, and they can't understand how anyone would choose the other thing; it's just that people are, you know, evenly divided between the two. I think the originator... so was it Nozick who came up with this problem initially?

[00:09:44]

I know that Nozick is one of the early places where it's discussed, but the fact that it's named the Newcomb problem suggests that there's someone earlier named Newcomb, though I don't know the full story of why that is.

[00:09:55]

Although I heard there's another law that states that scientific discoveries are never named after the person who discovered them, and that law itself is not named after the person who discovered it. Anyway, the quote from Nozick, at least, about the Newcomb problem is: "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."

[00:10:21]

That's right.

[00:10:21]

And I think slightly different versions of the problem, though, do get different intuitions. Like in the smoking case or in the charter school case, once people understand what the causal structure of the situation is, once they understand it's just correlation and there's no causation involved, most people go with the thing the causal decision theorists advocate. So they say, if Fisher was really right, if smoking does not cause lung cancer and it's just that smoking is correlated with lung cancer, then you might as well smoke.

[00:10:51]

I think most people go along with that intuition, and I think there are different versions of the problem that push things much farther in the other direction. So I think another classic problem that has some of this same structure is the prisoner's dilemma. For those of you who aren't familiar with that, though a lot of people probably are: imagine that you and your twin have just robbed a bank and the police have caught you. They don't have enough evidence to convict you of robbing the bank, but they do have enough evidence to convict you of a minor crime on the side.

[00:11:26]

And then they give you this offer. They separate you and the twin, and they give you this offer: if you will testify against your twin, we'll drop this minor charge, and with your testimony, we'll be able to convict your twin. Now, of course, they're making the same offer to the twin. And so, of course, if you testify, you get a year knocked off your sentence no matter what. But if your twin testifies, then you get five years added to your sentence.

[00:11:53]

And so what you'd really like is for your twin to not testify and for you to testify.

[00:12:00]

This is presuming you don't actually care about the well-being of your twin. That's right. That's an important stipulation of the thought experiment that probably doesn't hold in real life. Yes.

[00:12:07]

That's very important for the setup of this one.
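[Transcript note: the dominance reasoning in the prisoner's dilemma can be sketched with a small payoff function. The sentence lengths here are hypothetical, chosen only to match the structure described: testifying always knocks a year off, and your twin's testimony always adds five.]

```python
# Years in prison (lower is better). Assume a baseline 2-year sentence
# for the minor charge; these numbers are made up for illustration.
def sentence(you_testify: bool, twin_testifies: bool) -> int:
    years = 2
    if you_testify:
        years -= 1   # the minor charge is dropped
    if twin_testifies:
        years += 5   # your twin's testimony convicts you of the robbery
    return years

# Whatever your twin does, testifying leaves you strictly better off...
for twin in (False, True):
    assert sentence(True, twin) < sentence(False, twin)

# ...yet if you both reason that way, you both end up worse off than
# if neither of you had testified.
print(sentence(True, True), sentence(False, False))  # 6 2
```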

[00:12:11]

But in all of these situations, we have the same sort of setup. There's some feature of the world which we are unaware of: whether or not you have this biological lesion that causes both a desire for smoking and cancer, or whether or not the million dollars is already in the box, or whether or not your twin is testifying. And then there's a choice that you have, and one of your options is better than the other, regardless of which state of the world is actual.

[00:12:38]

But one of the states of the world is highly correlated with one of the choices that you could make and so on.

[00:12:47]

And so this just undermines the setup that Savage assumes for decision theory initially, which is that the states of the world are independent of your action.

[00:12:56]

So the difference is... I agree that it feels like there's a difference between the Newcomb problem and the prisoner's dilemma. In the case of the prisoner's dilemma, it sort of feels like the state of the world is being determined concurrently with my decision: if I decide to testify against my twin, then sort of simultaneously, my twin is deciding to testify against me. So it feels like there's this tight linkage between what I decide in the prisoner's dilemma and what my twin decides, whereas in Newcomb's problem, it feels much more like the state of the world is already determined.

[00:13:32]

There's already a million dollars in that box or there isn't, and my decision can't change that, because that would be like reverse causality, going backwards in time. And it's a little weird, because we've stipulated that the mad scientist is really good at predicting what I'm going to do, so it's almost as if there is reverse causality: if I were to decide, well, I'll just take the box, then that means that probably the scientist decided to put money in the box.

[00:13:58]

But it's not really like my decision can determine whether there's money in the box, because the money was already put there or it wasn't.

[00:14:05]

So I think there are a few related issues here, all around the idea of causality. And I mean, one reason why I for a long time hated this issue was because I thought it's all about causality, and I think that's just this messy thing that probably isn't really real in the world anyway, in some sense, and that if we could just avoid that, we could go back to Savage's decision theory and everything would be nice. But I think it's not just the temporal order that's relevant here, because we get this same temporal order as the prisoner's dilemma, where the state of the world is being decided simultaneously with or even after your decision, just in a classic gambling case.

[00:14:46]

So we're about to flip a coin, and I'm deciding, should I bet on heads or should I bet on tails? But of course, my decision of whether to bet on heads or tails isn't going to affect the outcome of the coin. And similarly, in the prisoner's dilemma, my twin is in a separate room from me. He doesn't know what I'm doing, and my choice isn't going to cause him to do anything different. He's already who he is.

[00:15:13]

He's his own independent person. My action can't affect him anymore. And yet there's still this suspicion that maybe somehow my action has got to be strongly correlated with his, in a way that's different from what's going on in the Newcomb problem. Hmm.

[00:15:35]

So another aspect that we didn't touch on with Newcomb's problem so far is this notion of a perfect predictor, or really good predictor. In some versions of Newcomb's problem, it's stated that the mad scientist... well, sometimes it's a mad scientist, sometimes it's a superintelligent alien, sometimes it's an artificial intelligence that's just got a really, really good prediction algorithm. And it's stated that this predictor is 100 percent accurate: whatever you do, it knew that you were going to do that.

[00:16:04]

And so some people, I think including me when I first heard this version of the problem, think that the reason this seems like a paradox is just because it's assuming this impossible thing, that there's no way to actually perfectly predict, you know, with 100 percent certainty ahead of time, what someone's going to do. So therein lies the... you know, that's where the trick is hiding, as if this were just a shell game or whatever.

[00:16:31]

But I think this is actually not that essential. So that's what's interesting. Yeah. If the predictor is only 60 percent accurate, it's still very likely that the people who take the one box end up better off than the people who take both. Right. And I heard actually that the philosopher Dave Chalmers did this once at a party. He had invited a bunch of philosophers, and he knew them all, and he offered them all a version of the Newcomb problem.

[00:16:56]

Of course, not with the million dollars, but with smaller prizes. And he apparently got 60 or 70 percent of the predictions accurate.
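[Transcript note: the claim about an imperfect predictor can be checked directly. A sketch, with the accuracy written as an integer percentage; 60% is the figure mentioned in the conversation.]

```python
MILLION, BONUS = 1_000_000, 1_000

def avg_payout(one_box: bool, accuracy_pct: int) -> int:
    """Average winnings, given the predictor's accuracy in percent."""
    p_million_pct = accuracy_pct if one_box else 100 - accuracy_pct
    payout = p_million_pct * MILLION // 100
    return payout if one_box else payout + BONUS

print(avg_payout(True, 60), avg_payout(False, 60))  # 600000 401000

# The break-even accuracy is barely better than a coin flip:
# a * M = (1 - a) * M + B  =>  a = (M + B) / (2 * M) = 0.5005
```

So even a predictor who is right only 60% of the time leaves the average one-boxer with $600,000 against the two-boxer's $401,000, and the accuracy only has to beat 50.05% for one-boxing to come out ahead on average.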

[00:17:04]

I actually had a philosopher do this to me on a date once. Yes. And I two-boxed. I mean, I took both the money and the box, and the box... I just gave it away: it was empty, because he predicted ahead of time, knowing what he knew about me, that I would take both. I mean, he was right. Yes.

[00:17:23]

Yeah. So that's right. But it doesn't rely on the predictor being one hundred percent accurate. Though I think if the prediction is one hundred percent accurate, that will strongly push people towards the one box. And I think it raises this deeper worry, one that gripped me for a long time, that there's something fishy about this as a decision problem, that it should be something that we somehow exclude from the range of what we're considering.

[00:17:53]

Right. Right. So I've been thinking about this stuff lately, and I'm working on a paper discussing these issues. And the thing that I've been noticing is that we have these different problems that are structurally similar to each other, and I think philosophers have assumed that whatever your theory of rationality says about the Newcomb problem, it should say the same thing, or the parallel thing, in the smoking case or in the prisoner's dilemma. And what I'm interested in is whether there are actually important subtle differences between these problems, based on something like this causal structure.

[00:18:28]

And so what I've been doing is thinking about these things in terms of trying to understand the causal structure of what's going on here.

[00:18:39]

And I think there's a related puzzle that some philosophers have thought about in the theory of action and intention, which I think actually has enough similarity that it can shed a bit of light here. And this is the toxin puzzle, from Gregory Kavka. So the way this one works is that a billionaire comes up to you with a very strange offer. He places before you a vial of toxin that, if you drink it, will make you painfully ill for a day, but will not threaten your life or have any lasting effects.

[00:19:14]

And what he says is, I'll pay you a million dollars tomorrow morning if, at midnight tonight, you intend to drink the toxin tomorrow afternoon. He emphasizes, you don't actually need to drink the toxin. In fact, the money will already be in your bank hours before the time for drinking it arrives, if you succeed in having this intention.

[00:19:36]

Maybe he can tell? He has advanced enough neuroscience and brain scans that he can actually tell whether you're intending to drink it.

[00:19:43]

Exactly. That's right. And in fact, he's going to leave before you ever have a chance to drink it. Now, the reason Kavka is raising this problem is he suggests, well, think about your situation tonight. I start thinking, let me plan to drink that toxin tomorrow. But I know that once I wake up in the morning, I can check my bank account and see if the money is there or not. And at that point, I have no more reason to drink the toxin.

[00:20:13]

And so I know that, in some sense, no matter how sincerely I try to intend tonight, tomorrow I'm just not going to do it, or at least I'll have no reason to do it.

[00:20:24]

Yes, right. That's right. Though I think some people might contest this; there's some controversy around that. But it seems to me, and it seemed to Kavka, that you'll have no reason to do it tomorrow, and you can predict that now. And sincerely forming that intention is just not compatible with a full belief that you're just not going to do it.

[00:20:44]

Is this related to the hitchhiker problem? Yes. Well, I'll briefly state it for our listeners who haven't heard it. The idea is that you're stranded out in the desert somewhere, you're out of water, so you're going to die soon if someone doesn't rescue you. And then, luckily, a car happens to be passing by, and the driver is... well, you can stipulate he's really, really good at sort of reading people's minds. I mean, not literally.

[00:21:15]

Yeah, reading their intentions just from listening to their body language and expressions. And he says, well, I'll drive you to town if you'll give me a hundred dollars once we get to town, because obviously you don't have any money on you now; you're, like, out in the desert without a wallet. And you wish that you could commit to giving him a hundred dollars when you get to town.

[00:21:36]

But you realize that once he's brought you to town, he has nothing on you. You don't have to give him a hundred dollars, so you're not going to have any reason to. You know, this is assuming both of you are perfectly selfish agents, which is a stipulation of these problems that people often have trouble with. But, you know, assuming that, you realize that you really, really wish you could credibly commit to paying that hundred dollars once he's brought you to town.

[00:21:57]

But you can't do that. So even if you say yes now, the driver can tell that you won't actually have a reason to do it, and that therefore you don't truly intend to pay him off. And so you're left stranded in the desert. Which just feels... it's sort of too bad that causal decision theory seems to leave people in the hitchhiker's position, with no way to actually save their lives.

[00:22:26]

That's right. And so I think all of these puzzles have the same sort of payoff structure. There's one action that, regardless of what the state of the world is, guarantees you a slightly better outcome. So in these cases, it's not drinking the toxin, or not paying once you get to town. Right.

[00:22:47]

But the state of the world, which is whether you get the million dollars from the billionaire or whether you get the ride back to town, is strongly correlated with which action you're going to do. And that's a much bigger effect than the other one. So it's not the direct causation that your action has on your outcome, but it's somehow an indirect correlation with a much better outcome. And so the question is, what can rationality say about this? If there's an action that, regardless of how the world actually is, is guaranteed to make you slightly better off, but it would be much better for you if you weren't going to make that decision,

[00:23:26]

well, what is rationality going to say? And so what I say is that the causal decision theorists have something right. They say we should be looking at the causal structure; we shouldn't just be treating your action as a piece of evidence that you've got. So obviously, in all these situations, if someone told you you're going to take the one box, or you're going to give him the money when you get to town, those would be pieces of good news for you.

[00:23:52]

But I think even the causal decision theorist says, if you discover that you were going to make one of these choices that causal decision theory says is the bad one, causal decision theory says that's still good news. And so they say the evidential decision theorist is just looking at your actions as a special type of news you get about the world, and somehow ignoring the fact that what you're doing when you're acting is something different from just learning about the world: you're actually making the world a certain way.

[00:24:19]

And so the causal decision theorist says, well, rationality for action consists in making the world do the good thing for you, even if it might be bad news to learn that you are doing the thing that will be good for you. But I say that these different cases are different enough that many people actually get different intuitions on them. I think many philosophers have tried to develop versions of evidential decision theory or causal decision theory that can explain why we should cooperate in the prisoner's dilemma, but smoke in the smoking lesion case, and so on.

[00:24:57]

They tried to develop complicated theories, but I say we should just look more closely at what the causal structure is. So I say, in the smoking lesion case, what's going on is there's this fundamental thing, which is your genetics, and the genetics has an effect on the outcome: it has an effect on whether or not you get cancer. But it also has an effect on what sort of person you are, and what sort of person you are determines what your psychological state is at the beginning of this game.

[00:25:27]

And then that determines, in some sense, what decision you make, and that decision then, in some sense, determines your act of smoking, which also gives rise to some features of the outcome. Whereas in the case of, say, the Newcomb problem, your genetics doesn't even matter, and even your psychological character in general isn't the big thing. What's important is your psychological state at the moment that you walked into the tent, because that's when you get scanned.

[00:25:59]

And that's the thing that has an effect. Right. In the toxin puzzle, it's different still. It's not your psychological state at the beginning of the puzzle; it's the actual decision you make tonight at midnight. But in all these cases, it's not the act that's causing this other thing. It's something that is part of the determination of your act.

[00:26:21]

And so we've been analyzing the problem from the wrong decision point, or from the wrong choice point or something, maybe. Or at least there are many different choice points. And I think what's going on is that the act itself, the decision to do the act, the psychological state that you're in at a given time, and your character as a person, these are all things that are, to varying degrees, in our control. So also, to varying degrees, they're not in our control.

[00:26:47]

And I think when we analyze the problem at a different level, we get a different answer. So what I think is correct is, if you can right now try to shape yourself into the sort of person who, whenever you're faced with a Newcomb problem, will just take the box, that's a good move. And even the causal decision theorists will say that, because you're taking actions right now to make yourself into a one-boxer in future Newcomb problems.

[00:27:17]

That's going to actually cause future predictors to predict your one-boxing, and therefore give you the million dollars.

[00:27:23]

Right. It sort of feels to me like some moral intuitions or moral conventions in human society have developed as a hack to make hitchhiker's dilemmas and prisoner's dilemmas and Newcomb's problems work out well. We have this notion of, um, following through on what you promise you'll do, or of doing unto others as you would want others to do unto you, even if your actions don't cause other people to treat you better.

[00:28:00]

And so if living up to those moral standards is more important to you than, you know, getting the million dollars, or even than living, if you're stranded in the desert, then that makes you the kind of person who would just take the one box, or who would, you know, give the hundred dollars to the person who saved you, even if you have no selfish reason to do so at that point.

[00:28:23]

That's right. Yeah. Although, as many people have observed, many of these social features that tend to help reinforce this sort of behavior depend on the fact that we have continued interactions with each other. And so this is why game theorists talk about the iterated prisoner's dilemma: if I know that I'm going to be playing the prisoner's dilemma with the same person multiple times, over and over, then that gives me a strong incentive to cooperate, so that my action now of cooperating can cause better results later on.
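[Transcript note: that incentive is easy to see in a toy simulation. A sketch, assuming two standard strategies, always-defect and tit-for-tat (cooperate first, then copy the opponent's previous move), with the usual hypothetical payoff numbers.]

```python
# Per-round payoffs to (me, opponent): C = cooperate, D = defect.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=10):
    """Total scores; a strategy maps the opponent's previous move
    (None on the first round) to 'C' or 'D'."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        gain_a, gain_b = PAYOFFS[(a, b)]
        score_a, score_b = score_a + gain_a, score_b + gain_b
        last_a, last_b = a, b
    return score_a, score_b

def tit_for_tat(prev):
    return "C" if prev is None else prev

def always_defect(prev):
    return "D"

print(play(always_defect, always_defect))  # (10, 10)
print(play(tit_for_tat, tit_for_tat))      # (30, 30): repeated play rewards cooperation
```

Here mutual defection is the dominant play in any single round, but over repeated rounds two cooperators end up far ahead of two defectors, which is the point about iteration made above.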

[00:28:57]

And Kavka, in his paper about the toxin puzzle, sort of anticipates this. He notes that one way you can make yourself drink the toxin is if you make a side bet with your friend, where you arrange: if I don't drink the toxin, you should steal my money and give it away. And if you can arrange for that, then that's going to help guarantee you drink the toxin. But Kavka says the billionaire has anticipated this and has written into the contract:

[00:29:27]

you're not allowed to make any side bets. That's right. You have to form this intention without any side bets, without any of these sorts of social sanctions that could help you stick with that intention.

[00:29:41]

And so the iteration of these games makes them less interesting in one sense, because it sort of transforms them all into causal decision problems: your actions in this game can cause the state of the world in the future games, and that's what dominates the overall outcomes. Right. So, yeah, that's a little less interesting in a sense. Right.

[00:30:03]

At least it's less interesting for this particular issue about what rationality is. Right. And so I think one assumption that's gone into a lot of this is that people assume that there should be one right answer: that either rationality should say that the right action is one-boxing or two-boxing or whatever, and then rationality will tell us, in a more general sense, that a person with rational character is the sort of person who does the rational act, or vice versa.

[00:30:33]

Maybe the right level at which to analyze rationality is the level of character formation.

[00:30:38]

And then the rational act is just whichever one the rational character would carry out. And I see some analogies here, too. There's been, over the centuries, a lot of discussion in moral philosophy about whether the right way to analyze morality is at the level of the consequences of your action, or at the level of the action, or at the level of virtue or character.

[00:31:03]

Right. Or, and you might have already intended to refer to this, but another level that I'm personally sympathetic to is the level of the rule. So following a certain behavioral rule might be good sort of in general, or in the long run. And according to this take on utilitarianism, or moral philosophy, you should follow the rule even in cases where, in this specific case, it gives a worse outcome. That's right. Right.

[00:31:30]

Because it's a rule that, if followed generally, would result in better outcomes. Right, exactly. Yes.

[00:31:35]

There's some setup in which, if you try to sort of game the rule and abandon it in the cases when it gives a worse outcome, you are thereby dooming yourself to worse outcomes overall.

[00:31:46]

Yes, that's right. And I think thinking about it in terms of the contrast between rule utilitarianism versus act utilitarianism is actually the most useful way to think of the parallel here, because we're still analyzing all these things based on the outcomes that you get, as opposed to many of these moral theorists who think that actions or virtues are the right level. Some of them even say that virtues are prior to the outcome, that what makes an outcome good is that it's the sort of outcome that a virtuous person would bring about, or something.

[00:32:18]

And so they actually even deny the analysis through outcomes. Now, I think here in decision theory, most people are committed to some sort of outcome-based analysis, some sort of consequentialism, as they call it in moral theory. But the question is still, at what level does this consequentialism apply? Is it at the level of the action, at the level of the decision, at the level of the psychological states at a time, or at the level of your virtuous character?

[00:32:49]

Which one of these is the one whose consequences we are most interested in analyzing? Right.

[00:32:54]

I like that, because it addresses a common objection that I hear to one-boxing in the Newcomb problem, which is: well, look, this just happens to be a weirdly constructed case in which the mad scientist or the artificial intelligence, the predictor, has chosen to reward people for being irrational. And you could construct similarly artificial cases where you get a million dollars for truly believing that the sky is green. But, you know, that's just too bad for rationality.

[00:33:24]

Yes, in this particular case. But that doesn't mean that rationality is wrong or flawed in some way. It just means that, in a few constructed, contrived cases, by stipulation you're going to be worse off by being rational. I feel the pull of that argument, but at the same time, there does seem to be something wrong with a rationality that leaves you worse off in a systematic and predictable way.

[00:33:49]

That's right. And so I think there are many people these days, I see many people associated with artificial intelligence and related issues, perhaps many listeners of this podcast, who have been interested in versions of decision theory that advocate one-boxing and advocate paying the money in the hitchhiker case and so on, on these grounds, saying that a theory of rationality ought to be the one that gives you the best outcomes overall, and not on evidential grounds, not by saying we should ignore the causal structure of the action.

[00:34:26]

But I think what they're doing is they're still thinking that rationality should act at one of these levels, and then once you determine what a rational character is, we can understand what a rational act is, on the basis that a rational act is just one that a rational character would do. Mm hmm. And now what I'm thinking is that perhaps there's actually just a deep tragedy in the notion of rationality. Perhaps there just is a notion of rational action and a notion of rational character.

[00:34:57]

And they disagree with each other: the rational character is being the sort of person that would one-box, but the rational action is two-boxing. And it's just a shame that the rational, virtuous character doesn't give rise to the rational action. And I think this is a thought that we might be led to by thinking about rationality in terms of the effects of these various types of intervention that we can have.

[00:35:30]

Hmm, yeah, but still, isn't there some sense in which the rational character trumps the rational act? Like, I don't know.

[00:35:42]

I think what any of these one-boxers will tell you is that the right thing to be is a one-boxer, but you really wish that by accident you two-boxed, even though you are deep down a one-boxer, because then you still get the thousand-dollar bonus. And so you'd really like the action you take to be the one that goes against your character, even though you're not that sort of person. Hmm.

[00:36:10]

It reminds me of a sort of modern fairy tale, and I can't remember where I read it. But the hero in the fairy tale is presented with this opportunity to sacrifice his own life in order to save, I don't know, a person, the world, I'm not sure. And he takes the step to sacrifice his life, and then is disturbed when it turns out that he does, in fact, have to lose his life, because he's used to the stories in which, you know, when you do the heroic, noble thing, you're rewarded for it.

[00:36:43]

Oh, yeah, actually, you're saved after all. You know, you just had to prove that you were, in fact, willing to sacrifice your life. But that's not actually how it turned out. That's right.

[00:36:52]

That's right. And I think perhaps rationality just has that sort of tragedy to it: there's one notion of rational character, there's one notion of rational decision, there's one notion of rational action, and they don't all necessarily line up. Because if you think that they should line up, then the question is, well, which one is the most fundamental? Is the right notion that of rational action, or that of rational character? And I think the people who go for what they call updateless decision theory or timeless decision theory say we should go as far up the causal chain as we can.

[00:37:23]

It's the character that determines everything else. And once we understand what rational character is, then we understand what everything else is.

[00:37:30]

So you alluded to updateless decision theory and timeless decision theory. I don't know how brief you can be, but can you attempt to summarize those alternatives to causal and evidential decision theory?

[00:37:44]

Unfortunately, I don't think I understand them in enough detail to do that.

[00:37:49]

But I think the approximation that I mentioned so far is useful: you should always act in the way that you wish you would have initially committed yourself to act. That's approximately what's going on, that in the hitchhiker case, when the driver brings you back to town, you should give the hundred dollars to him, because that's what you yourself, back in the desert, would have wished, even though now you don't. And similarly, you should take the one box in the Newcomb case, because that's what you would have wished yourself to be committed to before you entered the tent.

[00:38:26]

But in the smoking lesion case, they say it's not like deciding not to smoke is going to have changed your DNA or something like that. Somehow the causal chain begins outside anything that you have, well, I'm not sure "control" is the way that they would want to put it, but that is the way that I think of it.

[00:38:49]

But somehow I don't determine my DNA; I only enter the picture once we get to the point of forming my character, or something like that.

[00:38:59]

So the thought experiment that makes this kind of algorithm seem absurd to me is something called the counterfactual mugging, where someone comes up to you and says: well, I decided that I would flip a coin, and if it came up heads, I would give you a million dollars, and if it came up tails, you would give me a hundred dollars. And I'm just letting you know, the coin came up tails.

[00:39:28]

It's also important, I think, to set up that I decided to do this only because I know you're the sort of person that would give me the hundred dollars. Oh, right. Yeah.

[00:39:39]

You're the sort of person who would follow the terms of the bet that I came up with. And so, you know, you also have to assume this person is honest and not just making up this dumb story to get you to give them a hundred dollars. That's right. And so, according to this kind of updateless decision theory approach, you should just pay the hundred dollars, because that is the algorithm you would have wished to follow, since otherwise the person would never have been willing to flip the coin and risk the outcome of giving you a million dollars.
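[Transcript note: the arithmetic behind this "wish you had committed" reasoning can be sketched in a few lines of Python, using the figures from the episode (a fair coin, a $1,000,000 heads prize, a $100 tails payment). The assumption that a refuser gets nothing, because the mugger only offers the deal to someone who would pay, is the updateless-style reading described above.]

```python
def ex_ante_value(pays_on_tails: bool) -> float:
    """Expected winnings if you commit to a policy before the coin is flipped.
    The mugger only makes the offer to someone who would pay on tails."""
    if not pays_on_tails:
        return 0.0  # no deal is ever offered to a refuser
    return 0.5 * 1_000_000 + 0.5 * (-100)

def ex_post_value(pays_now: bool) -> float:
    """Winnings once you already know the coin came up tails."""
    return -100.0 if pays_now else 0.0

print(ex_ante_value(True))    # 499950.0: committing looks great in advance
print(ex_ante_value(False))   # 0.0
print(ex_post_value(True))    # -100.0: but paying looks crazy after tails
print(ex_post_value(False))   # 0.0
```

The two evaluations disagree, which is exactly the clash between rational character and rational act being discussed.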

[00:40:13]

Right. But that just seems so crazy.

[00:40:17]

That's right.

[00:40:19]

Yeah, so I don't get on board with their view. I just think the right response here is to say there's some sort of tragedy in rationality. There are multiple levels of rational analysis, and there is one level of rational analysis at which being the sort of person who would take part in this is perhaps a virtue, perhaps the sort of character virtue to have. But when you're actually faced with this, the right thing to do is not to pay the money.

[00:40:49]

But I don't know if there's any way to square these two conflicting notions of rationality. Right.

[00:40:58]

But I think it's important for all of this that there is some notion of causation involved here, and some notion of self-determination. That's where all these cases get their force. If we're not talking about me deciding to take the one box or two boxes, but just about the fact that if you have one sort of muscle spasm you get a million dollars, and if you have a different sort of muscle spasm you get a thousand dollars.

[00:41:25]

There's no question of rationality at that point.

[00:41:28]

It's only billiard balls bumping into each other. That's right. Yeah.

[00:41:31]

And I think this is what's always made me uneasy about the puzzle to begin with: in all these cases, we're accepting that there is some way in which the causal structure of the universe allows for our decisions to be predictable. And once we allow for our decisions to be predictable, I think this raises worries about whether our decisions are really decisions in the relevant sense, and so on.

[00:42:03]

Yeah.

[00:42:03]

So it seems to me that when you're in the Newcomb problem, it sort of destroys this illusion that we have that we're acting freely in the world.

[00:42:12]

And once we destroy this illusion of acting freely, the notion of rationality stops making as much sense as we might have thought initially. Right. So in the Newcomb problem, it feels like you have unusually little free will, because the causal structure of the situation is such that your choice has already been determined in some sense by the mad scientist. Or, in this case, the mad scientist has determined it

[00:42:51]

Epistemically. But there's some feature of the world that, right, exactly, physically determines it, and so determines both your choice and the prediction of your choice. That's right. And yet, even though it's starker in this problem than in general, it's not fundamentally any different from how the universe works in a non-Newcomb case, right? That's right. That's right.

[00:43:15]

And so I think one way to think about this: there's this way of understanding causation that's become popular over the past few decades, due partly to the work of Judea Pearl in the computer science department at UCLA, and partly to the work of Scheines, Glymour, and Spirtes, who I believe are all in the philosophy department at Carnegie Mellon, though it may be statistics as well, that tries to understand causation through what they call causal graphs.

[00:43:47]

And so they say you should consider all the possible things that might have effects on each other, and then we can draw an arrow from anything to the things that it directly affects. And then they say, well, we can fill in these arrows by doing controlled experiments on the world. We can fill in the probabilities behind all these arrows, and we can understand how one of these variables, as we might call them, has contributed causally to another by changing the probabilities of these outcomes.

[00:44:18]

Mm hmm. But the only way they say that we can understand these probabilities is when we can do controlled experiments, when we can sort of break the causal structure and intervene on something. And so this is what scientists are trying to do when they do controlled experiments. They say if you want to know whether smoking causes cancer, well, the first thing you can do is look at smokers and look at whether they have cancer and look at nonsmokers and look at whether they have cancer.

[00:44:45]

But then you're still susceptible to the issues that Fisher was worrying about. What you should actually do, if you want to figure out whether smoking causes cancer, is not observe smokers and observe nonsmokers, but take a bunch of people, break whatever causes would have made them smoke or not smoke, and either force some people to smoke or force some people not to smoke. Obviously, this experiment would never get ethical approval.

[00:45:10]

But if you can do that, if you can break the causal arrows coming in and just intervene on this variable, and force some people to be smokers and force others not to be smokers, and then look at the probabilities, then we can understand what the downstream effects of smoking are. Hmm. And so in some sense, these causal graphs only make sense to the extent that we can break certain arrows, intervene on certain variables, and observe downstream effects. And then I think in all of these Newcomb-type problems, it looks like there are several different levels at which one might imagine intervening.
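[Transcript note: the difference between observing and intervening that's being described can be simulated in a few lines of Python. The numbers and the world model here are made up purely for illustration: a hidden gene raises cancer risk and also causes smoking, and smoking itself does nothing, which is the confounding story Fisher worried about.]

```python
import random

random.seed(0)

def draw(intervene_smoke=None):
    """One simulated person. A hidden gene drives both smoking and cancer;
    smoking itself has no causal effect on cancer in this toy world."""
    gene = random.random() < 0.5
    if intervene_smoke is None:
        smokes = gene  # observationally, the gene determines smoking
    else:
        smokes = intervene_smoke  # an experiment breaks that incoming arrow
    cancer = random.random() < (0.30 if gene else 0.05)
    return smokes, cancer

def p_cancer(intervene=None, n=100_000):
    people = [draw(intervene) for _ in range(n)]
    if intervene is None:
        smokers = [c for s, c in people if s]  # condition on observed smoking
        return sum(smokers) / len(smokers)
    return sum(c for _, c in people) / n

print(p_cancer())                 # roughly 0.30: smokers look much worse off
print(p_cancer(intervene=True))   # roughly 0.175: forcing everyone to smoke...
print(p_cancer(intervene=False))  # roughly 0.175: ...matches forcing no one to
```

Conditioning shows a strong correlation, while the two forced interventions come out the same, which is how the causal-graph framework reads off "no downstream effect."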

[00:45:44]

You can intervene on your act. You can say: imagine a person who is just like you, who has the same character as you, going into the Newcomb puzzle. And now imagine that we're able to, from the outside, break the effect of that psychology and just force this person to take the one box or take the two boxes. In this case, forcing them to take the two boxes, regardless of what sort of person they were, will make them better off.

[00:46:08]

And so that's a sense in which two-boxing is the rational action. Whereas if we're intervening at the level of choosing what the character of this person is before they even go into the tent, then at that level, the thing that leaves them better off is breaking any effects of their history and making them into the sort of person who's a one-boxer. Right. And so if we can imagine having this sort of radical intervention, then we can see that at different levels, different things are rational, right?
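[Transcript note: the two levels of intervention just described can be sketched in Python, using the standard payoffs mentioned in the episode ($1,000,000 in the opaque box, $1,000 in the transparent one) and the simplifying assumption of a perfect predictor who reads character.]

```python
def payoff(character_one_boxer: bool, act_one_box: bool) -> int:
    """Newcomb payoff with a predictor that reads character perfectly:
    the opaque box holds $1,000,000 iff you are, by character, a one-boxer;
    the transparent box always holds $1,000."""
    opaque = 1_000_000 if character_one_boxer else 0
    return opaque if act_one_box else opaque + 1_000

# Intervene on the ACT, holding character (and hence the prediction) fixed:
# two-boxing gains $1,000 whichever sort of person you are.
for char in (True, False):
    assert payoff(char, act_one_box=False) == payoff(char, act_one_box=True) + 1_000

# Intervene on the CHARACTER, letting the act follow the character:
print(payoff(True, True))    # 1000000: the one-boxer by character
print(payoff(False, False))  # 1000: the two-boxer by character
```

The act-level intervention favors two-boxing and the character-level intervention favors being a one-boxer, which is the disambiguation being drawn here.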

[00:46:39]

Yeah. That disambiguation really helps resolve some of the uncomfortable feeling of reverse causality that you get from the original formulation of the problem. Yeah. We're almost out of time, but I want to make sure that we talk, at least briefly, about the implications of this field of decision theory. Because in one sense, you could say, well, you know, in the real world, as you were saying, any kind of hitchhiker or prisoner's dilemma type problems that arise are in practice repeated games.

[00:47:13]

So, you know, they turn into causal problems. Or, you know, even if they're not repeated games in practice, society has developed these patches for the problems, in the form of moral standards that we feel compelled to follow. And so maybe in practice it doesn't really matter what the rational choice turns out to be, because we've sort of solved the problem in practice.

[00:47:37]

And there's also the second sort of case we get, where we have an observational study and there's correlation but no causation. And here most people just say: do the causally good thing, ignore the correlation, once we understand that it really is just a correlation that isn't affected by your action.

[00:47:55]

Right, right. So the question is, what do you think is the motivation for figuring out how to think about these kinds of problems? Right.

[00:48:04]

So for a long time, what I thought was that we should ignore these problems, that they're pseudo-problems, because they only arise in these cases where we have to be confronted with the lack of control that we have in the world, that the world is really deterministic and decisions are just part of the world, and there's no "should" about it, it's just what will happen. Nowadays, I'm thinking that perhaps the better thing to think is that it's showing us how rationality is tied into questions of causation

[00:48:43]

Well, and determinism. And I've always been happy with the idea that free will, in the sense that we really care about, is compatible with the world being a deterministic place. The question is just: are my actions being determined by the states that I identify with, that is, my character, who I am as a person, as opposed to my actions being caused by a kidnapper with a gun to my head, or caused by the electrical stimulation of my muscles by an outside psychologist?

[00:49:16]

And so there are multiple different kinds of causation, but the causation that matters to me as an agent is the kind that goes through me as an agent. And there's still sense to be made of that. So there are questions about what that means for the notion of practical rationality here. And I think that's really what these puzzles are pushing us toward: the notion of rationality and the ways in which free will is compatible with determinism are going to have some complex interaction here.

[00:49:50]

And maybe rationality falls into multiple different types of rationality: one for actions, another for psychological states, and another for character formation.

[00:50:00]

Right. Excellent. Well, we are actually over time now, because I lost track of time, because I find this topic so interesting. But I will have to wrap up this part of the podcast, and we'll move on to the Rationally Speaking Pick. Welcome back. Every episode, we invite our guest to introduce the Rationally Speaking Pick of the episode: a book or website or something else that tickles his or her rational fancy. Kenny, what's your pick for today's episode?

[00:50:43]

Yeah, so my pick is the book Thinking Fast and Slow by Daniel Kahneman.

[00:50:48]

Yeah. Surprisingly, I don't think anyone has picked that yet. Really? I'm surprised. No, I could be wrong, but I don't think so.

[00:50:55]

Yeah. What this book is all about is the different ways that our mind works. And the title is a reference to the fact that there is one set of things that we do when we just work out of habit, and another set of things that we do when we consciously reason about the world. And each of these has its own sort of understanding of the world and its own rationality. But I think his book also gets at some fundamental problems in understanding what it is that we want, when he talks about the contrast between the experiencing self and the remembering self.

[00:51:31]

He says you can go on a vacation and go on this hike through the forest to get to this mountain peak. And for hours and hours, you're fighting mosquitoes and your muscles are sore and you're suffering. And then when you get home, you think that was the most amazing thing, because I struggled through this and I got to this amazing view. And on the one hand, if we say, well, our goal is to maximize the great experiences that we have, you spent hours and hours and hours on that hike suffering and over the rest of your life, you're probably only going to spend an hour or two total reminiscing about this.

[00:52:08]

And yet somehow we think the remembering self is the one that gets to decide what our next vacation is going to be. And we, as psychologists, philosophers, decision theorists, are left in this difficult position: what is the thing that we really care about? Do we care about people leading good lives moment by moment, or do we care about what people say they care about? And there are many other, I think, really interesting dilemmas that arise from this book.

[00:52:42]

There absolutely are. And I think all of the parents or potential parents in the audience might see some interesting implications of the hiking example. I'll just leave it at that. Yes. All right, we're all out of time. Kenneth, thank you so much for joining us. All right, thank you for having me on. Very interesting discussion. And I encourage our listeners to check out Kenny's research, which we'll link to on the site, as well as his pick, Thinking Fast and Slow by Daniel Kahneman, and to download transcripts of the episode if you so choose.

[00:53:19]

All right. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.

[00:53:34]

The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, "Truth," by Todd Rundgren, is used by permission. Thank you for listening.