
If you enjoy listening to the Rationally Speaking podcast, consider donating a few dollars to help support us, using the donate button on our website, rationallyspeakingpodcast.org. We're all volunteers here, but we do have a few monthly expenses, such as getting the podcast transcribed, and anything you can give to help would be greatly appreciated. Thank you.


Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Welcome to the Rationally Speaking podcast, where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and I'm here with a couple hundred other people at the Northeast Conference on Science and Skepticism. Say hi, guys. I have a fantastic guest for you. He is actually a return guest, from four years ago at the 2012 NECSS.


His name is Jacob Appel, and he is a bioethicist, but he is so much more than that. He is also a psychiatrist who teaches at Mount Sinai Medical Center and practices at both Mount Sinai and Beth Israel. He's a lawyer with a JD from Harvard Law School, and he's also an award-winning novelist, essayist, and playwright. So he is absurdly overeducated. Jacob, welcome back to us.

Thank you. My fear was you were going to say which side of the divide between reason and nonsense I fight for.


So Jacob is in the middle of, or currently working on, a manuscript for a new book provisionally titled Harder Choices. And it explores a hundred and one challenging bioethical dilemmas, taken in part from your own professional experience, carefully anonymized, and in part from real stories from the headlines. Not that your own stories aren't real, but publicly real stories from recent headlines. And so I've been reading this manuscript. It was what gave me the idea to invite Jacob back to NECSS.


And they are really harrowing ethical dilemmas. Jeez. I was playing this game with myself while I was reading, and the game was: OK, how much money would I be willing to pay to not have to decide that question? But I was broke by like halfway through the book, so that just gives you a little sense of it. So that's what we're going to talk about today: harrowing bioethical dilemmas. And, zooming out a bit more, I hope to get into the principles, the philosophical heuristics, that a bioethicist uses to try to settle questions like those.


And at the end, we're going to set aside a generous amount of time for Q&A. So in the back of your mind, as we're talking, you can be thinking about what harrowing bioethical questions you want to pose to Jacob while he's trapped here on the stage at the end of the show. So let's dive in. I think the first thing we should talk about is what a bioethicist even is. Like, what is your job description? What's the role that you play?


That's a great question. I should add, by the way, the title of the book is not coincidental. The theory behind Harder Choices is that many people out there will think it's actually Hard Choices, Hillary Clinton's bestselling book, and buy it by mistake.


But it sort of captures what bioethicists do.


And they do two things. One, in the hospital setting, and I've done this in the past, though I actually don't do this at the moment: bioethicists help adjudicate, or guide patients, clinicians, and family members in coming to ethical choices about individual patients. And I should add, much of that is not telling people what to do, or even telling people what their options are. It's helping them have permission to do what they want to do anyway. It's rather like being a consultant, a management consultant, in a way. I will add, it's one of the few fields that gets easier and easier.


As you get older and older, you get more gravitas. If you're about 80 and you have a long gray beard, you can say, "I've been in this hospital for fifty-two years. I've never left, even when my children were in labor, and no patient like your spouse has ever left the hospital alive." It's a powerful thing to say. When I say, "I've been here for a day and a half and I don't think things look good," it doesn't carry as much weight.


The other thing bioethicists do is offer public guidance on challenging issues at the nexus of law, medicine, and ethics, to clinicians out in the field, so they can follow some general rubric on what to do. And they try to increase the general public's education about these issues.


So are you never at any point saying to doctors or to the hospital administrators, "I think this would be unethical, and this is the more ethical choice"?

So I'm not on the ethics committee of my hospital, but what the ethics committee does is rarely, but occasionally, directive. Occasionally they will say something like that. The ethics committee will consist of 20 different experts in ethics from different fields in the hospital, the social work ethicist and the nursing ethicist, and they work by consensus.


And if all 20 of them say, we strongly feel you should do this or you shouldn't do that, whether or not it's legally binding, it's dispositive because no doctor wants to be in court and say, you know, I asked the top 20 ethicists at my hospital from all the major fields what to do, and they all agreed this was the right thing to do. So I did the opposite.


And what's the training for being an ethicist? Like, is it basically a subfield of philosophy, and you're reading different philosophical theories of ethics?

You remember that Peanuts cartoon? Something like, "Advice, five cents." Basically, you show up.


Unlike many other fields, where you need a license, where you need professional training, there is no royal road to bioethics. So historically, people were trained either in philosophy or in religious studies. Increasingly, people are trained in clinical fields, be it medicine or nursing or social work. But much of the field is built on people who just developed a clinical expertise over time. It comes from seeing a lot of cases, which is why, unfortunately, out in the public there are people who hang up their shingles on the Internet as bioethicists who have no experience with clinical patients and no formal training in the field.


But sort of like being a journalist, anybody can say they are one, as I did for years.


So what I'm curious about is: is there any kind of expertise, any explicit principles, that a bioethicist can point to that make his or her advice more than just "this is my own intuition" or "my personal opinion"? But I think before we get into that, I want to get concrete for a while before we dive back into the abstract. So let's talk about some examples. I think the one I want to start with is the question of how to decide what a reasonable request is from a patient.


So to give you an example, you can imagine a female patient saying, look, I'd just be a lot more comfortable if my doctor, especially my gynecologist, was female. I think most people, and I imagine most doctors or hospitals, I don't know for sure, would consider that request reasonable and, if feasible, would try to honor it. Conversely, you could imagine a patient who said, you know, I just think women are inferior, and I would be more comfortable if my doctor was male. Or a Ku Klux Klan member who says,


I'm not comfortable having a black person operate on me, I need a white surgeon. And again, I don't technically know what hospitals would do in those situations, but I can imagine that feeling less reasonable, that feeling like a request that doesn't deserve our indulgence, essentially. But these are just sort of my rough sense of what our society considers reasonable, or what feels reasonable to me. And I'm curious if there are any criteria that you can actually point to as a bioethicist to settle cases like that.


Sure. There's a wide continuum of these cases that demonstrates why it's very hard, on the one hand, to have a concrete policy or even a set of principles, and on the other hand, to actually adjudicate these cases. Because you could say, as a white Klansman, "I would like a white Christian doctor to treat me," and that strikes most of us as fairly unreasonable. Or you could be an African-American criminal defendant who asks the court to appoint an African-American psychiatrist to do your forensic evaluation, because "a white doctor just can't understand what I've been through."


And a lot of us would think that's more reasonable. Yet designing a rule that applies in one circumstance and not the other is extraordinarily difficult.

Yeah.

The other example I used: you might be a male out there who wants a male urologist because he doesn't want women touching his private parts. Some of you might think that reasonable. Or you could want a male urologist because "we all know that men cut straighter," which is not true, by the way.

Well, the hospital is not just going to cut that quote out and put it on the website.


I figured as much.


What the hospital does not want to do is be in the position of figuring out which one you are. And yet, to mete out justice in a meaningful way, that's exactly what we'd have to do. And therefore, if you have a policy in advance, you can assure some rough sense of justice. But inevitably, like the innkeeper who had one size of bed, and if you were too short he stretched you, and if you were too tall he cut off your feet, the hospital doesn't want to find itself doing that either, right?


Well, one category of cases in which I feel like there is kind of an explicit rule, maybe not officially, but in practice, is that if a request is made for religious reasons, at least if it's a well-established, sort of classical religion and not some smaller fringe or new religion, then it's much more likely to be honored. So let's take that juxtaposition you gave, two people wanting a male urologist but for very different reasons. If you had a patient saying, you know, I'm an Orthodox Jew and I don't want a woman to be my doctor, my guess is, correct me if I'm wrong, that would be more likely to be honored than if a man said, I'm a bigot and I don't want a female doctor for that reason.


Is there any, like, codified policy to that effect, or are people just more sympathetic to religious justifications?

There are laws and a series of court cases that protect the rights of religious minorities. But even there, you've demonstrated the challenge of this. Because if we announce that as the policy, that if you're an Orthodox Jew you can request a male doctor but if you're a bigot you can't, lots of bigots are going to say that they're Orthodox Jews, and we're going to be right back where we started.


Is that theoretical, or is there any evidence of that happening?

It's only theoretical, because we haven't announced the policy. This is one of the more complex aspects: you know, sometimes we do look into the reasons that patients make a religious choice. Let me offer you two kinds of examples. One would be a Jehovah's Witness who turns down blood transfusions. And Jehovah's Witnesses, even though there is some flexibility today, historically turned down blood transfusions because they believed that you were denied a place in heaven if you accepted one.


There is no modern medical answer to the question of whether or not you are damned in the afterlife if you accept a blood transfusion. And Jehovah's Witnesses will tell you, "I understand, if I don't get one, I'm going to die," exactly as you or I would say it. It's sort of like why Orthodox Jews don't eat lobster: you can tell them that lobster tastes good, but that's not the point. In contrast, Christian Scientists turn down medication, let's say antibiotics for pneumonia, because they believe that antibiotics interfere with their proof to God that they believe in him and pray for a cure.


Now, hopefully you, and certainly I, believe that antibiotics do cure pneumonia, and there is a factual, empirical, medical answer to that. So those two cases aren't always dealt with the same way. In the first scenario, we almost always let an adult choose. Increasingly, in the second scenario, we let adults choose.


But that's been a longer path. And then there are groups like the Attleboro cult in Massachusetts, which they prefer to call the Attleboro sect, that doesn't believe in the five pillars of modernity.

And what are the five pillars of... oh, I'm not versed in the Attleboro sect.


I believe they include entertainment, education, banking, health care and something else very important to you.


And so, like, they don't wear eyeglasses when they can't see, they don't accept dentures, they won't set a broken limb.


They've been around for 30 or 40 years. They have about 90 members. Do they qualify as a religious group we should respect or are they a fringe cult that can't make its own decisions?


Right. I mean, I guess I can see a justification for prioritizing, for giving more credence, more weight, to the religious preferences of established religions as opposed to newer religions, if only because, if there were no such criteria, no such discrimination, then anyone could just express whatever preferences they want and say, that's my religion, I've named it Julia-ism, and now you have to respect it. But do you think, beyond that somewhat extreme case, there's justification for discriminating between older religions and newer religions, or larger religions versus smaller religions?


And I'll add one more component. We should not lose sight of how strongly you have to hold this belief. The patient who comes to the hospital and is eighty-five years old and says, you know, I have a new boyfriend, he's a Jehovah's Witness, so I am too. And the daughter comes forward and says, my mother was a Methodist until last Thursday. It's the old Jell-O test. You've all made Jell-O with children. Your grandmother puts the colored water in the refrigerator.


And what's the first thing you do? Thirty seconds later, you run there, open the refrigerator door, and put your finger in. At what point does the Jell-O transform from colored water into Jell-O? I know this is a skeptics' convention, but if you believe in miracles, that is a miracle.


At what point do you gel into a religious nonconformist? So in general, we do respect the decisions of religious minorities, but we use criteria not that different from those we use in allowing other people to make decisions. It simply is an added benefit: if you have a long tradition supporting you, from a particular group or a particular religious community, that can help bolster your claim that the decision you're making is rational in the context of your own life.


Well, while we're on the subject of this intersection between religion, or culture, and medical choices, I'd be curious to hear what you think of whether parents should be allowed to select for a trait in their child, or make a medical choice for their child, that society as a whole might consider to be to the child's detriment, harmful to the child, but that to that family, to their culture, is a valid cultural choice. Like, for example, choosing to have a deaf child, which could result either from selecting an embryo that is deaf, or from declining any treatment to cure a child's deafness. And maybe those are treated differently.


But in either case, I imagine the fact that deafness is a claimed cultural choice has to play a role.

Or there's a third way, too, we should not lose sight of. I want to enumerate all three.

Please don't say making the child deaf.

Yes, absolutely.

God.

That's because what we feel is so different. But we're all supposed to be rational people here. What is the difference between, when you have a pool of embryos you can choose from, choosing the deaf ones to implant, and, when your child is born, let's say the day before it's born, or when it's just about to hear for the first time, asking a well-trained physician to change its internal ear structure so it can't hear anymore?


Either way, you get a deaf kid. Now, unless you believe in a different ensoulment for hearing people and deaf people, there's not much practical difference.


Well, I agree that it's hard to come up with a principled justification for treating causing harm as different from failing to prevent harm, for example. But in practice, we do seem to have a moral intuition that those are very different. And I imagine that, even if we can't say that's objectively correct, if we pull that pin out, a lot of other things collapse in our morality.


But if you assume that this is a harm, which I'm not endorsing, by the way, but if you assume that, right, then by choosing that particular embryo, you're causing a harm in a very tangible way.


But the comparison is harder, because with a child who already exists, you can compare the counterfactual. You know, if we hadn't done this thing, then this particular identified person would have hearing versus not have hearing. Whereas if you're choosing an embryo, then you're comparing two people; you can't say that this embryo would have had hearing if I had made a different choice.

But as was pointed out, there was a bucketful of hearing embryos they could have chosen.


So they didn't really see the difference, which is why they didn't allow Paula Garfield and Tomato Lichy, by the way, a high-profile British couple who were deaf, they already had a deaf daughter, to go forward with this plan. To the British authorities, there wasn't very much difference between their choice and actually just deafening a hearing newborn.

That... it totally seems like most people would have a different moral intuition there, that keeping your hands off is different.


Those seem different to you, and they do for most people. And then when you ask them to enumerate why, it gets much more difficult for them.


Mm hmm, interesting. Well, actually, this might be a good time to ask a sort of meta question that I had. You may be familiar with these two different systems of thinking, or systems of decision making: System 1, which is our intuitive, emotional, instinctive way of thinking, versus System 2, which is our sort of reflective, analytical, cool-headed system of thinking. And there's been a fair amount of debate over... like, in some cases where there's an objective right answer,


you can show System 1 tends to get it wrong. Like questions about probabilities, which System 1 often tends to get wrong because we didn't really evolve to handle comparisons between, say, a 0.1 percent and a 0.01 percent chance of something. Questions of scope, System 1 tends to get wrong. But for ethical questions, where you can't just show, hey, System 1 is unreliable in this case, it's a more open question whether your System 1 or your System 2 judgment should be given priority, basically.


So a philosophical thought experiment I know we've talked about on the podcast before is the trolley problem, where there's this train, this trolley, barreling down the track, and there are five people who are tied up on the tracks, or in other versions kids playing on the tracks, and you can do nothing and let the train hit those five children and kill them. Or you have the option to push a man off a bridge, or in some versions to pull a switch, which causes the man to fall onto the tracks.


And he's large enough that he will stop the train. Of course, he'll die in the process, but that will save the five children. So experimental philosophers, or psychologists, have found that when people are thinking more with their System 1, either because that's more the kind of person they are, or because there's been an intervention that's put them more in a System 1 state of mind, they're much more unwilling to sacrifice the man to save the five children, because it's an upsetting thought.


It feels intuitively wrong to be essentially killing someone. But when people are thinking more with their System 2, they're much more willing to sacrifice the one person to save the lives of the five, because they can just calculate objectively: better to save five lives than one life. And some people, like, I think, Joshua Greene, a psychologist who's done many of these experiments, will say, well, you know, the fact that our System 2 gives us this utilitarian judgment is kind of a defense of utilitarianism, because our System 2 is the careful-thinking one.


But I'm not so sure I agree with that reasoning. I'm not so sure it's obvious the System 2 answer is the correct one there. It may be the one I sympathize with more, but I don't see how I would defend that it's objectively correct. So my question is: to what extent do you think the intuitive System 1 judgments should play a role? I'm sure they must descriptively play a role in bioethics. But how much do you think they should?


Well, the most important thing that I've learned is to stay far away from trolleys.

Oh, which solves a lot of problems. But let's say you live in San Francisco and that's not an option.

Yeah. I think what's most important is less when we use our System 1 and when we use our System 2 than knowing which one we're using. So often I am absolutely fine, in the hospital, if somebody wants to make a completely irrational decision, as long as they say to me, look, doc, I know my decision makes absolutely no sense, but I'm going to do it anyway because I have to live with it. Rather than someone explaining to me in very logical but utterly convoluted ways.


They're making a decision that is fundamentally irrational.

Got it. So you see your role as the bioethicist in those situations as just making the person acknowledge the tradeoff that they're making. And then, you know, if they want to make that tradeoff, they can do so, as long as it's conscious, within certain parameters.


There are certain situations where I can intuit that the patient is making an irrational decision and knows they are, without them having to tell me. So with the end-stage cancer patient who is in denial, I don't need to hold their toes to the coals to get them to tell me, "I'm really going to die," because we both understand that without discussing it. But barring cases like that, I just want people to acknowledge which system they're using.


OK, so let's switch tacks a little bit. There's this principle, which I didn't invent but which I've been popularizing, called the Copenhagen interpretation of ethics. And essentially it states that as soon as you interact in any way with a situation, you acquire some moral responsibility for it. So if there's a drought in some geographic region and you fly out there and you start selling bottles of water to people, a lot of people will look at you and say, God, that's reprehensible.


Why don't you give the water to these poor people who are going to die without it? Why are you demanding money for the water? And yet, of course, there are millions of other people who never even went to the country, nor did anything about it, and no one is criticizing them for not giving bottles of water to people who need it. They're criticizing the person who interacted with the situation in some way. And I bring this up because there are a couple of interesting case studies in your manuscript that I thought were examples of the Copenhagen interpretation of ethics.


And I'm curious if you agree. So one of them is the problem of doing studies that involve withholding, from people who are sick and need treatment, some treatment that we know works. Let's say we have a pill, we'll call it pill A, and we know that it works pretty well at treating some disease. But there's a new pill, pill B, that we think might work better, or maybe it's cheaper, and if it works, it would be much better to be able to give it to people.


But ethically, we can't justify splitting people into groups and giving some of them pill A and some of them pill B, because from the people getting pill B, even if they don't know who they are, we're withholding a proven treatment, essentially. So some doctors have gone to Third World countries, where poor people don't have access to any of these pills and are suffering from these diseases, and done the studies there. So arguably, no one in those countries is being harmed.


They're just, you know, some of them are definitely being helped, and some of them may or may not be helped, depending on whether pill B actually works. But they're not worse off than they would have been. But my understanding is that this is a pretty controversial, pretty criticized practice, to use different ethical standards in another country. So what is your opinion on those cases? And do you think I'm right in my diagnosis?


I want to emphasize that clearly only something come up with in Denmark, a country with no problems, could put the burden on people who interact with things and not on anybody else, because the ethical incentive it creates is for none of you ever to leave your apartments, and then you'll lead perfectly ethical lives.


So presumably there is some threshold of engagement, I would argue, beyond which you really take responsibility for something. But I think that's a really good example. There is a difference between you flying through Uganda, where you have a layover in the airport, so you've sort of engaged with the country, and therefore becoming responsible for all of its poverty, and you running a complex clinical trial in Uganda testing treatments for AIDS, some of which work and some of which don't work as well, where you're taking on a much larger degree of responsibility, and you're a trained professional, and you're bringing the imprimatur of some fancy university.


So there's probably a continuum of responsibility. At the extremes you're talking about, I agree with the principle. And I think increasingly there's a consensus in this country that you can't do that. Marcia Angell, when she was editor of the New England Journal of Medicine, decided not to print the results of these studies, and major journals now won't print them.


Can you elaborate on why you agree with it? Like, what's your response to the argument that you're not making anyone worse off than they would have been, and are in fact helping some people?

Let me offer an analogy. It might be helpful.


So let us say there is a famine in Country X, and you decide you're going to solve that famine by adopting a child from that country, bringing him back to our country, and feeding him well. All of us would view that as a good thing in context. Instead, you adopt ten children and you feed them a subsistence diet, because that's what you can afford; in their home country, most of you would feel very comfortable doing that. Or you take them out of a horrifically abusive home and you only beat them on birthdays.


We wouldn't accept that. So why is it OK if we do it across the border?

I mean, that is intuitively compelling. I just want... I want a principle.

Well, as we established before, sometimes there are things where it's very hard to establish a principle, and yet intuitively we can reach a pretty good societal consensus on the right way to do them. And we know it's hard to come up with the principle, but we still don't do them.


All right, all right, fine. We'll set that aside for now.


So another phenomenon that I noticed, and that I picked up on a few times in the examples in your book, is that there's this tradeoff between sort of maximizing utility, being a utilitarian, versus avoiding causing any harm, like following the maxim "first, do no harm." So one example at the sort of personal-doctor level, although I think this happens society-wide as well, is a person coming to a doctor and saying, look, I really want to amputate my limb.


It's not medically necessary, but I feel a strong psychological urge to do it. Or, to take a somewhat, well, I don't know if it's more extreme, a different example: parents coming in to the doctor and saying, look, we really want to have our daughter circumcised, because that's our culture, but we'd prefer to do it here, in a nice, clean, competent medical setting, than back in the community we come from. Will you help us? And in both cases, there's either the explicitly stated intention or the implication that the person, if they can't get a doctor to perform it for them, will just do it on their own, and it will be more painful and more likely to be risky.


So, you know, one could say the doctor in that case is doing the best thing overall by saying, OK, if the surgery is going to happen, at least let me make it happen safely. Or you could say, you know, the doctor should not be performing medically unnecessary amputations and circumcisions.


Well, I think what you're really doing in a lot of these cases is balancing the welfare of an individual, and their outlying request, against the welfare of society as a whole, in terms of establishing certain principles that medicine operates by, establishing the rule of law, establishing consistency. The patient in the hospital is almost always a one-time participant; the doctor, in contrast, is so often a repeat player. And as the repeat player, you can never give one patient all the care they need, because you always have competing goals or competing values from other patients.


This is simply an extreme example of that, where you're weighing our normative goals as a society against what might actually make this particular patient better off in the context of their life.


Can you elaborate on how making that choice, agreeing to the patient's request, would impact society or the doctor's other goals?

Sure.


So in this particular case, let's take the example of the female genital cutting, because I think it's more complex than the limb amputation. This girl may end up being taken back to Africa, end up being cut with an unsterile knife, end up having far more severe medical complications. Or, not getting the procedure, she might end up unmarriageable in her local community, a social outcast, and die in poverty, all of which is very possible. Even if we take all of that as fact, and it may or may not occur, we have to weigh it against the implications of setting up a system that allows other people to accept this as the social norm, because then more people may come forward seeking it.


The efforts to eradicate the procedure elsewhere may become more complex. The belief that doctors won't do something overtly harmful may be undermined in our society. A much more basic example of this: you could think of a scenario where a patient comes to your office who is a school bus driver, and he drinks like a fish. And what you really want to do is engage him in care over several months, and in doing so you'll get to know him better.


You'll be able to help him get alcohol treatment. But you put all those little kids on the bus in danger in the process. On the other hand, if you simply report him to the authorities, you may keep him from killing those little kids. But what is he going to do? He's going to tell all of his other hard-drinking bus driver friends, "I told my doctor this private information and she called the authorities." And then, in the long run, we may end up with more drunk people driving school buses, not fewer.


It's a hard calculus to make.


So the idea is that a lot of these things that seem to be the utilitarian choice locally are actually not, when you zoom out and broaden the scope of your calculus.

Exactly.

All right.


Well, how about this? This was a tough one that I, like, really groaned at when I read it. There's a very contagious disease going around, contagious via the air, not through blood, for example. And there are people who are carriers of this disease, so they themselves are not at risk, their health is not at risk, but they can transfer it to other people, and we don't yet have a cure for this disease. So our choices are between basically quarantining the person against their will, so that they don't infect other healthy people and spread the epidemic, or letting them go free and risking that they're going to make the epidemic much worse.


I forget how much I was willing to pay to not have to make that choice, but it was a lot. What's your take?


And I will add that the New York City Department of Health is glad to make that choice for you; they have taken it off your hands. The paradigmatic case is Mary Mallon, who is pejoratively known as Typhoid Mary. She spent a good number of years forcibly quarantined on North Brother Island in the river across the way, with very, very little outside contact, because, as the poem goes, everywhere that Mary went, typhoid was sure to follow. And for Mary, that was a significant hardship.


But for the parents who lost kids to typhoid, that was also a significant hardship. What I will use that case to point out is that we now know there were lots of other people in New York City carrying typhoid at the same time; they just weren't poor female Irish American immigrants. So when we're doing this, we want to make sure, first, that we have the science right, and that we're not singling out one set of many carriers and placing the burden on them.


Once we do that, then I think we have to really step back and ask the designing-a-world question from behind a veil of ignorance: taking the small risk that we would be that person, would we accept that? Because practically, the alternative is condoning epidemics, which few of us would want to tolerate.


Are you implying that there is a clear answer to that question, that we would want to take that risk?

I think if we took a vote, and I say this with empirical evidence, because when I give bioethics lectures, I actually take votes on a number of these questions. So my sense is that in the general public of people who pay one hundred and fifty dollars for a bioethics lecture, about 80 to 90 percent would put Typhoid Mary on the island.

Hmm.


Your audience? I don't know. They're more skeptical. Maybe they don't trust.


It's interesting. There was this meme going around Facebook recently, at least in my section of Facebook. It showed the trolley problem with a nice visual illustration, and said, most people think that when you're answering the trolley problem, you're the guy who's making the choice of whether to pull the lever or drop the man from the bridge. But actually, the right way to approach it is that you don't know which guy you are. You could be one of the guys on the tracks.


You could be the fat man. Sorry, that's the classic version; he's supposed to be fat, because that's how he can stop the trolley. It's not a very nice wrinkle of the problem, anyway. So, I don't know if that version of the problem has been given to people, but I imagine the results would be pretty different if you're behind a veil of ignorance and you don't know which of the people in the scenario you are.


And at the opposite end, it becomes more difficult if we point out who the person is to you in a flesh and blood, non-quote-unquote-fat-man setting. If I tell you that next time the UPS guy delivers a package, we can tie him down, cut him up, and deliver his organs to six different people and save their lives, virtually none of you will join me with a knife.


Right? So I hope I was just. Yes, please. Any lyric at the end of this book.


OK, I'll ask you one more question, then I'd like to hear if there are any dilemmas that you think are particularly challenging or interesting. Many people here might have read the book Nudge, by Richard Thaler and Cass Sunstein, or this has been reported elsewhere, but basically the finding that was so striking from that book was that when you switch the system of organ donation in a country from being opt-in to being opt-out, you get a huge skyrocketing of the percentage of people who sign up for organ donation.


So basically, people still have the right to choose not to be organ donors; they just have to actually opt out of the process. And I think the number was in the 90s, 90-something percent of people in countries that switched to the opt-out system are organ donors, which is a huge boon for the medical system. To me, this seemed like a clear win. This is just great. You don't have to take away any freedom or autonomy from people, and you can vastly increase the supply of organs.


Wonderful, done deal, no question. But in your book, you made it seem not quite that simple.


Why is that?

Well, I'll point out that you can have a system using opt-out that gets one hundred percent donation. You know how you do it? You make the system opt out and you don't tell anyone. It's very effective. But that's part of the problem.


We live in a country where more people can name the seven dwarves than the justices of the Supreme Court, and only a handful of people out of 100 can name the vice president. With all these alarming figures, to actually have a system where you fully inform everybody is a daunting task. And what often happens is that the cultural and religious communities most likely not to want to donate organs, or to want to opt out, are those least in the network to know what the rules are.


Right. Beyond that, you run the risk of people losing faith in the organ donation system. And I think this also comes up where there are networks that exist, I believe Project Renewal is the most prominent one, that largely recruit organs from Jewish donors for Jewish recipients. Now, in theory, you might say that's a great thing: they're increasing the number of organs available. Therefore, if they give a kidney to a Jewish recipient on the list, somebody who is not Jewish will be able to get a cadaveric organ.


That might be true. We don't know for sure, because the danger is that people will be out there and will say, oh, they give organs to Jewish people and not non-Jews; I don't want to be part of that system; I don't trust them. I'm not saying that's true, but I'm saying that the integrity of the system, and its perception, is crucial to making it work.

Right. So at this point, I want to invite you to tell us about any of the particularly interesting dilemmas you've personally had to face.

I'll share a couple of issues I'm interested in.


One is how to use public resources that have private implications. The classic example of this is the question of whether trace amounts of lithium should be added to the drinking water, and a number of psychiatrists have written about this. You're laughing, but it turns out that a certain percentage of the drinking water in this country already has lithium in it naturally, and the areas where it does seem to have a substantially lower suicide rate. And this is a finding that's been replicated in Greece and Turkey, in Great Britain, in Japan.


So the science is pretty good. Now, the first question you should ask is, would it be ethical to divert water from those areas where it's in the water supply to areas where it's not? And if you embrace that theory, then why wouldn't it be just as ethical to actually add synthetic lithium in the same way we add fluoride to the drinking water? I will tell you, if you raise this question, even without offering a definitive answer, you will get more hate mail than you can possibly imagine.


I can tell you that this is not theoretical. But that's a more abstract question for you to think about. A more practical question: I do a lot of my work in health care resources. Historically, the great questions in bioethics were scenarios where a patient or a patient's family wanted to stop care, and society, for religious or cultural reasons, wanted to enforce care upon them. You can think of Karen Quinlan, Nancy Cruzan, more recently Terri Schiavo. We have now turned that on its head.


Now, increasingly, the cases are those where the family is saying desperately, we want more care, and the hospital or society is saying, you've exhausted your budget of care; in a system of finite resources, nobody is worth five or ten million dollars of health care in one shot; you're out of luck. It's relatively easy, by the way, when the patient is comatose or in a vegetative state. But you have patients like Slim Watson, for example, a prison guard in North Carolina around two thousand, written up at length in The Wall Street Journal, who had a disease where he could walk around the hospital bright and cheerful, entertaining the pediatric patients, in the prime of health, except he needed a drug that cost roughly five million dollars a month.


Is anybody worth five million dollars a month? And the answer is yes if you are the patient, and no if you're not.

Well, do you have any understanding of why that balance has shifted so dramatically, of why patients now want to continue care beyond when the doctors want to provide it?


I think there are two factors at work. One, the reason the older cases have evaporated is that the courts have increasingly made the rules clearer, and we've adopted a far more autonomy-oriented approach to letting patients and families make their own decisions when it comes to ending care, with the palliative care movement and hospice care. However, a combination of technology on the one hand and high-profile cases on the other have led people to believe in, I hate to say it, miracles in their own health care stories.


So, for example, we all know that the life expectancy with diseases like motor neuron disease or abdominal mesothelioma is relatively short; if you live five good years, that is rather impressive. Nobody takes that to heart. They look at Stephen Hawking and they say, I can be him. They look at Stephen Jay Gould, 20-plus years with abdominal mesothelioma, and say, that's me, even if there are only five cases out there. And therefore, a large portion of people, in essence, believe that just because nobody else in their condition has ever left the hospital doesn't mean their grandfather won't.


I've actually been watching Scrubs lately, it's some of my comfort-food TV, and there was a scene that made me gasp, in which Dr. Cox, who is kind of grating, some might say abusive, to his interns, but is supposed to be this brilliant and righteous doctor, says to JD, the protagonist, statistics don't mean anything to the individual. And the way he says it, it's clear he means: if you are an individual trying to decide whether to have a risky surgery or risky procedure, ignore the statistics, because you're an individual and statistics don't apply to you, or something like that.


And this actually reminds me of the problem you were talking about a few minutes ago, in which, yes, we can officially tell people that they can opt out of the organ donation system, but if a lot of people are ignorant and just won't know, can we consider that we've really given them a fair choice? And I wonder, for any kind of risky procedure, or for patients who are participating in a study that poses some risk to them, a statistical risk.


If we know that people have a hard time understanding statistics, and will assume that their case will be special and it won't happen to them, can we really say that we've gotten informed consent?

So informed consent, I would argue, is a misnomer at best. The standard is reasonable disclosure: we're asking the doctors what a reasonable patient would want to hear. Whether the patient is really informed, even though we pay lip service to it, is highly doubtful, because if you only go through it once and you don't have that context, your decision is going to be largely an intuitive one, maybe with a sprinkling of data.


If you went through it a thousand times like the doctor, you might make a very different choice. But that's never an option.


So it's what a reasonable person would request, and then what they do with it afterward. Whether they understood it isn't clear, and even if they understood, whether they, and I emphasize the word, really understood isn't clear. You can understand that no one in this condition has ever left the hospital, or you can really understand that.

I'd be interested to hear if there are any principles or heuristics that you've developed in thinking through bioethics cases. You don't have to defend them as objectively correct, or use them all the time in a hard and fast way.


But things like what kinds of trade-offs you feel comfortable making.

Well, I think your audience may not like this that much, but I believe there's some data on buying houses, and I'm going to butcher the details. But if you buy a house because it's close to the train station and has a flat driveway you won't have to shovel, and lots of other rational reasons, you're less likely to be happy with it in the long run than if you buy the house because you just like it.


And I think that's somewhat translatable to many bioethics decisions. You get a feel for what the family or the patient wants from the scenario, and then you help them get to where they want to end up, which is a far more artistic, subtle, or intuitive process than simply outlining for them two very different rational paradigms. Because often, a rationally oriented person can choose the most rational paradigm, and it may not be the right paradigm for them.


And the difference between bioethics as it applies in hospitals and many other areas of society is there are a lot of different right answers depending on who you are.


Hmm. Well, what about at the society level? So a decision like whether to add lithium to the water supply or not, or whether to allow research in third world countries that we wouldn't allow here, that sort of thing. When you're not trying to midwife someone else's decision, what kinds of principles do you use?


Well, there are two principles I like to fall back on; one is my own and one is somebody else's. First, when deciding what society should let people do, I find it very valuable to ask not what I would do in a particular situation, but whether I can conceive of any reasonable person out there making that particular decision. And if it's a decision that I can't think of any reasonable person making, or find any pathway to a rational decision, then I'm far more comfortable curtailing people's right to do it.


Any examples come to mind?

Yeah, I can offer you a practical example from the hospital setting. Occasionally we'll have somebody come in and turn down an emergency life-saving surgery; let's say they need an appendectomy. And they will say, I have a good reason why, but I don't want to get into it with you. Those are the rare cases where we really override individual autonomy, because it's very hard for me to conceive, not of a reason that people wouldn't want to have an emergency appendectomy, but of a reason that people wouldn't want to have an emergency appendectomy knowing they're going to die, and also wouldn't want to share their reasoning with me.


I just can't get there. And you can try to think of cases; maybe there's a hypothetical construct you can come up with, but it's very hard to come up with them.


Do you have any guess about what's actually happening? Like, assuming you're correct, and they don't have a good reason that they also have a good reason for keeping secret, what do you think is happening?


Usually I think they're probably either misinformed or they're unbalanced, which accounts for a lot of them. But nobody can say, I'm misinformed and unbalanced.


That's why I said my reason for keeping it secret is that I was embarrassed.

Right. So, basically, the second principle I think is valuable is actually not mine; it's Lester Thurow's. He explains why we've gotten to the challenging situation we're in with health care right now in this country. And he says it's because Americans are guided by two different principles. On the one hand, by nature we're libertarian: we believe that if there's some treatment or intervention out there that is available, anybody who has the resources for it should be able to buy it.


Some European countries don't let you do that, which is why those patients come here. But here, high-end treatment, Bill Gates can go out and get it; we don't feel comfortable saying no. But we're also egalitarian by nature. So if we let one wealthy person have it, then we want to find a way to let everybody have it. A rising tide lifts all boats, even if the boat shouldn't rise. And then suddenly everybody is buying into something that may not be societally cost-effective.


And the result of that is that we end up helping visible victims at the expense of invisible victims, because people see Slim Watson's five million dollars and they say, I don't want him to die. They don't see the two million people who didn't get flu shots, or the five hundred thousand people who didn't get mammograms; those people don't know they died of the flu because Slim Watson got his five million dollars' worth of care.


Is there anything your opinion has evolved on in the time you've been studying bioethics cases, anything you've changed your mind on, or things you're more hesitant to justify now than you were before?

I think as I do this more and more, I feel more and more comfortable with letting people make their own decisions, even when I think they're profoundly bad ones. I think with adults, we increasingly let people do that. With children, we become more and more conservative.


Our current theory comes from the Supreme Court case Prince v. Massachusetts, where Justice Rutledge said, paraphrasing, that it's acceptable for people to make martyrs of themselves, but not to make martyrs of their children; we should let the children live to the point where they can make their own decisions. Up to a point, I embrace that, but I think it's crucial we don't lose sight of the fact that those parents and that child have to live with the consequences far more than I do, going back to the hospital and seeing the next patient.


My sense, and I don't know if I got it from your book or just from talking to you before, was that in the trade-off between respecting a patient's or a family's autonomy versus imposing the solution that seems clearly best to you, the doctor or the bioethicist, you weigh the autonomy side of that equation more than maybe a typical bioethicist does. Is that right?


I think that's a fair assessment. And I think a lot of that comes from seeing the consequences of when you get it wrong, and also seeing the consequences in my own life. It's not as if I'm just, like the observer in the Copenhagen experiment, intervening in other people's worlds. I also have my own world, where I've been the relative of patients, and I've heard what the hospital ethicist has to say. And my thinking was not, oh, great, here is this expert come to give me his wisdom.


My thought was this is my relative, not yours. What the heck do you know? And so I imagine the people I deal with feel the same way and I try not to lose sight of that.


And do you think society in general has gotten those choices wrong often enough in the past that we should significantly discount our confidence when we impose a solution?


I think there is no doubt that historically, science and medicine have gotten a lot more wrong than they have gotten right, and maybe the balance is tipping in the other direction, but it's easy to forget our errors. My favorite one, and I teach the medical students this: in the nineteen twenties, there were two treatments for an acute heart attack in New York City. Most people prescribed six months of bed rest. One cardiologist at Hopkins, John Darwin, at the height of Prohibition, prescribed beer, and he would write beer prescriptions.


Everybody laughs. Maybe you don't laugh, because it seems so grim. It turns out the people who got beer did a lot better. Not because beer cures heart disease, I'm not telling you that; don't go home and have your elderly relatives drink beer. But the people who got bed rest without anticoagulants died in large numbers of blood clots. The beer was utterly neutral. And that affected tens of thousands of people.


Was that his point, or did he just really think beer was a cure and he got lucky?

Oh, no. He just thought drinking was good for you.


But he stumbled upon the right answer.

He also prescribed it for smallpox and diphtheria; it was a case of a man with a hammer.


But occasionally he hit a nail and we shouldn't lose sight of that.


I have kind of an out-of-left-field question I've been meaning to ask you. I have the sense that the field of bioethics has a surprisingly large Christian influence, and I say that because I see a lot of Christian bioethicists writing essays on the Internet, but also because there are certain principles that I see cited in bioethics journals that have a kind of Christian flavor to me, like the idea that tampering with the natural state of the human experience or the human body is an affront to nature, or a crime against human dignity, or something like that.


Oh, there was also the President's Council on Bioethics appointed, I think, by George W. Bush, that had a bunch of Christians on it. And maybe the proportion of Christians in bioethics is not high relative to the country, but I think the baseline I'm using is philosophers, who seem much less Christian than bioethicists. So I'm wondering if I'm picking up on a real thing, and if so, why is that true?


Well, I think historically, many of the people in bioethics, starting with the bioethics revolution of the 60s and 70s, were what we would call liberal Christian thinkers. So many of the underlying principles of bioethics, like equipoise and justice, really do stem from people who felt that what doctors were doing may have been well-intentioned but was blind to the larger picture of making the world a better place. I think many of those people, although religiously motivated, developed a set of principles that largely embraced secular values like autonomy.


And what we see now, a much more conservative Christian interest, is really a backlash against what has become the ethical norm in medicine. I can tell you that there are many people I know who are very religious and are ethicists, but very few of them have any kind of, I would say, dogmatic or ideological agenda like the people on George Bush's bioethics commission had. So they are not the representative norm in the country, I don't think.

If a bioethicist is religious, does that influence the advice that he gives to patients, even if not explicitly? Maybe he's not saying, well, as a Christian, I think you should do X, but he's urging patients to do something because it seems right to him.


And the reason that it seems right to him is his Christianity.


Well, I think our goal is to guide people, not to grab people and drag them. I would be a fool to say that bioethicists, certainly in the hospital setting, don't have biases. I think it's important to be aware of those biases in yourself, and they can actually prove very helpful. You can say to the patient who is a fundamentalist Christian, well, I can't speak to your tradition, but I'm an Orthodox Jew and this is what I believe.


And this may help you to some degree. You can also just try to steer them neutrally. It would be a mistake, I think, to tell them, I don't have any religious bias whatsoever, and this is the right answer.


Well, I think we're almost ready to go into question and answer. So, I don't know who's responsible for bringing up the mic. Is that it? Yeah, excellent. So I'm going to ask one more question, and while Jacob is answering, we can have people come up and ask questions.


OK, I don't know how this is going to work; hopefully it'll work. So my last question for you, Jacob, is whether there are any technologies that are on the horizon now, or that have the potential to be on the horizon maybe five or 10 years from now, that you think are going to pose interesting new bioethical issues, or that maybe you're personally concerned about, that you think may well be used unethically?


I thought you were going to ask if I did bar mitzvahs. I was all excited. An issue I'm not concerned about, by the way, is human cloning, which gets lots of press; there's some data to suggest that the majority of the time Congress has spent discussing bioethics issues over the last 20 years has been on human cloning. If you survey audiences around the country, and I ask this all the time, how many of you would like to be cloned?


Nobody ever raises their hand. I think the most interesting issue for me is going to be the development of law and politics around three-parent babies. Some of you may know there is now a process for extracting the nucleus from one egg cell and infusing it into the cytoplasm of another egg cell, fertilizing it with a third person's sperm. And then, in theory, you can place that embryo in a fourth surrogate mother to bring the child to term. And you could end up with three genetic parents and a fourth biological parent.


How society copes with who is the parent, who has custody over the child, and who makes decisions is a question where we have a lot of different answers in different places, and one where there's really not an intuitive consensus.


Well, I think we have our lines, so let's start over here.

All right, short question. I'm an epidemiologist for a large pharma company. Drug prices are quite high. Hep C is changing the market a lot; it's like 100 grand a year to get cured of Hep C, but the alternative might be liver cancer, really expensive diseases that you'd have to treat. There's also the pipeline: if you have an expensive drug, then you can invest in a lot more clinical trials for other drugs.


Question for you is, what do you see as the ideal ethical approach? What criteria would you use to evaluate the optimal prices for drugs?

A short question indeed.


And I'm a short bioethicist, so it works perfectly.


So I think the intuitive answer is not the right one. The intuitive answer would be to set up a rational system and figure out exactly where a drug becomes cost-effective for society in relation to other health care expenses, seeing how large a pool we have for health care and assuming that pool is somewhat immutable. I think you probably want to have some margin there for what people as a whole wish as a society, even if we can agree rationally on the right answer.


The analogy I would draw is the Oregon health system, which is a fairly rational system where they don't pay for treatment for certain end-of-life diagnoses with a prognosis of living less than six months, and I think they set the threshold at five percent. It turns out that, rationally, you could actually make that period substantially longer, make the cure-rate threshold substantially higher, reallocate the health care dollars, and come up with a more rational answer. But intuitively, none of us would feel comfortable telling somebody with a 20 percent chance of living a year and a half,


you can't get chemotherapy. So in the same way, I think we want to figure out what the rational calculus is, figure out what the societal wish is, and come up with some kind of balance between the two. That, unfortunately, is why we have elected officials, to figure out that balance.

I am a primary care physician who treats patients with opioids, opiates, narcotics, and as recently as five years ago, that's what we were taught to do.


And now all of a sudden, we are the problem.


So there is great pressure on us, especially starting in twenty seventeen, where we will be retrained and we will have strong disincentives to prescribe. What obligation do I have to the patient who is on chronic narcotics (there's no diversion by this patient at all; I'm trying to get them down to as low a dose as possible, but they do not want to come off, because they fear their quality of life is going to be sacrificed), versus society at large, versus the state, which will actually fine me in 2017 if T's are not crossed and I's are not dotted?


To me, it's a very complex situation, and one where I sometimes feel a sense of being lost as to what I'm supposed to do.


I think that's a great question, and it's an issue we're going to hear a lot more about. It's a great example of hard cases making bad law, in a sense, because there are outliers: there are doctors who abuse the system, there are patients who abuse the system, and we have a national epidemic. So we've designed a rule that is both overinclusive and underinclusive. And I think the individuals who are going to suffer from it are people, exactly as you said, who've already been established in one system and are then asked to switch to another.


Had I designed the system, I would much prefer one that allowed some kind of grandfathering, and a much more gradual adjustment over time to a new way of doing things. I understand that governments don't like doing things that way, not just here but in many other areas, but I agree it's deeply problematic.


First of all, I'm a fat guy who lives in San Francisco with a trolley line literally outside my front window. So you're scaring the hell out of me.


But my question has to do with the fact that a lot of the dilemmas you pose come down to the difficulty we have choosing the best of several bad options. Given the choice between a good option and a bad option, everybody kind of agrees, well, let's take the good option. But when we have to choose between the least bad of several bad options, we seem to have some kind of barrier. We want a good option out there somewhere, and we're frustrated that it's not there. Is this a problem with our cognition, or some other way that we think about hard dilemmas like this?


Can I answer with a joke? I'll answer you with a joke, because if I didn't, you might not know I'm a bioethicist. Two Mount Sinai professors retire and go to Yellowstone, and they're about to take their first walk. The first morning, the guide says to them, you know, gentlemen, I have to warn you, there could be bears in the woods, the legal disclaimer. The first professor says, fine, bears. The second professor goes back to his cabin and gets his running shorts.


And the first professor says, are you out of your mind? You cannot outrun a bear. And the second professor says, of course not. But now I can outrun you.


That is how, unfortunately, many decisions are made in health care.


And it's a framing technique I think most of us use: when we give people two undesirable options, by nature they twist one of those options into a desirable one. So I think you've hit the nail on the head. I don't have an answer for how we can convince people to see both of those as unpalatable, because there is no greater motivating factor for most people than hope.

This is about altruistic kidney donation.


All right. I suppose you're aware that there is a website which matches people who want to donate kidneys, and the donor picks out the person she wants to donate to. I just really feel this is not the way to go, and I was wondering how that's seen in bioethics.

And I will add, there have also been advertising campaigns.


A number of years ago in Texas, there were billboards all over the state. On the question of whether you should be able to allocate your cadaveric organ to someone in particular: if it increased the supply of organs, I would not be opposed to it. I would still have some problem with it, in the sense that you're still going to have the justice issue of allocating resources. But I think the greater question, to which we have no empirical data, is: does this actually increase the supply of organs?


Would those people have donated anyway if they didn't have the choice? And are other people being scared away from the process by the sense that receiving an organ this way is unfair? Until we know the answers to those questions, I agree with the questioner: this is deeply problematic. Unfortunately, there are complex social and financial incentives, visible versus invisible victims. When somebody shows up at the hospital and says, I'm willing to give a kidney, and here's my friend I met online willing to accept a kidney, it's very hard for a nephrologist to say, we wish you the best of luck, but not here.


Audience member: I think many philosophers have hoped to find something like a function by which ethical evaluations can be made. An alternative to this is virtue ethics, which views ethics as one of many traits by which a living thing can flourish or not flourish. Does this have any place in making practical decisions in a medical setting?


Well, virtue ethics is basically what we practiced for many, many years, though we didn't call it that. We asked the senior doctor at the hospital what he thought the right thing to do was, and because he was a likeable guy, we did what he did. Unfortunately, more often than not, when we look back on those cases, we got them deeply, deeply wrong. So I think there is a role for it in the global picture of coming up with an answer.


But I don't think it increasingly affords us an answer to many of the questions, in part because making ethical decisions now is complicated by the depth of knowledge you need: not just a sense of ethics, but also a sense of the technology, and finding people who have both is extraordinarily hard to do. We have time for one more question. Thank you.


You touched earlier on the respect offered to religion in terms of patients' choices. Could you touch a little bit on the respect we should offer to religion in terms of organizations: religiously affiliated hospitals, insurance companies, and so forth? I know there are some campaigns to insist that such facilities disclose to all patients, ahead of time, what limitations there may be on their care, because obviously many of these facilities provide care to vast numbers of people who do not subscribe to that particular religion.


I will offer two thoughts on what is a very complex field. I've actually written a fair amount on it, and I'm glad to talk with people afterward, or they can email me with questions. On the one hand, we want to ensure a system where people, at a minimum, know what is available to them at any particular facility where they get care. On the other hand, we also want to make sure that everybody out there can, in a reasonable way, get the kind of care they want within those parameters.


I also feel it's important, if we can achieve that, to allow people who have different views, whether religious or cultural, to practice medicine their way. For example, if you're a Christian fertility clinic in California, because there was a real case, and you won't treat lesbian couples, and you tell people up front, and there are many other clinics in the area that will do it, it's not particularly problematic. It's different from baking the birthday cake, in my mind.


It's an intimate, long-term relationship. If you're the only fertility clinic in the area, or you don't ever tell people in advance, that becomes much more problematic. Achieving that in practice is challenging, but I think there are ways to do it.


Right. We're just about out of time. So thank you so much, Jacob, for returning to NECSS and to Rationally Speaking. Let's all give Jacob a big hand.


Thank you. Thank you for having me. And as always, this concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense. The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollak and recorded in the heart of Greenwich Village, New York.


Our theme, "Truth," by Todd Rundgren, is used by permission. Thank you for listening.