[00:00:14]

Rationally Speaking is a presentation of the New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at NYCskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I am your host, Massimo Pigliucci, and with me, as always, is my co-host, Julia Galef. Julia, what are we going to talk about today?

[00:00:47]

Well, Massimo, today is another very special episode. Oh, yay, yay. This is our 20th episode of Rationally Speaking.

[00:00:54]

And our new pattern is that every fifth episode, we'll be taking Q&A from our readers and listeners and doing a one hour special episode where we answer whatever's on your rational minds.

[00:01:08]

So we've collected a lot of questions from the rationally speaking blog. I'll start out with a question from a commenter named.

[00:01:16]

Hmm. And that's really his name. Anyway, he asked whether groups such as the skeptics overemphasize science and rationality. Basically, he asked: should they be less like Spock and more like McCoy?

[00:01:31]

So, yeah, honestly, if I can start: I think that this whole idea of overdoing or overemphasizing rationality is kind of a straw man, or, to use a term that I recently came across, a "Straw Vulcan."

[00:01:47]

That's from TV Tropes. I love that so much.

[00:01:50]

So it's this idea that being too rational is bad for us, either in general or in specific types of situations or contexts.

[00:01:59]

And I would argue that this mostly stems from a misunderstanding of what rationality is.

[00:02:05]

So I would define the rational thing to do in a situation as whatever is most likely to get you the best results, for whatever definition of "best" you want to use. And so if someone is going to tell me that, well, sometimes doing the rational thing is actually not a good idea, then I'd say: no, you're doing it wrong.

[00:02:24]

That's not the rational thing.

[00:02:26]

And I think so.

[00:02:28]

But let me give you an example. One thing that I've heard recently is that, well, sometimes it's better not to be rational, because a rational person would have to consider every little detail of a situation and get all of the information before acting. And sometimes it's really best to just act quickly and intuitively rather than waiting until you have all the information, which, you know, is true sometimes.

[00:02:50]

But the reason is not that, well, irrationality is sometimes better. The reason is that, of course, there's a cost to information. There's a cost, in terms of time and money and resources, to gathering more information and taking more time to think. And so the rational thing to do is to weigh the cost of the information against how beneficial that information is going to be to you in making your decision. And frequently the rational thing is not to wait until you have all the information, because the cost is too high.

[00:03:16]

The rational thing is to act on limited information.
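Julia's cost-of-information argument can be sketched as a toy expected-value calculation. All the numbers below are made up purely for illustration; the point is only the structure of the tradeoff: gathering information raises your chance of choosing well, but only pays off if the gain exceeds the cost of gathering it.

```python
def expected_payoff(p_correct: float, win: float, lose: float) -> float:
    """Expected value of acting when you choose well with probability p_correct."""
    return p_correct * win + (1 - p_correct) * lose

# Acting immediately on limited information: 70% chance of choosing well.
act_now = expected_payoff(0.70, win=100.0, lose=-20.0)  # 64.0

# Gathering more information first raises accuracy to 90%,
# but costs 30 units of time/money/resources.
info_cost = 30.0
act_later = expected_payoff(0.90, win=100.0, lose=-20.0) - info_cost  # 58.0

# The rational choice is whichever has the higher expected value;
# with these (hypothetical) numbers, the extra information isn't worth its cost.
best = "act now" if act_now > act_later else "gather info"
```

With a cheaper or more decisive source of information the comparison flips, which is exactly the weighing Julia describes.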

[00:03:20]

Yeah, I think you're right. I mean, sometimes the rational thing to do is, in fact, not to do an exhaustive search, for instance.

[00:03:27]

But often, actually, I would argue that that is the rational thing to do, simply because there are diminishing returns: you keep spending more and more time and energy and resources searching for an optimal solution, while in fact you have to live a life and settle on some kind of solution. And I agree, well, I'm going to disagree in part, I guess, with your take on this. On the blog, actually, on August 25th of this year, I wrote an entry on this topic.

[00:03:56]

It's called Between Spock and McCoy. Right.

[00:03:58]

And in parentheses it says "via Aristotle." I agree with you in the sense that sometimes these kinds of objections come out of a misunderstanding of what it means to do the rational thing. For one thing, I also think that this opposition between rationality and emotions is overemphasized by some people, who really seem to think that these are completely different, independent systems in the brain: that either we have, you know, this rational decision making, or we're completely driven by emotion.

[00:04:31]

And there is nothing in between, nothing more interesting going on.

[00:04:35]

This is actually a view that goes all the way back to Plato, who thought that the human soul, as it was called at the time, has three parts, and that the rational part has to be in charge, because otherwise all sorts of bad stuff happens. But of course, modern neurobiology shows that the emotional circuits of the brain, which are in the limbic system, the amygdala and so on, and the rational, cognitive systems of the brain, which are in the frontal lobes, are actually very much integrated.

[00:05:06]

There are literally millions of fibers that go back and forth, and the two have to communicate within an individual for that individual to make reasonable decisions. So a reasonable decision is always some sort of balance between your emotional instincts and whatever it is that your cognitive processes tell you is the best thing to do. So I think that a better way to look at it is that these two things are not in

[00:05:33]

opposition to each other; they need to be integrated. After all, if we want an example of what a human being would really be like without emotions, just look at a psychopath. And that's not a particularly pretty sight. On the other hand, it's also hard to argue, in my opinion, that this is a world where way too much rationality is taking hold and people always make rational decisions, so let's go back to emotions. I don't think anybody can make that argument seriously.

[00:05:59]

So a little bit more integration, which is really what the science of neurobiology itself is telling us, is the way to go. So I don't see the opposition, and therefore I would classify that as a false dichotomy.

[00:06:12]

Yeah, that's actually another good example of what I would call a Straw Vulcan. I've heard this from people before, too: something to the effect of, well, you know, if you're fully rational, how can you really enjoy life?

[00:06:24]

And the implication is something like: in order to enjoy things like food and sex and beautiful sunsets, you need a rational reason to do so.

[00:06:36]

This is what they think a rational person would say. And no, you don't need a logical argument for why sunsets and sex and food are enjoyable.

[00:06:46]

They just are. That's how we're built. They just feel good, right?

[00:06:51]

Yeah. This is another reason people think that there's such a thing as "too rational": that it would mean you'd need some excuse or justification for taking enjoyment in things.

[00:06:59]

But again, psychopaths aside, I don't know many rational people who don't enjoy sex and sunsets and so on and so forth.

[00:07:07]

Right. Case in point. All right. Let's move on to a question from Ian Pollock, who asks: what is the relation of politics to rationality? How do values fit in, and what counts as a rational policy?

[00:07:21]

I should mention that Ian is actually the person who designed the logo on the Rationally Speaking blog.

[00:07:26]

Oh, yes. Thank you. It looks very nice. So, interesting question, because the next Rationally Speaking post, which will come out in a couple of days, which means obviously that by the time people listen to this podcast it will be out, is precisely on that topic.

[00:07:44]

That's why I just ran.

[00:07:46]

But it is precisely what our producer, Benny Pollack, calls the demarcation problem for skeptics. So what is it that makes a topic a good topic for rational argument, for rational discussion, and therefore, sort of by exclusion,

[00:08:04]

what kinds of topics are not good topics for rational discussion? Well, I think that, for instance, taste in chocolate is not up for rational discussion. It's obviously clear that dark chocolate is best, and there is nothing to discuss there.

[00:08:22]

Wait a minute now.

[00:08:24]

I mean, clearly, for matters of taste it's hard to imagine that there can be a rational discussion. Right. Right. You know, somebody likes dark chocolate or milk chocolate, somebody likes a particular type of music or a particular type of painting, and so on and so forth. And you cannot question that, not in the way you can question, on sort of more objective grounds, the technique or the way in which those things are done.

[00:08:51]

Right. But then it's a matter of taste; I don't think there is any discussion to be had about it. At the opposite extreme, clearly, we have science and philosophy and logic. Those are clearly areas where rational discourse is the way to go, despite postmodernist philosophers.

[00:09:09]

And then the interesting question, I think, comes for the fields in between. So politics, for instance: can we have rational discourse in politics?

[00:09:21]

And I would hope so, because if we cannot have rational discourse in politics, then that seems to me to undermine the foundations of a democratic state. One of the basic assumptions about a democratic state is that we can, in fact, have rational discourse and even occasionally change each other's minds about issues of values and policy. If we cannot, then the whole thing comes down to a shouting match. Now, unfortunately, lately the political situation in the United States is very much like a shouting match, but it has not always been the case.

[00:09:59]

I do know people change their minds about political opinions on rational grounds, so I don't think that's completely out. The problem there, of course, and this connects to our earlier discussion about emotion and reason, is that political opinions often do come very much charged with high emotions.

[00:10:18]

And it is, in fact, difficult to keep the emotions at bay enough to try to really understand what the other person's argument is.

[00:10:26]

I would agree with that. But in my experience at least, it's often the case that rational discussion in politics only goes so far, once you get the objective facts out of the way. You make it sound trivial, but, you know, that's a big deal.

[00:10:44]

In a lot of political discussions, getting the right answer depends on empirical evidence about what the history is of, you know, relations between different countries, and how certain economic policies have worked in the past.

[00:11:01]

And, you know, these are objective facts, and really important.

[00:11:04]

But even if you could answer all of those questions with 100 percent certainty, there still seems to be a huge component of just preferences, just value judgments.

[00:11:14]

So, I mean, there are these fundamental tradeoffs in the political decisions that we make, between things like equity and efficiency, or between liberty and security. And if someone prefers to trade off more of one to get the other than I do, then I'd be hard pressed to explain why they're wrong about their preference. So I can give an example: I've had this conversation recently about social welfare. This issue tends to get a lot of people really worked up, because they're very bothered by the idea of so-called welfare queens taking advantage of the system and not working.

[00:11:54]

Now, there are objective facts involved here. How widespread is this problem? It may not be as widespread as these people think it is, for example.

[00:12:02]

And to what extent could we reduce the problem with various kinds of welfare reforms?

[00:12:06]

But at the end of the day, if you want a generous social safety net, there are probably going to be some people gaming the system, taking advantage of it undeservedly. And I might say, you know what, it's worth it. That's an acceptable tradeoff to have the benefits of the safety net for the people who really do need it.

[00:12:24]

But I think a lot of people are upset enough at the idea of that going on that they would say, no, that's not worth it.

[00:12:29]

And I don't agree with their preference, but I don't know how I would rationally tell them that they're wrong.

[00:12:36]

Well, I think, yeah, that's a good example, but I think that there are a couple of ways to go about it. First of all, of course, underlying the disagreement here between the two of us may be the fact that, as you know and as we discussed in the past, I do actually think that you can have rational discourse about ethics, which, of course, underlies values. That doesn't necessarily mean that you can show other people that they're wrong in the same factual sense you're talking about.

[00:13:02]

Right. So let's take your example: if somebody believes that there is 20 percent abuse of the welfare system and the real figure is actually 0.02 percent or something like that, well, that's an empirical matter. You can figure it out. And if the other person then refuses to accept the figure, that is the other person's problem, for literally rejecting reality rather than making an argument. So that is something that can be settled.

[00:13:30]

But one can say: yes, but my assumptions, my principles about how a democracy or a state should work, what the social contract is supposed to be, are different principles from yours. And so on what are we going to settle it? On matters of fact we're not. But that doesn't mean we cannot have a rational discussion about it, because I think of our values as the equivalent of assumptions in a mathematical system or in a geometrical system.

[00:13:59]

So.

[00:14:01]

As you know, in mathematics, for instance, you start with different axioms and you come up with different results. But once you do agree on certain axioms, then it is a matter of mathematical fact that, for instance, two plus two equals four, or that within Euclidean geometry the Pythagorean theorem is correct, and so on and so forth.
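Massimo's axioms-to-theorems analogy can even be made literal. In a proof assistant such as Lean, for example, once you accept the definitions of the natural numbers and of addition, "two plus two equals four" is checked mechanically, by pure computation from those starting assumptions:

```lean
-- Given Lean's axioms/definitions for the natural numbers and addition,
-- 2 + 2 = 4 holds by definitional computation (reflexivity).
example : 2 + 2 = 4 := rfl
```

Disagreement is only possible at the level of the axioms themselves, which is exactly the two-level picture described here.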

[00:14:22]

So there are two levels of discussion here. If you accept the axioms, if you accept the assumptions, then there are matters of fact that become relevant, and the issue can be settled on matters of fact. What if you don't accept the axioms? Then we can talk about why you don't accept the axioms. There may be reasons. I mean, I don't think that people necessarily have only the same kind of preference about political systems or ethical systems that they have about chocolate, or a particular type of painting, or a particular type of music.

[00:14:53]

I mean, people have reasons for these things, reasons that they would deploy. For instance, very recently I read an article by Matt Taibbi in Rolling Stone that was addressing precisely this point: how the Tea Party sees welfare, the welfare state. In that particular case, as it turns out, he was talking to a couple, asking them what they thought the problem was and why they thought there was a problem. And it turns out there wasn't really a problem.

[00:15:21]

It turns out that these people thought they were starting from different principles, but in fact they were OK with a massive welfare state; they just had a very different attitude when it came down to their own welfare versus other people's welfare.

[00:15:42]

So Taibbi, for instance, in this particular case, pointed out that they themselves were the recipients of things like Medicare, to which they were otherwise objecting.

[00:15:52]

And so when you point out that contradiction, that is a contradiction that stems from their own principles. And the only way to deal with an apparent contradiction is either to acknowledge that your principles are in fact incorrect, because they led to a logical contradiction, or to somehow argue on the basis of the facts, which is: I am the only person who does not abuse the system, or I'm one of the few people who doesn't abuse the system. If you argue in that second way, you're back to matters of fact; we can establish how many people abuse the system.

[00:16:24]

If you do not argue on that basis, and you're just saying, look, I'm against Medicare but I want Medicare, then you're contradicting yourself. Then, starting from whatever system of ethical or political positions you have, you're starting from a system that immediately leads to a contradiction, a logical contradiction, which means that you're really wrong. You're not wrong factually; you're wrong logically.

[00:16:45]

Right. Although in my experience, people tend to be pretty good at coming up with reasons for why they're an exception, reasons that are theoretically generalizable but of course are ad hoc. Correct.

[00:16:55]

But that's a different matter.

[00:16:55]

I mean, that's just the person not realizing that he contradicts himself, or wanting to, you know, weasel out of it. Yeah, that's true.

[00:17:03]

I am actually pretty optimistic that a lot of things that seem to be differences in value judgment could actually be resolved by better, more accurate empirical knowledge. But that's hard to come by.

[00:17:19]

These political debates tend to rely on such complex information, especially when there are economic issues involved, that it often ends with two people making empirical claims that contradict each other.

[00:17:33]

And I sort of throw up my hands and say: we need an expert and five encyclopaedias.

[00:17:37]

Well, of course, the politicians don't help, by often literally making stuff up. You know, now there is a proliferation of websites that actually do fact checking, which is what journalists used to do; apparently, these days, they don't have the time or the money to do it anymore. And it's pretty clear, I mean, the trend is incredible. You can show that a lot of politicians of all political stripes, although I would maintain one particular political stripe more than another, literally make stuff up.

[00:18:06]

I mean, they just throw out numbers when they go on CNN, or on a radio show on NPR, or, God forbid, on FOX.

[00:18:13]

And they just make up stuff. They throw out numbers, you know: this is going to cost this many billions of dollars. And then, when you actually go and check, it turns out that the number came out of nowhere. Well, when you're dealing with people making up numbers, then we're not even talking about logic or factual accuracy; we're just talking about lying.

[00:18:27]

All right.

[00:18:28]

So I think this is probably a good point to move on to another question, from a commenter with the initials ACEL, who asked us to comment on Sam Harris's new book, The Moral Landscape, which just came out, and to talk about how much science has to say about values.

[00:18:47]

So I have not actually read the book in its entirety, but I've sort of skimmed it, and I've read other stuff that Sam Harris has written on the topic, and seen his TED talk on the topic.

[00:18:58]

But the argument of the book, as far as I can tell, seems to be that science and empirical evidence can tell us a lot about how to improve human well-being. That seems uncontroversially true. But Harris seems to think that that sentence is equivalent to: science and empirical evidence can tell us a lot about what moral behavior is. So he's made this huge jump of equating increasing human well-being with moral behavior, which is far from being as self-evident as he seems to think it is.

[00:19:32]

And in fact, when I talk to people about what they think moral behavior is, or when you read about morality, I would venture to guess that most people would disagree with that definition of morality that he seems to think is just obvious.

[00:19:47]

Yeah, there is another blog entry that deals with this; it was published on April 6th, actually. And it's a direct, point-by-point analysis of Harris's various arguments, to which Sam Harris actually responded on his blog. So people who are interested can look up both the original posts and, of course, the cross-commentaries on these things. It's a really interesting discussion, because, as you said earlier, it's definitely the case that science can tell us what does or doesn't increase human well-being in any particular area or another.

[00:20:25]

Right. So that there are empirical issues concerned with human well-being is, I would say, a truism. I don't think anybody, in any moral philosophy, would seriously disagree. The problem is, as you pointed out, when you equate the totality of values to empirical facts, we're back to the discussion we just had about politics.

[00:20:49]

That is, to me, moral values are assumptions that we carry into our discussions about the world: about what is important and what is not important, what takes priority and what doesn't take priority. And it doesn't seem to me that empirical evidence there is particularly pertinent, for the simple reason that empirical evidence, as a matter of fact, doesn't usually change how people see the world in ethical terms.

[00:21:19]

It takes more than that to change people's opinions.

[00:21:22]

You have to show that their view of the world implies contradictions, for instance, that it's logically inconsistent, or that it carries certain consequences that they actually don't want, that they actually abhor.

[00:21:35]

And that is the point where people might start seriously thinking about reconsidering their moral values, certainly not because of empirical evidence. The other thing to think about is that if you in fact think that moral values are entirely determined by empirical evidence, you open yourself to a really disturbing series of possible consequences. I put this question to Harris on the blog, and I didn't get what I think was a reasonable answer, and it is this: well, suppose that, for instance, a new economic theory shows that slavery increases overall well-being in the societies that adopt it.

[00:22:18]

I don't think that is the case, but it could be the case, right? It may be the case, if it is an empirical matter. It's not impossible. Exactly. It's not logically impossible; it is a perfectly real possibility.

[00:22:31]

So suppose that that is the case.

[00:22:33]

I am going to wager that Harris would still think that slavery is immoral, regardless of what the empirical evidence says about well-being, and would try to get out of it by redefining well-being, for instance.

[00:22:46]

Right. So I think he does try to make the claim that at their core, all moral systems are really about increasing well-being in some respect.

[00:22:57]

But he only accomplishes that by defining well-being loosely enough that the goal of any moral system counts as well-being. And you really do have to define it pretty loosely.

[00:23:10]

I mean, there are some really widespread moral beliefs that are really hard to connect to views of well-being. Like, I think a large chunk, maybe even a majority, of people who responded to the moral studies of Jonathan Haidt, the experimental philosopher and cognitive scientist, said that it's immoral to burn an American flag in the privacy of your own home.

[00:23:34]

Right. No one's being hurt. There's no well-being involved here. And yet it just feels wrong to them.

[00:23:38]

So it's immoral. Or, I know for sure that a vast majority of the respondents to his survey said that it was morally wrong for two siblings to have sex with each other, even when it was stipulated that there was birth control, no chance of a pregnancy, and that neither one of them was harmed by it; in fact, they enjoyed themselves. So he removed any possible way of explaining this reaction in terms of a well-being argument. And people still insisted that it was morally wrong.

[00:24:06]

So, you know, that's right.

[00:24:08]

And Harris may like this to be, you know, the principle underlying moral judgments. But in practice, it's just not.

[00:24:14]

I think it's even worse than that. Those are all very good objections to Harris's perspective. But I think it's even worse: it's not just that he defines well-being in a loose enough way that pretty much everything fits; it's that he doesn't actually seem to be aware of the fact that different ethical systems do not necessarily value human well-being above everything else. I mean, the system that does is consequentialism, or utilitarianism, if you prefer. In fact, there is a recent article in The New York Times by Anthony Appiah, a moral philosopher who reviewed his book.

[00:24:48]

And Appiah pointed out what to me was obvious from reading Harris's writings, which is that Harris, apparently without realizing it, is a consequentialist; he's a utilitarian. He seems to think in terms of the greater good for the greatest number of people. But there are other systems, such as virtue ethics, which is not about the greatest good for the greatest number of people; it is about individual well-being, defined broadly. Yes, because Aristotle's view of well-being, or eudaimonia, or happiness, was broadly defined, but nonetheless the focus there is on the individual in the context of society.

[00:25:21]

It's not on the majority. And deontological systems, both of the secular variety, like Kantian systems, and certainly the religious varieties, such as the Ten Commandments, do not necessarily have anything to do directly with human well-being.

[00:25:37]

Take the Ten Commandments: they do include things like the ones you were talking about earlier. There are prohibitions against violating sacred objects, or against engaging in certain behaviors because they're repulsive, and so on and so forth.

[00:25:53]

So it seems like, before we agree on what kind of empirical data are relevant to establishing the best course of action for human beings, we first have to talk about, well, what set of values and what set of priorities are we considering? Then those priorities can be informed by the science when it comes to implementing courses of action. So the model to look at, I think, is an interaction between science and philosophy, where we have to have a discourse about moral priorities and why certain things count as moral and certain others don't.

[00:26:29]

I would assume, for instance, that a good number of our listeners probably would agree with what you implied earlier, which is that harming people, or issues of fairness, are in fact moral issues.

[00:26:43]

But on the other hand, you know, doing whatever you want in the privacy of your home, when it doesn't hurt anybody, doesn't count as a moral issue. But of course, a large number of Americans disagree. You know, if you're a conservative, you tend to actually see those as dimensions of morality.

[00:26:58]

And not only harm and fairness as the dimensions of morality. That means that there are different systems out there that are based on different assumptions, and the same data will tell different things to different people, depending on which system they use to filter those data.

[00:27:14]

Well, Massimo, you and I may disagree, as we've learned on the blog before, about how much it's possible to defend moral axioms rationally.

[00:27:25]

But at the very least, we can agree that we disagree with Sam Harris on this one. That's right. Let's take another question from Janice, who asks a very good practical question.

[00:27:38]

He wants to know what innate or learned mechanisms you use to try to determine whether your disagreement with an argument is mostly emotional.

[00:27:47]

And once you notice that you have sort of an emotional disagreement with an argument, how do you guard against bias? How do you make sure you're considering the argument fairly, given your emotional state?

[00:27:58]

That's an excellent question. A great question.

[00:28:01]

And I have a few things that I've developed to deal with this kind of problem.

[00:28:07]

So, for me, I usually don't have any trouble considering an argument on its own merits; the big exception is when I feel like the other person isn't playing fair in some respect. That might happen if I feel like they're being needlessly sarcastic or condescending, or if they seem to be deliberately misunderstanding my points or not listening to me.

[00:28:30]

And that's when I'm really in danger of just sort of turning against them emotionally. And then I'm not really listening to their arguments on their own merits anymore.

[00:28:38]

I'm sort of immediately on the defensive.

[00:28:42]

And so I have three mental tricks that I use to guard against this.

[00:28:47]

The first is just that I imagine that it's someone else making their argument. So I'm hearing their words, I'm hearing their argument, and I'm just imagining them coming out of the mouth of someone else, usually someone who I like and respect. And it's interesting: sometimes, doing this thought experiment, I'll be having this reaction of, oh, this person is so wrong. And then I switch their bodies in my mind with someone else, and hear the argument coming from that other person.

[00:29:12]

And I think, well, actually, it's not that unreasonable; they make some good points. So that, I think, shows that it's a useful trick. The second is simple.

[00:29:21]

I try to remind myself of the times that I've been wrong before or made erroneous arguments.

[00:29:28]

And that makes me feel more generous toward the person whose wrongness I'm currently seething over.

[00:29:36]

And then third, I try to keep myself focused on the benefit that I'm getting out of the conversation, out of the interaction.

[00:29:44]

So even if I feel like the other person is being unnecessarily sarcastic or obnoxious, say, in the way that they're arguing with me, it's still useful for me to hear their arguments, either because they might be right about some things at least, or at the least because it's good practice for me to try to articulate clearly why I think they're wrong.

[00:30:05]

And if you can keep focused on your own self-interest, then you realize that it would be a waste, just for yourself, to let your emotions prevent you from getting these useful benefits out of the interaction. Yeah, that sounds very good.

[00:30:17]

I'd like to be in your mind when you do that kind of switch between different people; I don't know what that would feel like. I mean, those are useful tricks.

[00:30:26]

Of course, this is a very well known problem in cognitive science; you know, there are a lot of studies. One of the books that I've been reading recently is called Being Wrong: Adventures in the Margin of Error, and it deals entirely with this issue.

[00:30:40]

And the emotional underpinnings of this issue are as you pointed out. Now, of course, there are two major reasons why we might not get somebody else's argument right.

[00:30:53]

One is the kind of reasons you're talking about. The other one is that, in fact, we cannot appreciate the logic of it.

[00:30:59]

Right. I mean, some arguments just fly above our heads, or below our radar, or however you want to put it, so we just don't get them.

[00:31:09]

And I'm not talking about being stupid here. I'm talking about, you know, finding myself in situations where I thought I wasn't getting the logic of an argument. So it wasn't one of these emotional situations that you're talking about, with which I'm also familiar. These were situations where I knew that the person in question was probably making a good argument.

[00:31:31]

And yet somehow what I was hearing was clearly so unconvincing to me that I thought either the guy is completely off in terms of logic today, or I am. So there is also a genuine question that sometimes you just don't get it, for a variety of reasons: we don't have enough background on the particular topic, or we're not having a good day for rational arguments, that sort of stuff.

[00:31:58]

So what I find helpful in those cases is to go back to trying to suspend judgment, and to return to the topic as presented by different people: having conversations with different people on that topic, or reading different authors on that topic. The times that I've changed my mind in life on certain things based on argument, it's typically been over a long period of time, once I'd had time to process different angles and different ways of explaining a particular issue or a particular problem.

[00:32:31]

I might take notes right now on how to change your mind in the future. Yes, I can see that.

[00:32:36]

And now there is another thing to consider, which is, I guess, the simple answer really is at the top of our blog. If you go right to the top, right below the very nice graphic that you put together for us, it says, "Truth springs from argument amongst friends," which is a David Hume quote, and which I think actually summarizes at least part of what you were saying earlier.

[00:33:01]

Um. "Truth springs from argument amongst friends" means that you're talking to people with whom you're friends, and therefore a lot of the emotional component is taken out, because this isn't somebody you despise, or somebody who is condescending to you; he or she is a friend. So you're talking, presumably, while having dinner over a glass of wine, which always helps. That's the ideal case.

[00:33:30]

I actually have been in that situation several times. And of course, there is a long tradition in philosophy of doing that. Right.

[00:33:35]

If you go back to the Symposium, which is one of the Platonic dialogues, the word symposium in ancient Greece indicated, in fact, exactly that kind of situation: friends getting together to argue while wine was being poured, of course, by slaves, because at the time there were slaves.

[00:33:53]

Now, there are a couple of interesting things about the Symposium. If you actually read the dialogue, it also explains how the dynamic worked. So food was served, but not too much, because that dulls your ability to concentrate and engage in conversation. Wine was poured, but not too much.

[00:34:11]

Enough that you can enjoy yourself. And it was watered down for the same reason: you don't want people to get boisterous, or so drunk that they stop understanding what other people are saying.

[00:34:21]

But the whole idea being that if arguments with which you disagree are presented in a friendly manner and in a non-threatening atmosphere, that's when they have the best chance of getting through, which is really another version of what you were saying, to some extent. So go out and drink with friends. I think it helps.

[00:34:41]

So this is a good opportunity to bring up a question by a commenter named One Day More. It's a two-part question. One Day More would like to hear, first, an example of an interaction each of us has had with someone in which we changed their mind, and then an interaction we've had in which someone else changed our mind. Do you want to go first?

[00:35:04]

Sure. My favorite story of myself changing my mind.

[00:35:09]

It has happened, contrary to what some people seem to think, more than once. But my favorite story is actually the result of an interaction with one of our previous guests on Rationally Speaking, Eugenie Scott. I met Genie several years ago when I was at the University of Tennessee, and I invited her to give a series of talks in conjunction with our Darwin Day at the time. And just before she came, I got really upset with her, because she had been instrumental in getting the National Association of Biology Teachers to change its definition of evolution, to take out words to the effect that evolution is an undirected and unintelligent process.

[00:35:58]

And I was very upset.

[00:36:00]

I said, what do you mean you're changing the definition of evolution? You are trying to appease intelligent design proponents. You're trying to be what today would be called an accommodationist; at the time the term wasn't around. I said, you know, you're making a fundamental mistake here. You're selling out for completely practical, pragmatic reasons. I understand the pragmatic reasons: you don't want to upset a lot of biology teachers who themselves are Christians or religious.

[00:36:25]

But by making that change, you are confusing things scientifically and confusing things philosophically. So we had this really interesting discussion when she came, and she did not move me a single inch during that visit. We continued our correspondence afterwards; we really like each other, I think I can say. And we kept in touch, and we saw each other a couple of other times. And then eventually I started reading more and more about what her point was, which was to draw the distinction between methodological and philosophical naturalism.

[00:37:01]

Naturalism, of course, is the idea that all there is in the world is natural processes. Right. And the distinction, as I understand it these days, and which was what Genie was trying to tell me at the time, is this: a philosophical naturalist is somebody who says, I don't think, for philosophical reasons, that there is anything outside of the natural world. A methodological naturalist, on the other hand, is somebody who acts as if there were no supernatural, but actually makes no particular commitment one way or the other.

[00:37:32]

And her idea was that science is methodologically committed to naturalism, but not philosophically, so that you can be a scientist or a science teacher and believe in whatever God you like to believe in. But when you go to the lab, you don't bring "and by the way, this may be a miracle" in as part of your explanations. That kind of explanation is out methodologically, because it's not useful: it doesn't lead to any testable hypotheses.

[00:37:55]

It doesn't further research in any way. So methodological naturalism does not commit you to philosophical naturalism. I thought about this over and over, and finally it hit me that she was absolutely right: that distinction is there, it's important, it's not just pragmatic, it is a conceptual distinction. And it's a distinction that I can live with. And, in fact, I can actually see the advantages, both in terms of the relationship between science and philosophy and in terms of, obviously, the practical aspects of teaching evolution.

[00:38:30]

So that is a major example where I changed my mind significantly; I completely flipped around my position.

[00:38:38]

Now, in terms of examples of my arguments changing somebody else's mind, I actually don't know how often that has happened, obviously, because people don't necessarily write back to me and say, hey, by the way, I changed my mind, you're right.

[00:38:52]

In fact, usually people write to me to tell me that I'm wrong, and in fact substantially and dramatically wrong. But it does happen. I started writing and giving public lectures for outreach purposes, to talk about science and a little philosophy to the general public, back in 1997, so it's been a significantly long time at this point. And I do fairly frequently get emails or letters, and sometimes I actually meet again people who I met many years ago, who have either read something that I wrote or who were present at a debate, for instance, that I did.

[00:39:32]

And these people write back to me years later and say, you know what, I was there that day. I heard your arguments; I didn't care for them at the time. But in fact, they kind of started resonating, and eventually they helped me change my mind. Typically, these letters come from former fundamentalist Christians who have come to be either agnostics or humanists or something like that. Now, I cannot claim, of course, and in fact they are not claiming, that my contribution was the only one that did the trick.

[00:40:04]

But that goes back to what we were saying earlier. I think that the most likely path for people to change their minds is, in fact, to be exposed to a particular position or point of view by more than one person, presented in more than one way. That's usually how it happens to me when I change my mind: I get the same idea presented in a variety of ways over a period of time. But these reports are pretty consistent.

[00:40:28]

I mean, these kinds of changes that I'm made aware of happen frequently enough that it sort of makes me think that we're not wasting our time doing, you know, the blog.

[00:40:38]

So either you planted the seed in their head or you were the one who watered it, or you're the one who, you know, fertilized it or something.

[00:40:44]

But either way, you made a contribution to, if you want to stick with the botanical metaphor, their conversion? I don't know where I'm going with that metaphor. Yes, the botanical metaphor has some limitations, but yes, that's the idea.

[00:40:55]

My metaphors get away from me sometimes.

[00:40:57]

OK, so for my example, I'll start with an argument that I have used to successfully change other people's minds.

[00:41:07]

I've actually made this argument to several people recently, and successfully gotten them on the spot to admit that they were wrong, which is, you know, not too common.

[00:41:18]

So the argument is that the person I'm talking to can't consistently be simultaneously against bestiality and not against eating meat.

[00:41:36]

So the way this conversation usually goes is, as I said earlier, sometimes it's interesting to be inside your mind, but go ahead.

[00:41:45]

Let me explain, though. I may not have a hoity-toity example like methodological naturalism, but I can defend myself. That's perfectly fine. Go ahead.

[00:41:54]

So what usually happens, and there's a lot of regularity in these conversations, is that I'll ask them first whether bestiality is morally wrong, and they'll say yes, and I'll ask them why. And assuming they're a reasonably intelligent person,

[00:42:10]

They're not going to fall back on something like, well, it's just gross.

[00:42:13]

they usually give a harm-based argument: that, well, the animal can't consent to the sexual act, and therefore, you know, it's rape. Which I'm not disputing.

[00:42:24]

But then I ask them, OK, so do you eat meat? And so then we start talking about eating meat, and, well, you know, obviously the animal can't really consent to you killing it and eating it either.

[00:42:36]

And if it could, it seems unlikely that it would.

[00:42:39]

So so I have a pretty good success rate at getting people to acknowledge the inconsistency.

[00:42:45]

Usually the way they resolve it is not by saying, well, OK, bestiality is morally acceptable.

[00:42:51]

It's by saying, yeah, OK, so I can see a moral problem with eating meat.

[00:42:55]

But, you know, I just live with that. I was just about to say, if you'd just convinced me that bestiality is morally acceptable, you'd be on your own there. Well, I might not actually have had very strong opinions about this, but that was not my argument. Right. Go ahead.

[00:43:09]

OK, so the other part of the question was an argument that changed my mind recently.

[00:43:14]

There was a blog post that I wrote a few months ago called Truth from Fiction, in which I argued that it's not possible to learn anything true about the external world that you didn't already know from reading fiction.

[00:43:34]

And I had a lot of conversations with people during the time that this post was up and following the post.

[00:43:41]

And I did end up concluding that I was wrong in one way.

[00:43:47]

So I now think that it is possible to learn something from reading fiction that you didn't already know and that is pretty decisively true, in the same way that a philosophical thought experiment can accomplish that.

[00:44:04]

So the form that I think this takes is in showing you something that you actually believe but didn't realize you believed, by showing you sort of the logical consequences of other things that you believe, or by showing you an inconsistency in something that you believe, through fiction. And I do think that this is possible.

[00:44:24]

So the example that I like best is the book Frankenstein, by Mary Shelley.

[00:44:32]

And I think that for someone who wonders what the meaning or purpose of his life is, and who thinks, if only I knew why I was put here on this earth, then that would give me the sense of meaning and purpose in my life that I'm craving, that I'm missing,

[00:44:52]

reading the book Frankenstein could actually make that person realize that that's not, in fact, true. Because in the book, when Frankenstein's monster discovers the reason for his existence, that he was created by the scientist for the scientist's own personal purposes, he does not feel like, oh, now I have this satisfying, deep sense of meaning in my life.

[00:45:13]

He's angry and horrified, and this doesn't actually make him feel like his life has meaning.

[00:45:18]

And so this is not an argument that this is necessarily what would happen if you found out why you were put on the Earth.

[00:45:24]

But at least it makes the reader realize that it's not necessarily the case that getting the explanation for your existence would give you that sense of meaning.

[00:45:34]

So that's something valuable. That's a new insight about yourself, but still a new, true insight from reading fiction.

[00:45:43]

And by the way, for our listeners' benefit, the post you're talking about, Truth from Fiction, was published on June 18, 2010, if they want to go and check it out. OK, let's move on to a question from Andres Lopez.

[00:46:01]

He asks, and this is in relation to a pick that Massimo posted, I think, a few weeks ago: what is your take on Samir Okasha's (he's a philosopher of science) argument about the problem of induction? Was it successful? How has it been received among philosophers?

[00:46:19]

So let me lay out the problem of induction for our listeners who don't know it.

[00:46:24]

And then Massimo, you can actually answer the question. I gave myself the easy job. Fair enough, in this case. Right.

[00:46:31]

So induction, the process of induction, is more or less this: we form our expectations of the future based on empirical evidence from the past. So you look out the window, you see snow, and you expect the snow to be cold, because in the past, when you've touched snow, it's been cold.

[00:46:49]

But why should snow necessarily be cold now just because it was cold before?

[00:46:54]

We're implicitly assuming a general uniformity of nature, that things will continue to be the way they have been, in the absence of any reason why they shouldn't.

[00:47:07]

And so can we rationally defend that assumption?

[00:47:10]

Well, the obvious response is that it's worked in the past, this assumption. But of course, that's a circular argument: we're using inductive reasoning to try to rationally defend inductive reasoning. Right.

[00:47:23]

So now what? This problem has perplexed philosophers for hundreds of years, since Hume brought it up in An Enquiry Concerning Human Understanding.

[00:47:33]

So this philosopher of science, Okasha, claims to be able to resolve it. Right.

[00:47:38]

So people who are interested in reading Okasha's actual article should go to the Rationally Speaking entry for September 5, 2010; they'll find a link there. OK. So Samir Okasha is, in my opinion, one of the most brilliant philosophers of biology active today. I don't think he actually solved the problem of induction, but that article has a series of stunning insights that are really worth considering. So the first thing that Samir says there is that the problem with the problem of induction is that there is no such thing as induction, which is something that always struck me as, in fact, correct.

[00:48:17]

It's just that I couldn't articulate it in the way in which Samir did.

[00:48:21]

So his idea is that philosophers often define induction as any kind of reasoning that is not deductive. So they define it by exclusion, essentially.

[00:48:32]

So deduction, of course, as we all know, is the kind of thing that was invented, or at least formalized, by Aristotle, and the simplest kind of deductive reasoning is the syllogism.

[00:48:43]

Right. So if all men are mortal and Socrates is a man, then Socrates is mortal. That's a formal kind of reasoning that is called deductive reasoning: the conclusions are necessarily true if the premises are true. When you analyze deductive reasoning, you can either attack the premises, that is, deny that one of the premises is true, or you can look at the structure of the argument and see if, in fact, the argument is valid.

[00:49:07]

That is, if it is constructed correctly. This is sort of elementary logic. Deductive reasoning is at the basis of mathematics and logic. The entire structure of mathematics and logic works by deduction, not by induction.
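For listeners who like to see things in code, the structure of that syllogism can be sketched with simple set operations. This is just an illustration of the two premises and the conclusion; the names and sets are our own invention, not from the episode:

```python
# Premise 1: all men are mortal (the set of men is a subset of mortals).
# Premise 2: Socrates is a man.
# Conclusion: Socrates is mortal, which follows necessarily.
men = {"Socrates", "Plato"}
mortals = men | {"Fido"}  # every man is mortal; some mortals are not men

assert men <= mortals        # premise 1 holds
assert "Socrates" in men     # premise 2 holds
assert "Socrates" in mortals # the conclusion cannot fail if the premises hold
print("valid")
```

The point of the sketch is that validity is structural: as long as the subset and membership premises are true, the conclusion is guaranteed, whatever the sets contain.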

[00:49:22]

Science, the idea is, works by induction. The first person who famously pointed that out was Francis Bacon, who in fact wrote a book called The New Organon. The Organon was Aristotle's book on how science works, and The New Organon, of course, was Francis Bacon's way of telling Aristotle that he was full of it, and that there were new ideas, a new kid in town.

[00:49:46]

Now that guy needs to be taken down a peg. Exactly.

[00:49:49]

Now, Bacon used the word induction basically to identify the kind of reasoning that underpins scientific analysis.

[00:50:02]

The problem is that, in fact, induction is a large family of quite different types of reasoning. There are at least three that I can bring up. For instance, the one that Bacon spent most of his time talking about is enumerative induction. And enumerative induction is the kind of induction you were talking about earlier. That is, if I wake up every day and I see the sun coming up, and I've done that for 200 days, and then for 400 days, and then for 500 days, then it is reasonable to assume that on the 501st day the same thing is going to happen, OK?

[00:50:37]

That conclusion is the result of enumerative induction. Now, the savvy listeners of Rationally Speaking might have noticed that no scientist actually uses that kind of induction. I mean, we don't go around in science doing that sort of thing. We don't count the number of instances of one type and then say, by the way, instance N plus one is going to be of the same kind. So enumerative induction has a very, very limited scope in science.

[00:51:02]

Enumerative induction is essentially the equivalent of extrapolation in statistics. And extrapolation is always very dangerous, because you're making the assumption, as you correctly pointed out, that the same function that you discovered explains the data that you currently have also holds outside of the domain of the data that you currently have. Right. And you have no particular reason to believe that that is the case. So in that sense, Hume is right. But Okasha points out that there are many other different kinds of induction.
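To make the danger of extrapolation concrete, here is a small toy example of our own (not from the episode): a straight line fitted to data that actually come from a quadratic law looks fine inside the data's domain and goes badly wrong outside it.

```python
# Toy illustration: fit a line to data generated by a quadratic "law",
# then compare its error inside and outside the observed domain.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = x ** 2                               # the true underlying relationship
slope, intercept = np.polyfit(x, y, 1)   # but we fit a straight line

inside = slope * 2.5 + intercept         # prediction within the data's range
outside = slope * 10.0 + intercept       # prediction far beyond it

print(abs(inside - 2.5 ** 2))   # small error inside the domain (about 0.25)
print(abs(outside - 10 ** 2))   # huge error outside it (about 71)
```

The fit is perfectly good for interpolation, but nothing in the data licenses the assumption that the same function holds beyond the observed range, which is exactly the uniformity assumption Hume was pointing at.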

[00:51:32]

Let me just briefly introduce two. One is called strong inference. Strong inference is a situation where you have a small number of alternative possibilities and the data clearly and decisively decide between those possibilities. A particular example of this in biology was in the 1950s, when people were trying to figure out the structure of DNA. At some point, basically, the data were compatible with only two different kinds of structures: either DNA had two strands or three.

[00:52:05]

And as it turns out, there were two major labs that were in competition to figure out the structure. One of them, of course, Watson and Crick's, famously arrived at the right conclusion.

[00:52:15]

That was an example of strong inference, because it was a situation where the data were clearly and obviously in favor of one hypothesis and clearly and obviously against the only alternative hypothesis that had remained in play. That's called strong inference. It is a type of induction, but for the life of me, I can't see what it has in common with enumerative induction, other than the name induction; they're really different kinds of reasoning. Right. The third one is actually the one that most scientists use, because strong inference is actually not very common.

[00:52:49]

It can be applied only in very specific situations. The most common type of reasoning in science is actually what is called abduction, and abduction is also known as inference to the best explanation. This is pretty much what Sherlock Holmes does when he solves a mystery. So what you have is, you look at the data, you look at the evidence, and ideally the evidence is strongly pointing in one direction, to one particular answer, and strongly pointing away from all the other reasonable alternatives.

[00:53:22]

You have no guarantee that the direction in which the evidence is pointing is, in fact, the correct one. But you are justified in arriving at the provisional conclusion that, you know, Professor Moriarty did it, because all the evidence goes in that direction. It is perfectly logically possible that there are other alternatives that you did not consider. It is perfectly possible that the evidence was planted, and that there are other explanations for why the inference actually failed.

[00:53:50]

But the most rational thing to do is, in fact to follow the data where they lead and provisionally accept that conclusion.
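As a rough sketch of how inference to the best explanation can be framed (a toy example of our own; the hypotheses and evidence are invented), one can rank hypotheses by how much of the evidence each accounts for and provisionally accept the front-runner:

```python
# Toy abduction: provisionally accept the hypothesis that accounts for
# the most pieces of evidence. The conclusion stays provisional, since
# new evidence can change the ranking.
evidence = {"fingerprints", "motive", "opportunity"}

# What each (invented) hypothesis would explain:
hypotheses = {
    "Moriarty did it": {"fingerprints", "motive", "opportunity"},
    "the butler did it": {"opportunity"},
    "an accident": set(),
}

def best_explanation(hypotheses, evidence):
    # Score each hypothesis by how many observed facts it accounts for.
    return max(hypotheses, key=lambda h: len(hypotheses[h] & evidence))

print(best_explanation(hypotheses, evidence))  # Moriarty did it
```

Note that nothing here proves the winning hypothesis true; it only identifies the explanation the current evidence favors most, which is exactly the provisional character of abduction described above.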

[00:53:56]

That is called abduction, which is a strange term, actually, because it brings up images of aliens abducting people from the Midwest and doing inappropriate things to them. Now I don't want to be in your head.

[00:54:08]

Well, you know, we were talking about that earlier; I figured I'd give my own example.

[00:54:14]

So inference to the best explanation is, in fact, the most common type of induction in science. Now, Okasha points out that that sort of reasoning isn't really threatened by anything like what Hume was objecting to, so that at best, Hume's objection becomes an objection to enumerative induction. But enumerative induction is, in fact, the kind of induction that we rarely, if ever, use in science. So in that sense, I think he's got some really interesting points.

[00:54:44]

The major one, as I said, to summarize, is that there actually is more than one type of induction, and the different types are very, very different from each other in nature.

[00:54:54]

Now, why did I say, however, that I don't think Okasha did solve the problem of induction? Well, because there is still something that Hume said that underlies all of the scientific enterprise anyway.

[00:55:06]

And that is the assumption of the uniformity of nature. Right.

[00:55:12]

For science to work, we have to assume that whatever you think of as the laws of nature, or the patterns in nature, or the mechanisms in nature, always work in the same way: that they are not capricious, that they don't change from one day to another. Because if they did, then there is no science you could do about it. You wouldn't be able to make predictions, and you wouldn't be able to account for mechanisms, because the mechanisms themselves would be changing from day to day or from year to year.

[00:55:40]

So the assumption of the uniformity of nature is, in fact, I think, underlying all of science, and there is really no answer to it.

[00:55:48]

It's just one of those things that you can basically turn into conditional reasoning. That is, you can say: if nature is, in fact, reasonably uniform within the domain of certain kinds of processes, then I can do science in that domain. And if nature turns out not to be uniform, and to change, apparently, from day to day, then I can't do science in it.

[00:56:12]

Yeah, you know, the problem of induction used to really bug me, and finally I threw up my hands and said, this is something we have to accept as a starting premise, just like conditioning on me not being completely insane. Right, then.

[00:56:25]

Yeah, exactly.

[00:56:28]

Let's move on to another practical question, from HGC. HGC asked: do you have a favorite book or books, or an online site, or a seminar series on CD, that could help him attain a greater depth of knowledge about philosophy, to help him assimilate what he hears on Rationally Speaking?

[00:56:52]

What do you think?

[00:56:53]

Well, there are a couple. Of course, we could point them to our own resources. One of the links from Rationally Speaking is the Five Minute Philosopher video series, which I put out on YouTube. But that's not in-depth, as the title would imply.

[00:57:05]

They're only five minutes long, and people might want something more than that.

[00:57:09]

Then there are two resources that people can use. The Internet Encyclopedia of Philosophy and the Stanford Encyclopedia of Philosophy.

[00:57:17]

They're both very good, they're both very reliable, and they're both written by expert philosophers, by professionals. But the Internet Encyclopedia of Philosophy is aimed at an intermediate level.

[00:57:26]

So it's for the educated public, but not necessarily for professional philosophers. The Stanford Encyclopedia is at a higher level. You don't necessarily need a philosophy degree to understand its entries: a reasonably well-read person can follow most of the entries in the Stanford. But they're definitely the best you can find out there before you get to actual, you know, professional-level textbooks in philosophy. So I think those are two of the best resources that I can point to.

[00:57:56]

If people are interested in a third level, that is, getting a taste at least of what professional philosophers do, then the website to go to is PhilPapers. That's actually a huge, much-used database, where professional philosophers put up their own papers, with links to updates on their own websites. You get an idea of what people are thinking about, what sort of discussions are going on in professional philosophical circles. And you can actually customize your own profile so that the system alerts you only about, for instance, what's going on in ethics, or in philosophy of mind, or in philosophy of science.

[00:58:36]

So those are the three things: the Internet Encyclopedia of Science, I'm sorry, of Philosophy; the Stanford Encyclopedia of Philosophy; and PhilPapers, in order of increasing sophistication. And we'll post the links to these resources on the podcast website, rationallyspeakingpodcast.org.

[00:58:55]

For my part, when I was thinking about this question, I was thinking less about encyclopedias of philosophy and more about what books really shaped my views on philosophy.

[00:59:09]

So I think Massimo might agree with me about An Enquiry Concerning Human Understanding, by Hume.

[00:59:16]

I think that's a good, really solid introduction to philosophical thinking that I can actually endorse, which I can't say about a lot of philosophical classics.

[00:59:29]

I also really liked, and I know Massimo won't like this, a book called Language, Truth and Logic.

[00:59:34]

It was one of the early logical positivist manifestos, essentially; sort of an explanation of logical positivism.

[00:59:45]

It's very readable, very clear. It was written by A.J. Ayer, and it really lays out what I think are some of the major problems with philosophical thinking. It was the logical positivists' explanation for why so much philosophy over the centuries has been misguided. Now, there are some fundamental problems with logical positivist philosophy, so after you read the book, which is short, you can go online and look up some critiques of logical positivism, which add nuance to Ayer's argument.

[01:00:21]

But I think it provides a really useful first start in thinking about how to dissolve philosophical questions that were poorly posed to begin with. Just a couple of other introductory books: I like Mortal Questions, a series of essays by Thomas Nagel, also very readable and fun, and Practical Ethics, by Peter Singer.

[01:00:43]

We were talking about utilitarianism before; Singer is the famous utilitarian, and that's also a very readable book about ethics.

[01:00:50]

Well, if we're talking about books, then before we close, I do have a couple of suggestions as well. One, of course, is a classic: Bertrand Russell's The Problems of Philosophy, also a very short book. It really gives you a very good feeling for what it means to think philosophically about something. The other one is a book that I have actually used in introductory classes on philosophy. It's a lot of fun to read.

[01:01:16]

It's edited by Lee Bowie and a couple of other people, and it's called Twenty Questions: An Introduction to Philosophy. And it's exactly what it sounds like: a panorama of the entire philosophical landscape based on twenty major questions, like, is there a God? What's the difference between right and wrong? And so on and so forth: what is beauty, that sort of thing. Each chapter features a short introduction by the editors of the book, to lay the ground,

[01:01:45]

and then there are excerpts from some major philosophers of the past and what they had to say about it, at a very approachable level.

[01:01:53]

It gives you a very, very good introduction to what kinds of problems philosophers tend to be concerned with.

[01:01:59]

So unfortunately, we're running out of time for this episode.

[01:02:03]

So before we wrap up, I just wanted to remind our listeners that the Rationally Speaking podcast is sponsored by the New York City Skeptics, and I encourage you to go to the website, nycskeptics.org, check out all of our resources, and think about becoming a member.

[01:02:23]

This now concludes another episode of rationally speaking. Join us next time for more explorations on the borderlands between reason and nonsense.

[01:02:39]

The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollak and recorded in the heart of Greenwich Village, New York. Our theme, Truth, by Todd Rundgren, is used by permission. Thank you for listening.