[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I am your host, Massimo Pigliucci. And with me, as always, is my co-host, Julia Galef. So, Julia, what are we talking about today?

[00:00:49]

Well, Massimo, today is our most important episode ever, because this is the episode in which we justify this entire endeavor. Today's topic is: why is speaking rationally a worthwhile goal anyway?

[00:01:01]

Some people argue that irrationality can make us happier, at least in certain situations, and other people feel that rational is synonymous with cold, soulless and dispassionate.

[00:01:11]

In other words, not human. So today we're going to ask, are there downsides to being rational? And if so, are they necessarily outweighed by the upsides?

[00:01:20]

Good question. Let's start by dispelling this myth that rationality is necessarily opposed to passion.

[00:01:28]

This actually has a long history. It goes all the way back at least to Plato, and to some extent to Aristotle, who made the argument that human beings are the rational animal, and that therefore they should be characterized by, or emphasize, their rationality over other aspects of our nature, including the emotions.

[00:01:50]

But why would these philosophers think that being rational would make you less emotional? I just don't get it.

[00:01:56]

Well, the idea was that if the emotions become too strong — in other words, if they're not controlled — then they'll lead you to do things that you're going to regret, things that are not in your long-term interest. And of course, if somebody, or something, has to control the emotions, then who is going to do that?

[00:02:15]

And the idea was that the faculty of reason, being the highest and the most developed in humans — at least that's what Plato thought — should take over and essentially be in control.

[00:02:27]

Hmm. So yours and my favorite philosopher, David Hume, is famous for establishing a relationship between reason and emotion. He said: "Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them." That's a widely quoted sentiment, but I'm not totally clear on what it means.

[00:02:47]

How would you interpret that, Massimo?

[00:02:49]

Well, Hume was the first philosopher who actually abandoned that tradition of ultra-rationalism that goes back to Plato.

[00:02:57]

As I said earlier, he was the first one who actually questioned this whole idea.

[00:03:00]

And he said: well, wait a minute. In fact, one of the most important things about being human is that we have such strong emotions, that we care about things. And what that quote in particular can be read to say is something along the lines of: look, unless you actually care for something, it doesn't matter how many reasons one can possibly come up with to do something. Unless you care, unless you have an emotional attachment to something, you're simply not going to do it.

[00:03:27]

So this is the first time that a philosopher expressly said something along the lines of: look, rationality by itself is not enough.

[00:03:35]

In fact, he went as far as saying that rationality essentially is a means to an end, but the end is determined by the emotions.

[00:03:46]

Say you want to become something — a sports player, or an athlete, or a philosopher, or something else.

[00:03:54]

And then you use reason to guide you through the best path to get there. But the reason you want to go there has nothing to do with rationality.

[00:04:04]

It's because you care about it.

[00:04:05]

In other words, it's a result of your emotional background, so to speak.

[00:04:11]

Right. And a number of our commenters actually touched on that theme. Kostas said that ultimately there's no completely rational reason to take any action whatsoever — so it sounds like Hume is on the same page there. That's right.

[00:04:23]

I mean, the classic example is very simple, right? So if I get up and go to the refrigerator and take a beer out of the refrigerator, what's the reason for doing that?

[00:04:33]

Well, the reason for doing this is, presumably, because I'm thirsty, or because I want to have a good time with friends while watching a sports event or something. But the motivation essentially is not reason, because there are many other things that I could reasonably do instead. There is no particular reason for me to get up and get a beer as opposed to something else. The reason I do it is because, in fact, I have an underlying emotional need that is going to be satisfied by that particular action.

[00:05:02]

Then reason tells me that the beer I want is actually in the refrigerator, that I have to get up at a particular point and do certain things to actually get there. Right. So reason becomes a tool. And that idea is, one can argue, a little exaggerated in the opposite direction, to counterbalance Plato.

[00:05:22]

You know, Plato was the guy who said reason has to control things. Hume almost seems to say the opposite. What he seems to me to be saying is that the emotions actually control the whole show.

[00:05:32]

Reason is only a means to an end. Modern neurobiology, and I think a lot of modern philosophers, would try to strike a middle ground between these two. For instance, there is work in neurobiology done by Antonio Damasio, who has written several books. The one that I recommend is called The Feeling of What Happens, and it is about how we get consciousness —

[00:05:56]

How we get conscious feelings. And in that book, and in a couple of others that he wrote, Damasio argues that a human being has to have a balance between the reasoning part and the emotional part. And he shows that neurologically, the seat of reason, more or less, is the frontal lobes — the frontal area of the brain.

[00:06:16]

The seat, at least in part, of the emotions is the amygdala — these two little structures at the base of our brain.

[00:06:24]

And what neurobiology shows is that the two are very deeply interconnected. There are a lot of neurons that go from the frontal lobes into the amygdala and vice versa. So there's this constant feedback back and forth between the two areas.

[00:06:38]

And we know what happens if that feedback is interrupted: there are people who have a nonfunctional amygdala because of accident or disease, for instance, and that breaks the equilibrium.

[00:06:49]

And the resulting human being is not anything you really want to be: a hyper-rational person who, however, doesn't care about anything.

[00:06:58]

So, Massimo, I'm really interested in this idea of rationality being a tool that we use to achieve our ends, because I think it's important to think about this when we're deciding whether we should actually try to make other people more rational. I think you and I are both really big on promoting rationality and critical thinking, but I have the sense that we're actually doing it for different reasons.

[00:07:19]

So for my part, I would love to be able to say that the reason I promote rationality is that I think it's for the good of the world, that I'm trying to help society. But honestly, if I really examine my own motivations, I do it because irrationality bugs me, frankly, and it's emotionally satisfying for me to try to combat irrationality.

[00:07:40]

Now, I do think that on balance, if I had to guess, I'd say the net effect of this endeavor is good. But that's more of a lucky coincidence.

[00:07:50]

But that's a good example of what I was saying. Right.

[00:07:52]

So I think that we can both make a rational, reasoned argument that a lot of problems in the world, both for individuals and for society at large, derive from the fact that, frankly, there isn't enough critical thinking going around.

[00:08:07]

There isn't enough use of reason and rationality in the broader sense, and so we can make the argument that it would be better for human beings, and for society at large, to use more rationality.

[00:08:21]

But your example also points out that, in fact, the reason we're doing this podcast, for instance, or the reason we write for the blog, or the reason we do these kinds of outreach things, is because we care, right?

[00:08:32]

There is no reason for us to care, because we're not necessarily directly, personally affected by the relatively low degree of rationality in society at large.

[00:08:42]

I mean, if you're shielded enough from that, then you really don't have any personal reason to do it.

[00:08:49]

You do it because you have a passion. You do it because you care. You do it, as you put it a minute ago, because irrationality really gets on your nerves. Right.

[00:08:57]

But in my case, I don't see it as a moral duty. Again, I do it because I care, but I don't think I'm doing it out of any sort of moral obligation, whereas I get the sense that for you, you do feel that we have a moral obligation to promote the truth.

[00:09:11]

Yes, I do think that there is a moral obligation to promote the truth. And we should probably have a whole separate episode about moral obligations and where they come from and so on.

[00:09:19]

But even if you don't, the thing is, you can approach the problem from a purely pragmatic perspective, which I suspect is what you're doing.

[00:09:28]

You say: well, this would be a better world if a certain degree of rationality were more common among the population at large, and therefore you act on it as a utilitarian — essentially as somebody who says, well, this is just going to have good consequences.

[00:09:46]

But again, although that is the reason you do it, in fact the motivation — the ultimate motivation to spend your time doing what you're doing — comes from the fact that you have a passion for it, that you care at an emotional level for it. If it were just a matter of somebody telling you, well, this is how you should spend your time, because these are the reasons why you should do it —

[00:10:06]

My guess is you probably wouldn't, because there are much better reasons, for instance, for the two of us to do other things than this podcast at this particular moment. I mean, I could easily think of rational arguments for why, if we care about humanity at large, we should be in Africa volunteering our work, helping people who are starving, or something like that. You can easily come up with those sorts of reasons.

[00:10:30]

But they wouldn't strike enough of an emotional chord with us to actually motivate us to do that sort of thing. OK, so the easy examples are the ones in which my urge to make the world more rational has clearly good consequences — like, say, combating the anti-vaccination people. I think that has really unambiguously good consequences. But the harder cases are the ones in which we're actually making someone less happy by trying to make them see the truth.

[00:11:02]

So I'm interested in your claim that it's a moral obligation to promote rationality. Right.

[00:11:08]

What if that conflicts with someone's happiness?

[00:11:11]

That's the nature of moral obligation. Yes — moral obligations are, in fact, largely, although not entirely, independent of someone's happiness.

[00:11:19]

If you knew that you were going to make someone miserable for the rest of their life by disabusing them of a false belief, would you still do it?

[00:11:27]

I think there is a limit. No, I don't have that kind of clear-cut opinion about it, but I think there is a continuum there.

[00:11:35]

So, for instance, this is a decent example of what philosophers often refer to as the red pill and blue pill problem, which of course refers to the movie The Matrix. If you remember the movie, there was this situation where the reality was that human beings were essentially slaves being used by super-intelligent computers. But these computers basically fed human beings this illusion that life was just fine and there was no problem with it.

[00:12:04]

All right.

[00:12:05]

So at the crucial point in the movie, the main character is offered this choice: either take the blue pill, which would essentially erase his memories of the real situation — he would go back to the fantasy world and live his life in complete ignorance of the fact that he's actually living a falsehood — or take the red pill and plunge into reality as it in fact is, which is much harsher.

[00:12:29]

And he would have to fight the machines and so on and so forth. Now, we know that, of course, the main character in question does pick the red pill.

[00:12:36]

Otherwise, there wouldn't be any movie, right? Had he picked the blue pill, that would have been the end of the movie. That's it.

[00:12:41]

But I think a philosopher would make the argument that taking the red pill is the right thing to do.

[00:12:47]

Broadly speaking, yes. But what about force-feeding someone else the red pill? Or even just offering it to them — letting them know of its existence and offering them the choice?

[00:13:00]

Good point. So there is a distinction there, right? One thing is to force-feed somebody, and another thing is to offer the possibility. Personally, at least, I would draw the line at force-feeding. So no, I wouldn't impose a choice of that kind on somebody else, but I would definitely tell them — I think it would be my moral duty to tell them of that choice. Which is why I think it is a good idea, for instance, for me to offer courses in critical thinking, and allow people who want to take them to take them.

[00:13:28]

But the problem is, once you've really explained the rational truth and really made someone understand it, so that they actually have that choice, then there's kind of no turning back. I don't feel like someone can fully understand the truth and then just decide to go back to the way they were before — which is why it's such an ethically fraught decision.

[00:13:48]

And in fact, there is an interesting question there, which is: do we actually control our beliefs? The idea that a lot of philosophers have is that beliefs are beyond our control, meaning that you cannot decide to believe something once your brain has done its largely subconscious analysis of the situation and reached a particular conclusion. You can be persuaded of a different conclusion eventually, by argument or evidence, but it's not like you can will the belief away. So once you know, for instance, that it's not real medicine, that it is a placebo, then very likely —

[00:14:25]

I would guess the placebo effect is going to be gone.

[00:14:28]

Well, that's a great example. What about someone who believes that homeopathic remedies help them get over their cold? Or how about someone with chronic pain who believes that homeopathy helps resolve their pain?

[00:14:41]

Do we treat that as an exception to the overall rule that promoting rationality is the moral thing to do?

[00:14:47]

Right, that's a great question. And in fact, I do know that that discussion is going on in the medical community — not just about homeopathy in particular, but about the placebo effect in general. Should doctors actually, willfully, use the placebo effect to ameliorate people's conditions? If we put it in terms of the placebo effect —

[00:15:07]

I don't think that's as much of a problem as positively endorsing pseudoscience or pseudo-medicine. If I were to say to somebody, yeah, go ahead, because homeopathy works, I think that goes beyond just promoting a placebo effect — that is actually promoting pseudo-medicine. And pseudo-medicine can have harmful effects, because that person may, for instance, not use standard medicine, because he thinks homeopathy is going to work, and there may be serious health consequences for that. And more broadly, that sort of attitude reinforces this kind of loose, irrational thinking about medicine and health in general in society, and that has negative consequences.

[00:15:51]

So there is a line to be drawn there between the placebo effect per se — which I think is fine for medical doctors, within limits, to use — and the downright direct promotion of pseudo-medicine.

[00:16:03]

Mm hmm. OK, here's another example in which irrationality — holding false beliefs — might be in someone's best interest.

[00:16:09]

How about when the truth is extremely disturbing and there's nothing that you can actually do to change it? Like, well, one of our commenters, Angel, made the point that if we fully faced the truth that we're going to die and there's nothing we can do about it, it might just cause our brains to crash, essentially, and plunge into existential crises.

[00:16:36]

And that would interfere with our purpose as functional human beings.

[00:16:44]

Thank you. And that's why our brains developed these patches — essentially, beliefs in an afterlife. So Angel says we must remember that these silly irrationalities are there for a reason, before we try to remove these patches and bring people over to the side of logic.

[00:16:59]

That's right. So, for instance — again, this is a matter of degrees, right? Say I were faced with somebody who was really old, or a terminally ill patient, and that person has a fantasy about an afterlife and so on. Let's take a personal example: my grandmother. I would never try, and I never tried, to dissuade my grandmother from her belief in an afterlife, because it wouldn't do any good at this point.

[00:17:24]

It's too late.

[00:17:25]

But for a younger person, in whose case a critical analysis of that kind of belief would actually change his or her outlook for the rest of their life, I think it's much more open to debate whether we should try to do that or not — both for that individual, because that individual would, in fact, be living a very different life if he got rid of those beliefs...

[00:17:50]

And also, again, for society at large, because let's not forget that these kinds of irrational beliefs do have consequences at a societal level. They do foster other kinds of irrational beliefs, and sometimes they even foster violent action within society.

[00:18:05]

So I think that there is a continuum there. Again, there are situations where it probably isn't a good idea — and it's not even necessarily morally defensible — to interfere with other people's beliefs. But there are a lot of other cases where the case for action can be made.

[00:18:24]

An interesting side issue came up. I was at a party the other night and I was talking to someone about rationality, which seems to happen a lot. It kind of amazes me that anyone still invites me to parties.

[00:18:36]

I was going to ask what kind of parties you go to. Go ahead.

[00:18:39]

Yes, well, against all odds, they do invite me to parties.

[00:18:43]

And so I was talking to someone about this topic, and he said that if you're trying to convince someone that there's no afterlife, and you do a completely successful job of convincing them, then they can actually change the way they live their life, and maybe make the most of their life in a way that they wouldn't have if they had been counting on an afterlife coming later.

[00:19:10]

But let's say you don't really convince them — which is actually more likely. Let's say you've only succeeded in planting a little bit of doubt in their mind that wasn't there before.

[00:19:19]

But you haven't actually convinced them enough to the point that they'd change the way they actually live their life. Then, I think you could argue, you've made that person unambiguously worse off: they don't have the benefit of an actual change in their life, but they also no longer have the comfort of absolute certainty. Right.

[00:19:35]

But we don't have control, obviously, over to what degree people can be persuaded of one opinion or another, or what they're going to do with it.

[00:19:43]

So the way I see it is sort of the positive flip side of what you just said, which is: I am going to present my best argument for what I think is a truer belief, or a truer understanding of reality. And then — we're talking about adult human beings — it's the other person's privilege, and work, and duty to work things out for himself. And yes, people may come up with sort of hybrid beliefs that make only partial sense, and that may cause some problems, but it may also cause them to think about things more carefully and to engage in a quest, for years to come, about how to make sense of life.

[00:20:21]

Oh, Massimo, we could definitely talk about this for ten more episodes, but we have to wrap it up. You can all go to rationallyspeakingpodcast.org to comment on this and future topics. For now, we're moving on to the Rationally Speaking picks. Welcome back. Every episode, Julia and I pick a couple of our favorite books, movies, websites, or whatever tickles our rational fancy. Let's start with Julia's pick.

[00:21:04]

So my pick today is the list of paradoxes on Wikipedia. It's actually a really comprehensive and really interesting catalogue of paradoxes throughout the ages. And they've categorized them by type of paradox. So they've got all sorts of scientific paradoxes, which they then group into physics paradoxes, chemistry paradoxes and biology paradoxes. And then they've got paradoxes in math, and those are subdivided into paradoxes related to infinity and paradoxes related to recursion; logic paradoxes; religious paradoxes. It's great.

[00:21:38]

Definitely don't check it out unless you have at least a few hours and a few brain cells to kill. It is a great resource.

[00:21:45]

Of course, as you were noticing earlier, this is a site where the word paradox is used in sort of a broad sense, right?

[00:21:53]

OK, yes, that's true. Some of them are more unsolved mysteries. That's right. Yeah.

[00:21:59]

Science questions that have not yet been fully addressed, or something like that. Because technically speaking, of course, a paradox is something that belongs only to logic — and I guess to some extent to math, which I tend to think of as a branch of logic anyway. But take, for instance, what they categorize as biological paradoxes: the first one on the list is the French paradox, which they define this way — the observation that the French suffer a relatively low incidence of coronary heart disease despite having a diet relatively rich in saturated fats.

[00:22:30]

OK, yeah, that's not a paradox.

[00:22:31]

This is not a paradox. Actually, it's a very good biological question, but I'm sure somebody's actually done the research on it, and it probably has to do with other aspects of the diet, the environment, and possibly even the genetic makeup of the French population.

[00:22:48]

OK, OK, fair enough. So take-away lesson number one is that the word paradox tends to get tossed around a lot and extended beyond its original meaning. But some of the paradoxes on there are actual logical paradoxes — although, again, some of them have already been resolved, and were only paradoxical at the time in which they originated. Like some of the ancient philosophers thought it was paradoxical that in order to get from point A to point B, you have to pass through an infinite number of diminishingly small distances in between.

[00:23:22]

That was Zeno's paradox, which was paradoxical to the Greeks. But it's really not all that baffling once you understand the concept of a geometric sum, and the mathematical result that an infinite sum of diminishingly small quantities is actually a finite number, as opposed to infinity.
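[The geometric sum Julia refers to can be written out explicitly. As a sketch, taking the runner to cover half of the remaining distance at each step:

```latex
% Zeno's runner covers half the distance, then half of what remains, and so on:
\sum_{n=1}^{\infty} \frac{1}{2^{n}}
  = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1 .
% More generally, any geometric series with ratio |r| < 1 converges:
\sum_{n=0}^{\infty} a r^{n} = \frac{a}{1-r} .
% So infinitely many ever-smaller distances add up to a finite total.
```

With $a = \tfrac{1}{2}$ and $r = \tfrac{1}{2}$, the general formula gives exactly the total of 1 — the finite distance the runner actually covers.]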

[00:23:41]

We had to wait, however, all the way until Newton and Leibniz to figure that one out — it took some time. I mean, it's not surprising that the Greeks were, in fact, puzzled by that. One of my favorite paradoxes on this list — on this website, which is the Wikipedia list of paradoxes — is an actual paradox in logic, and an important one historically. It's called the Barber Paradox, and it was formulated by Bertrand Russell at the beginning of the 20th century.

[00:24:05]

And it goes something like this: a barber shaves all men who do not shave themselves, and no one else. So does the barber shave himself? There is no good answer to it because, of course, if he shaves himself, then he is in contradiction with the stipulation that the barber shaves only men who do not shave themselves. But on the other hand, if he doesn't shave himself, then by that same stipulation he does shave himself.
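[The contradiction Massimo walks through can be made precise in first-order notation; the predicate symbol $S$ below is just a label chosen for illustration:

```latex
% Let S(x, y) mean "x shaves y". The barber b is stipulated to satisfy
\forall x \,\bigl( S(b, x) \leftrightarrow \lnot S(x, x) \bigr) .
% Instantiating the universal quantifier at x = b yields
S(b, b) \leftrightarrow \lnot S(b, b) ,
% which is a contradiction: no such barber can exist.
```

Read as a theorem rather than a puzzle, the paradox simply proves that the stipulated barber cannot exist — which is how Russell used the analogous set-of-all-sets-not-containing-themselves to undermine naive set theory.]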

[00:24:30]

Right. So it is one of those cases where, in fact, there is a paradox — a situation where logic seems to have hit a wall. And this is one of the things that is interesting about paradoxes, because they do show, at the very least, that there is something incomplete about logical systems. In fact, as I said, the Barber Paradox is interesting historically, because it came up during the early twentieth-century quest for self-contained logical foundations for mathematics, which is what Bertrand Russell and others were after.

[00:25:02]

And in 1931, I believe, Gödel published his famous incompleteness theorem in logic and mathematics, which demonstrates that there is no such thing as a completely self-contained, self-justified logical system. In other words, there is no answer to some of these paradoxes.

[00:25:20]

You have to step outside of the particular logical system in which you formulated the paradox to find a solution. But in so doing, you're moving to a different kind of logic. So there is no solution to these kinds of paradoxes.

[00:25:34]

There is a limit to logic, in other words.

[00:25:36]

So I'd like to bring up my pick for this episode, which is a website called The Fallacy Files.

[00:25:44]

And The Fallacy Files is just what it sounds like.

[00:25:47]

It is a great collection of all sorts of information about logical fallacies. And my favorite part of the website is what they call the taxonomy of logical fallacies.

[00:26:00]

It's a really complex and interesting diagram that shows the relationships between the different kinds of fallacies and how they are connected to each other logically. And the main thing this taxonomy does is make very clear to the user the distinction between a formal and an informal logical fallacy.

[00:26:20]

What's an informal logical fallacy? Is it informal insofar as it goes around dressed in blue jeans instead of a tie?

[00:26:27]

Never mind. An informal fallacy is, in fact, a form of bad reasoning, but it is not, strictly speaking, in violation of a system of logical rules. A formal fallacy, on the other hand, is a construction — an argument — that directly violates the laws of logic.

[00:26:48]

An informal fallacy doesn't do that, but it is nevertheless an instance of bad reasoning.
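[A standard side-by-side pair illustrates the formal half of the distinction: modus ponens is a valid argument form, while "affirming the consequent", which looks superficially similar, is a formal fallacy no matter what $P$ and $Q$ stand for:

```latex
% Valid form (modus ponens):
P \to Q, \quad P \;\vdash\; Q
% Formal fallacy (affirming the consequent):
P \to Q, \quad Q \;\nvdash\; P
% e.g. "If it rained, the street is wet; the street is wet;
% therefore it rained" fails, since the street may be wet for other reasons.
```

The fallacy is formal because it can be detected from the shape of the argument alone, without knowing anything about its subject matter — exactly the property informal fallacies like the ad hominem lack.]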

[00:26:55]

OK, so let's pick an example. My favorite one here is the ad hominem attack, which is an informal fallacy, and it goes as follows.

[00:27:05]

The ad hominem attack is a situation where, instead of attacking the argument that somebody is making, you attack the person — you attack the character of the person — and thereby try to undermine his argument indirectly. So if you say to somebody that you shouldn't believe what he says because he's an atheist, or because he is a Christian, or something like that, you're making an ad hominem attack. You're not actually engaging the arguments that the person is making.

[00:27:35]

You are trying to undermine his credibility, essentially. This is done very often in politics, as I'm sure you're aware.

[00:27:40]

All right. Well, would it be an ad hominem attack to accuse a person of not having the relevant credentials to speak about a subject? Because surely the identity of the person is actually relevant to whether they're making a credible argument, right?

[00:27:53]

That is a very good point. And in fact, your example makes a distinction between when an ad hominem is a fallacy and when it's, in fact, a reasonable thing to do.

[00:28:04]

That is, if you're invoking an authority to settle a dispute. Say we're talking about evolution with a creationist, and you say: well, I don't actually understand the details of the arguments, but the overwhelming majority of biologists, who presumably do understand the arguments, think that evolution actually does happen. You are not, strictly speaking, committing a fallacy, because what you're doing is invoking a reasonable authority — just like we do in everyday life when we bring our car to a mechanic, or when we are sick and go to the doctor. We don't go to the mechanic when we're sick, and we don't bring our car to the doctor.

[00:28:42]

And the reason for that is that we make the reasonable assumption that, other things being equal, it is more likely that a mechanic is going to solve a mechanical problem and a doctor is going to solve a medical problem. Right. So in that case, we are in fact invoking an authority. Making an argument from authority is sort of the flip side of making an ad hominem attack, because it's sort of the opposite thing.

[00:29:04]

You're doing exactly the opposite, right? Instead of undermining somebody — not by addressing his argument, but by attacking his character — you are building an argument based on somebody's credentials. So it's not a fallacy if you do it that way. It is a fallacy if you make it airtight — if you make it an absolute statement that just because person X is an expert in that field, whatever he says goes and there are no possible exceptions. If you say that because he's a biologist, anything he says about evolution has to be right, then you are in fact committing a fallacy.

[00:29:41]

Right. So it seems like we have to make a distinction between the kinds of arguments that are bolstered by the credentials of the person saying them and the kinds of arguments that should stand on their own.

[00:29:49]

Because one of my pet peeves — something that I consider a fallacy — is when people make a philosophical argument and cite the philosopher, as if, because it was said by an expert in philosophy, it's more likely to be true, the same way a statement about biology made by an expert in biology is more likely to be true. But I feel like that doesn't apply to philosophy. A philosophical argument should stand on its own, and it's not made more credible by the fact that it was a famous philosopher who said it, as opposed to just some guy on the street, right?

[00:30:19]

Well, assuming there's no expertise he has that I don't — no information I'm missing. Aha. Right.

[00:30:24]

It seems to me that we will have to devote a separate episode, a separate podcast, to that particular topic, because the implication of what you're saying is that you don't seem to think that philosophy actually requires any particular expertise.

[00:30:34]

And I would beg to disagree. Yeah, guilty of that assumption.

[00:30:38]

In other words, any technical field requires a certain degree of expertise. However, you are on to something there. If instead of philosophy we use, say, theology, one can make the argument that, theology being about nothing — because I don't think gods exist — expertise in theology is equivalent to having expertise in, I don't know, astrology, for instance. One can say that there are astrological experts, but since astrology is entirely pseudoscientific, it doesn't really matter that there are experts.

[00:31:09]

It's expertise about nothing. So your point is well taken. It's just that I don't think it applies to philosophy.

[00:31:14]

I think it does apply to a lot of the pseudosciences, obviously. All right.

[00:31:18]

We will address this topic in a future podcast — I insist on it — but for now, we're out of time. Join us next time for more explorations on the borderlands between reason and nonsense.

[00:31:36]

The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, Truth, by Todd Rundgren, is used by permission. Thank you for listening.