Transcribe your podcast
[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Massimo Pigliucci, and with me, as always, is my co-host, Julia Galef. Julia, what are we going to talk about today?

[00:00:48]

Well, Massimo, today we are here with a live audience in New York City, at University Settlement on the Lower East Side. And we want to say hi. Hello to you, too. Definitely this wasn't taped, right? Yeah, we had to pay extra for the strong applause track on iTunes. So today we're going to be talking about Massimo's latest book, which has just come out. It's called Answers for Aristotle: How Science and Philosophy Can Lead Us to a More Meaningful Life.

[00:01:22]

So I'm very excited about this, because if you had asked me to summarize the dominant thread of all of the Rationally Speaking podcasts that I've been recording for the last, what's it been, two and a half years now? Two and a half years, seventy episodes or something like that? Yeah. So, you know, there are a lot of things that I could say, like common threads that weave in and out of our various episodes.

[00:01:46]

But I think probably the top contender for a dominant theme, at least from Massimo's end of the podcast, would be that you need both science and philosophy, combined in an intelligent way, to answer the toughest questions about life, the universe, and everything. So when I saw the subtitle of his latest book, I was like, oh, wow, it's like our podcast, but in concentrated, direct, straightforward form. So I like to think that this book was really inspired by all of your conversations with me.

[00:02:14]

The content definitely was. And I noticed that I was acknowledged on the last page. Maybe you should have really put that at the front, though, so that as I was reading the book I would have this warm, positive feeling. Oh, I see. Too late. Yeah, that's the fault of the editor. Certainly, you're right: the content of the book does reflect a lot of the sort of topics that we have had conversations about over the last two and a half years.

[00:02:39]

However, the basic idea actually came essentially out of a joke. Out of a joke? Yeah. A few years ago, before moving to the City University of New York, I was at Stony Brook University, and I gave a very informal talk, the kind of thing you give in the evening with other students and colleagues over a beer. And it was about, in fact, science and philosophy of the good life.

[00:03:03]

And it was inspired by, and organized around, Monty Python songs and movie clips, in particular, of course, The Meaning of Life. And one of my postdocs at the time, Oliver Bosna, who is now a professor of ecology in Germany, said, you know, you really should write a book about this. And I said, no, I shouldn't. And then I thought about it: actually, I probably should. And that's how it came about.

[00:03:26]

I mean, the original outline, which of course got modified dramatically by the editor before we actually got to the book, pretty much reflected the outline of the talk. So, yeah, it came out of that "you should do this."

[00:03:39]

So we should give our listeners a sense of what it looks like to use science and philosophy to tackle difficult questions, in case they haven't been tuning in for the last two and a half years. So I thought, given that it is late October and we have an election coming up, we should start with politics, which is the subject of one of the particularly interesting chapters in the book, Massimo. How would you use science and philosophy to help us understand how we should think about politics?

[00:04:06]

Yeah, so that's a good example, not only because it's topical, obviously, given the time frame, but because it is in fact one of the best areas where you can combine philosophy and science. The science there is, of course, social science: political science, sociology, and so on and so forth. So you want to have data and empirical evidence about, let's say, the functioning of different political systems, the functioning of different ways of running elections.

[00:04:34]

A lot of people tend to think that the way they grew up voting is the way people vote. But as a matter of fact, there is a lot of variation across the world in the way in which people vote. A lot of people tend to think that the system, in this particular case the United States' two-party system, is sort of the natural way of doing things. But that's also not true.

[00:04:55]

Of course, there's all sorts of variation around the world. And we have data: decades and decades of experiments in different parts of the world, which social scientists and political scientists have analyzed. For instance, there is a really interesting paper quoted in the book that looks at the efficiency of different types of voting systems. And it turns out that the best voting system is something that is based on ranking candidates.

[00:05:24]

And so instead of voting for one person or another, you actually give a ranking: your first preference, second preference, third preference. And then, if people would otherwise need a second round, those preferences become weighted. And there are criteria according to which, if you want to maximize certain things, the social scientist can tell you, yes, that actually is the best way. For instance, you want to avoid runoff elections.

[00:05:49]

You want to avoid voting for the worst candidate, or voting for a third candidate because you're afraid of wasting your vote, that sort of stuff. It also turns out, incidentally, that historically people had tried these things: the Roman Senate actually held elections using a very similar system. So that's what you can learn from the science perspective. From the philosophy perspective, of course, there is an entire field, political philosophy, where people have been thinking and debating about what is the meaning of justice.
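[Editor's note: the ranked-preference scheme described here is close to instant-runoff voting. The following is a minimal illustrative sketch with invented ballots; the paper Massimo cites may use a different weighting rule.]

```python
from collections import Counter

def instant_runoff(ballots):
    """Elect a winner from ranked ballots (hypothetical example data).

    Each ballot lists candidates from most to least preferred.
    Candidates with the fewest first-place votes are eliminated,
    one per round, until someone holds a majority; this removes the
    need for a separate runoff election.
    """
    candidates = {c for b in ballots for c in b}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tally = Counter(
            next(c for c in b if c in candidates)
            for b in ballots
            if any(c in candidates for c in b)  # skip exhausted ballots
        )
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader
        candidates.discard(min(tally, key=tally.get))

ballots = [
    ["A", "B", "C"], ["A", "C", "B"],
    ["B", "C", "A"], ["B", "C", "A"],
    ["C", "B", "A"],
]
# Round 1 gives no majority (A:2, B:2, C:1); C is eliminated,
# C's ballot transfers to B, and B wins 3-2.
print(instant_runoff(ballots))  # → B
```

Note how A would have tied under simple plurality; the transferred second preference is what avoids the runoff.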

[00:06:22]

What is the best way to organize a state, and according to what criteria? Now, this goes back all the way to Plato's Republic, of course, which was in fact, in part, a treatise on how to organize the best state. I'm pretty sure we wouldn't want to do things the way Plato suggested. Well, it was a very stratified society, you know. The philosophers, of course, were in charge, and I agree with that; this is obviously the best option.

[00:06:51]

But it was a very stratified kind of society, where each class of people had their own roles. And it was pretty rigid, although it had some advantages compared to most contemporary societies at the time. For instance, Plato saw no reason why men and women should be doing different things. If women were good at doing philosophy, or whatever it is, or at being warriors, then that's what they should do.

[00:07:14]

Nonetheless, it wouldn't translate very well to democratic, Western types of societies. But that was the beginning. And then from there you get well over two thousand years of thinking about these kinds of things, culminating, I think, in books like A Theory of Justice by John Rawls, which is another classic of moral philosophy in the 20th century.

[00:07:34]

One of the other threads running through this book is the sheer degree to which things that we're not aware of can be influencing our most fundamental judgments about what we prefer, what's motivating us, or what our moral values are. And some of my favorite examples of this come from... actually, they're all throughout the book, but there are some great examples in the politics chapter. What kinds of things would you warn people about that might be influencing their political decision making, other than their conscious, reasoned choices about what the best outcomes are going to be?

[00:08:11]

So we're going into the area of cognitive biases. One of the things that Aristotle did get wrong was the idea that human beings are the rational animal. Modern research in cognitive psychology clearly shows that we're much better described as the rationalizing animal. We're very good at coming up with all sorts of reasons why we were right to begin with.

[00:08:34]

And it's a funny word, rationalizing: it doesn't mean making rational, it means the opposite. Exactly, like using the word "truthiness" for telling a lie. That's right. Now, what cognitive psychologists have discovered over the last several years is that human beings are prone to all sorts of confirmation biases, all sorts of ways of misinterpreting or reinterpreting information about the world to fit their own agenda, their own preconceived ideas.

[00:09:07]

And on the one hand, you might think that being aware of that is at least the first step: if you know that you're prone to misuse your reasoning abilities in a certain way, that's certainly the first step, although certainly not the only one, to improve things. But one of the nice things that I found, again in terms of the connection between science and philosophy, is that if you look at the major cognitive biases that cognitive scientists have discovered, they actually match pretty well with the logical fallacies that philosophers have been warning people about for literally millennia.

[00:09:43]

So, for instance, a lot of people have this tendency to go from observing two events following each other in time to drawing a causal connection between those two events. This is the basis of a lot of pseudoscientific belief. For instance: I vaccinate my child, and then a year later the child develops autism, and therefore the vaccination must have caused the autism. Right? That sort of causal thinking is actually not warranted by that sort of evidence.

[00:10:16]

It may be that there was a causal connection, but it may not be. I mean, after all, there is a perfect correlation, for instance, between my age and the expansion of the universe. But I'm not causing the expansion of the universe; rather, there is a common cause, which is the passing of time, that is affecting both. Now, that kind of mistake is, of course, well described in logic as one of the informal fallacies.
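[Editor's note: the age-versus-universe point is easy to make concrete. In this toy calculation, with invented numbers rather than real cosmology, two quantities that both grow with time correlate almost perfectly even though neither causes the other.]

```python
# Two series driven by the same common cause: the passing of time.
years = list(range(50))
my_age = [30 + t for t in years]                  # grows one unit per year
universe_scale = [1000 + 7.2 * t for t in years]  # arbitrary linear growth

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Correlation is ≈ 1.0 despite there being no causal link at all.
print(pearson(my_age, universe_scale))
```

The correlation is perfect precisely because both series are deterministic functions of the same variable, which is what "common cause" means here.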

[00:10:39]

This is the post hoc ergo propter hoc fallacy, which is fancy Latin for "after that, therefore because of that." And one of the things that I find interesting about this connection between the cognitive biases on the one hand and logical fallacies on the other is that I've seen a lot of social scientists recently being

[00:10:58]

skeptical of philosophy and logic, because they say, look, we're showing that people are actually not logical, they're not rational. To me, that means it's even more important to get training in critical thinking and awareness of logical fallacies, because what it shows is that it doesn't come naturally to human beings. Just like, let's say, probabilistic thinking doesn't come naturally to human beings. I mean, the gambler's fallacy and things like that. The reason why casinos make so much money is because we have all sorts of bad intuitive ideas about how probability works.

[00:11:32]

But that doesn't mean that you cannot take a course in probability and start learning about it, and figure out that it's a bad idea to bet against the house of a casino on a regular basis. Unless you get lucky; but if you do it in the long run, you are guaranteed to lose. And once you know that, presumably, unless you're self-destructive, you're not going to do that sort of thing. So I think that this discovery of cognitive biases, these inputs from the social sciences, are very important.
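[Editor's note: the "guaranteed to lose in the long run" point is the law of large numbers applied to a negative expected value. A quick simulation makes it vivid; even-money bets on red at European roulette are an illustrative choice here, not an example from the book.]

```python
import random

def average_roulette_result(n_bets, bet=1.0, seed=0):
    """Simulate n_bets even-money bets on red at European roulette
    (18 winning pockets out of 37) and return the average result per
    bet. Any casino game with a house edge behaves the same way."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_bets):
        total += bet if rng.random() < 18 / 37 else -bet
    return total / n_bets

# The house edge is 1/37, about 2.7% per unit bet. A lucky night is
# entirely possible, but the long-run average converges on a loss.
print(f"theoretical edge per bet: {-1 / 37:.4f}")
print(f"average over 1,000,000 bets: {average_roulette_result(1_000_000):.4f}")
```

Over a handful of bets the variance dominates and you may well come out ahead; over a million bets the average lands within a fraction of a percent of the theoretical edge, which is exactly the intuition people lack.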

[00:11:58]

But the lesson to draw is exactly the opposite of the one some people actually do draw from it. The lesson is that you need even more training, precisely because it doesn't come naturally. So instead of being a rational animal, we are a rationalizing one, but we can in fact improve our degree of rationality by doing these sorts of things.

[00:12:19]

There are several great examples of how the way a question is framed can completely change your answer to it. So in the case of politics, you described how people were asked, did Bill Clinton reduce the deficit? And then another group of people were asked, did the deficit go down? And when Bill Clinton was mentioned, people with Republican leanings were much more likely to say that, no, the deficit did not go down.

[00:12:49]

It couldn't be right that a Democratic president had brought the deficit down; it must have gone up. But that turned out not to be the case. Right. And so, I mean, I'm interested in how science and philosophy can help us think about society as a whole. But I'm particularly interested in how an individual's understanding of science and philosophy can help that individual figure out what her values really are, and whether her actions have been influenced by things that she doesn't want to be influencing them.

[00:13:21]

So I've been personally particularly interested in this question of how the judgments that I think I'm making can be completely influenced by the way that I posed the question to myself. Yeah, now, that's a good point. And I think one of the best answers there comes, interestingly, from the philosophy and sociology of science. Because if you think about it, scientists themselves would like to believe, and they might even claim occasionally, that they're trained in being objective.

[00:13:55]

They're trained in evaluating the data dispassionately, trained in coming up with rigorous analyses and all that sort of stuff. Well, if you believe that, I have a large bridge down in Brooklyn that I can sell you for very cheap. I've been a scientist; that's not at all the way it works. Most scientists are highly rationalizing individuals, because they're very smart, so they come up with all sorts of interesting ways to rationalize their conclusions. And if you let them go, they'll go in all sorts of bizarre directions.

[00:14:24]

So, in other words, they're not different from any other human being. They're interested in the same things that other human beings are interested in, you know: money, fame, and sex, not necessarily in that order. And so the question is, how come science works so well? If it's true that scientists are human beings, that scientists are not particularly objective, not particularly rigorous about getting rid of their biases, of which, by the way, most of the time they're not even aware, then how come the outcome is actually such a powerful, progressive enterprise, discovering things about the way the world works?

[00:15:00]

And the answer has been provided by people like Helen Longino, who is cited in the book, in the chapter about how we know things. According to Longino, the crucial part is that science is a social activity. You know, this idea of the scientist working alone in their laboratory at night and figuring out things about the world on their own is false. If it was ever the case, it was the case three hundred years ago, perhaps, but certainly not today.

[00:15:26]

Today, science is a highly social, integrated enterprise, which means that the biases of a single individual scientist are going to be countered by the biases of a bunch of other scientists. And what comes out of that continuous back and forth between different biases is something that emerges at the level of the group of scientists. That, by the way, becomes a very powerful argument for increasing diversity within the scientific community. Because, for instance, there are very good examples from the history of science where scientists have come up with ideas that were called

[00:15:58]

racist or misogynist. And the counter to that is to include minorities and women as much as possible in science, because that way, the next time some bozo comes up with the idea that women are clearly inferior because their brains are smaller, there's going to be a bunch of women in the audience, and your neurobiologist is going to say, what are you coming up with? and use the data to show that you're wrong. And if that works in science, there's no reason why it can't work in society at large. So it's important for the individual to recognize the existence of cognitive biases, to learn about critical thinking and the basic tools of logic and that sort of stuff.

[00:16:41]

But it's also important for that individual to engage in as wide a conversation, as diverse a conversation, as possible with other individuals, because that's the way your ideas get tested. That's how you put something out there and somebody says, no, wait a minute. I can give you an example from just the other night. I was at home, waiting for friends to come back home from the theater, and I logged into my Google Plus account and found that several people, one of whom is actually sitting here in the audience, had engaged in a conversation about a post that I put out concerning the fact that the US government does not have an independent manned space program anymore.

[00:17:27]

It's deputizing it now to private companies. And it was a really interesting thing, because some of the people in that conversation clearly had different ideas from the reason I put up the post to begin with, and they were challenging me: well, what about this? What about this other example? What about that? Now, by the end of the conversation, I was much more careful about putting out what I had thought was a certainty, in my opinion, because it was clear that, yeah, that was a good objection, and no, that doesn't work.

[00:17:58]

I still think my basic point was correct, but now I have to come up with better examples, because in fact my initial example had been demolished pretty easily.

[00:18:07]

So that is the kind of thing that, if one does it regularly and honestly, actually does improve things quite a bit. Better than talking to yourself, because you always agree in front of the mirror. There's this exercise that I sometimes recommend to people who want practice considering opposing viewpoints, especially in emotionally charged situations, and who want practice saying they were wrong publicly when warranted. And that's to post opinions that they suspect will not be popular on Facebook, and to post them in a way that's not mean or belligerent, but as confrontational as possible, to encourage people to disagree with them.

[00:18:54]

And then they can, you know, change their minds, if given good arguments, publicly on Facebook. That's right. Yeah, I've done that, and I do sometimes change my mind. That's right. Well, another similar technique, which is actually pedagogically well established, is, when you're teaching a class, especially a philosophy class, to assign two or three different positions to students at random, and then ask them to defend those positions, regardless of whether they agree with that position or not.

[00:19:25]

And as it turns out, there's pretty good evidence that shows that that is one way people sometimes change their minds. Or, if they don't change their minds, they come away with a much better understanding of the opponent's position, because they found themselves in the situation of having to defend it. Let's say you are opposed to the death penalty, and all of a sudden the assignment is: OK, try to come up with the best arguments in favor of the death penalty.

[00:19:48]

Oh, well. So the first reaction typically is negative; people tend not to want to do that, because there's a sort of emotional dissonance between what you believe at a gut level and what you are asked to defend formally. I think it's good training for being a lawyer, by the way. But the thing actually works: people do learn about other people's positions, and sometimes they change their minds about stuff, because all of a sudden they were forced to take a different perspective, a different point of view.

[00:20:20]

Yeah. And even just being aware of the rationalizing techniques that people use, if left to their own devices, can help you notice yourself using them, too. And you have this great outline of the seven different ways that people rationalize. I think the context was, again, politics: people being given strong evidence that Iraq had nothing to do with 9/11, and, if they had previously thought that Saddam Hussein was behind 9/11, how did they react when given this new evidence? And only about two percent

[00:20:58]

of people actually changed their minds. It was really small. And the other techniques were things like: well, you know, logic doesn't matter anyway. Or: I never said that; I had other reasons for not liking Saddam Hussein; I never thought he was behind 9/11. Or: if the president said so, then he must have good reasons. And these are people who don't buy into conspiracy theories, and yet they're willing to trust the authority just because it goes in their direction.

[00:21:27]

Yeah, that was an interesting study. It was a political science study, and it showed that there is a battery of defenses, basically, that people put up when they are challenged on a factual basis, which shows you how difficult it actually is to convince people to change their minds. You're right, the percentage of people that changed their minds, that admitted to being wrong, was small. But it shows you that it is possible.

[00:21:51]

Yeah, but some of the things were stunning, because some of the questions, like the one you were talking about earlier, you know, did the deficit go up or down under Clinton?, those were not really matters of opinion. I mean, you can ask something like, did you feel that the country was going in the right direction under President Clinton? OK, that's a pretty subjective question, depending on what you mean by feeling, and what aspect of the country.

[00:22:15]

But if you're asking a very specific question, did the budget deficit go up or down?, well, there is a very simple numeric answer to that question. It's factual. Economists on both sides of the spectrum agree. And by the way, it was highly publicized, because it was so unusual in public perception that a Democratic president could bring down the deficit. So this was the kind of thing that people ought to have known, or that was easy to look up.

[00:22:40]

And even so, about 50 percent of people got it wrong. It's amazing. As people put it, ignorance is the root of all evil; if not all of it, a lot of it.

[00:22:51]

To just very briefly return to the framing question: I was interested in this spectrum that I started noticing emerging in your book, of ways in which our judgment is influenced by things that don't feel like conscious reasoning on our part. And they run the gamut. So on one hand, you have things like realizing, from scientific learning about the world, that, oh, wait, my decisions are all caused by chemicals in my brain and by neurons firing.

[00:23:25]

And that means I can't have free will, because my decisions are all caused by these interactions of chemicals in my brain. And philosophy, I think you made a great case, can help you recognize that that scientific understanding shouldn't actually make you feel like you're any less in control of your decisions: it's merely a description of your decisions being made, not a proof that you're not making them. And then at the other end of the spectrum are things where I think you should be concerned about the influence these factors are having on your decisions, like the fact that holding a cup of hot coffee or a cold beverage can make you think that the person you're talking to is more friendly or less friendly. That actually is a good one.

[00:24:18]

So philosophy has a reputation for not being practical. Here's a practical tip: next time you go out on a date or a job interview, make sure everyone is holding hot beverages, not cold drinks. Because it turns out there's pretty good evidence that you are going to judge, or be judged, much more positively if the person is holding a hot beverage than if she or he is holding a cold one.

[00:24:43]

So, go figure. Now, by the way, one needs to be careful about these things, because these are interesting facts about how human beings respond, but that doesn't mean that that's the end of the story. Right. I mean, there is a bias, yes, you can quantify that there is a bias, but it doesn't mean that it's as simple as, oh, I'll give you a cup of coffee, I'll get the job. It doesn't work quite that way.

[00:25:05]

It just means that there are several factors. Right, and on average it probably helps. That's right. And it's not necessarily long term. So you may get off well on the first date with the cup of coffee, but if you're a jerk, your second date is not going to go well regardless of the coffee. And if the weather gets cold, you have to, like, leave with her to Florida and stay there until it's warm again. It becomes complicated.

[00:25:25]

So what I was saying is, you give all these great examples of things that influence our judgment, and they run the gamut: from things that don't really threaten our sense of having made those judgments ourselves, to things that do feel like alien control of our judgment, that we don't endorse influencing it. And then there was this really interesting gray area in the middle, like evolutionary biology teaching us that the moral intuitions we have were built into us because they were adaptive for our ancestors.

[00:25:58]

That we feel more inclined to help out people who are genetically related to us, because that helped our genes proliferate. And also that we feel a moral disgust at, say, asymmetrical faces, or at things that just feel viscerally disgusting to us, that we have a built-in tendency to conflate visceral disgust with moral judgment. And so, to me, the most interesting questions are in that gray area: when should we choose to override?

[00:26:36]

Right. And one of the examples you just brought up is particularly interesting for discussing morality. So, first of all, let's open a small bracket about evolutionary psychology. As people who have heard our podcast before know, I actually tend to be fairly skeptical of evolutionary psychological explanations of modern human behaviors. They do make plausible stories, often plausible stories. But it's unfortunately very difficult to come up with good empirical evidence that a particular explanation tracing a modern behavior back to the Pleistocene is actually correct.

[00:27:13]

It sounds good, and it's plausible, but it shouldn't be taken as established, because that kind of research is actually very difficult. Unfortunately, most human behaviors don't fossilize, and we do not have a lot of close relatives that we can compare ourselves with. The two most closely related species are chimpanzees and bonobos, and they're very different from each other: chimpanzees are very aggressive and engage in intergroup warfare; the bonobos, essentially, just have sex all the time.

[00:27:41]

So it's like, yeah, and they're equally distant from us. So it's hard. But one can make a reasonable argument that, for instance, instinctive behaviors that are still powerful in modern societies, such as xenophobia, you know, the fact that you automatically distrust people who don't look like you, or people who belong to outside groups, very plausibly did evolve because, in fact, there was a long period in human history where pretty much anybody from the outside was not going to be a friendly presence.

[00:28:15]

It was going to be a problem. Now, the question therefore is: well, if that's the natural way in which human beings respond to outsiders, if we have these very strong feelings, these very strong reactions, should we or should we not override them? Now, in most Western societies, I think we would agree that yes, we should; that there is no rational reason for it.

[00:28:45]

We should not automatically be distrustful of people who don't look like us, especially in a multicultural, multiethnic society where these people are not outsiders, but actually insiders. So there is one area where the biology tells you something about where your sense of morality comes from: why you developed these very strong reactions about things that you perceive as threats or not, or perceive as the right thing or the wrong thing to do.

[00:29:12]

But at the same time, your reason has to come in. Your reflection on those feelings has to come in and say, yeah, but wait a minute: just because I automatically feel that way, it's not actually right to feel that way. There is an interesting experiment in the book that deals directly with racism, for instance. It turns out you can measure the inclination toward racism subliminally, without the individual's knowledge, because you can expose people to certain cues that have to do with racial perceptions, and then you can measure the skin conductance, or things like that, or the level of stress of the individual.

[00:29:50]

And you can tell how the individual is reacting, regardless of what he says about why he's reacting one way or the other. And so they did the experiment with a number of people, some of whom very consciously considered themselves non-racists, and others who, on the other hand, apparently had not thought too carefully about this kind of thing. When the first group of people was faced with evidence that, look, my friend, you think you're not racist,

[00:30:16]

but in fact I just showed you a picture of a black man and your adrenaline level went up through the roof, what do you make of that?, the interesting thing is that they were ashamed. They said, that is not what I want to do; this is not who I am; so I'm going to be even more careful about controlling that sort of stuff. The second group, on the contrary, said, oh, yeah, that's the natural reaction, because I do think these people are, in fact, very dangerous, and I'd better stay away from them.

[00:30:42]

So that is an interesting example where you can see that there is a biological, psychological reaction to certain situations, but the reflective part of the brain can override it and say: no, this is not the right thing to do; let's stop, slow down, and think about this. So, in the earlier chapters of the book, you talk about moral judgments and how we reach very different moral judgments depending on which modules in our brain get activated.

[00:31:09]

And it's often not even under our control; we're not the ones who decide which modules get activated. I'll just briefly describe the trolley problem for people; it's a common thought experiment in philosophy. You're asked: if there were a train that was going to hit five children sitting on the tracks, and you could divert it to another track where it would hit one person sitting on the tracks, would you divert it?

[00:31:37]

And most people say yes, better for one person to die than for five people to die. But then the next iteration of the question is: what if the train is heading toward the five children on the tracks as before, but this time the only way you can stop it is that you're standing on a bridge above the tracks, there's a very large man next to you, and you can choose, if you want, to push him over the bridge onto the tracks, where he will land and his sheer bulk will stop the train?

[00:32:02]

So it's funny that in the early version of that second type of experiment, the man was described as a very fat man, and now it's a very large man. I just noted that it's interesting. That's right. Exactly. So even though most people will divert the train, very few people are willing to actually push a large, big-boned man onto the tracks, and there are various interpretations of why we have so much more moral disgust at the idea of doing that, even though the actual life calculus comes out the same.

[00:32:38]

It's more visceral; it feels like more of a direct harm that we're causing. We have these sort of built-in rules, both evolutionary and societal, against physically causing someone harm. And so the unwillingness to push the large man onto the tracks is what's called a deontological intuition: you're following a rule that says you don't actively cause harm to an innocent person, even though you're ultimately deciding to have five people die instead of one.

[00:33:16]

And the alternate way of answering the question, if you are willing to push the large man onto the tracks, is a utilitarian approach to ethics, in which you look at what the consequences would be and pick the action that produces the best consequences overall. So pushing the large man violates the rule, but it has the best consequences, because one person dies instead of five. And there are ways to influence which mode of moral reasoning people use, by things seemingly as trivial as what font you use when you write out the question.

[00:33:50]

So utilitarian thinking is a more analytical way of thinking, as opposed to an intuitive way of thinking, and you can bump people into analytical mode by using a hard-to-read font. Somehow the act of trying to decipher a hard-to-read font makes you more inclined to think analytically; you're more likely to push the large man onto the tracks. Right. So this was all a long lead-up to saying, as you really eloquently articulated in the book...

[00:34:19]

It's not clear what that knowingness about ourselves gets us: that we have these two different modes of reasoning and can use one versus the other depending on influences. There's no obvious answer of, oh, well, now that we know this, we can clearly see that this is the wrong way to reason, and now that we recognize that, we can make sure we reason the other way. Right. But there's an interesting... I mean, what I take out of the trolley experiments when they're done...

[00:34:41]

This has been done in collaboration with philosophers; the thought experiment originally was a philosophical thought experiment. Philosophers have really codified thought experiments, because they don't cost anything and so they don't require grants and all that sort of stuff. But recently cognitive scientists have gotten together with philosophers and set up a whole new way of doing this, where people's brains are being scanned, so that we actually know quite a bit about what's going on, literally, inside people's heads when they make those decisions.

[00:35:09]

And again, this is another interesting example where I think the science and the philosophy inform each other. It's not that the science is going to solve the philosophical question; the science is not going to tell you, well, this is really what you should do. But what it is going to tell you is how your brain works. And therefore, if you are thinking about what you should do, you need to take into account the way your brain works.

[00:35:30]

And the way it is inclined to think, before you can actually correct it, if you decide that's not the way to go. For instance, in the contest between the two cases that Julia set up, it turns out you can show that people's frontal lobes, the frontal cortex, are typically involved in the first version of the dilemma, the one where you pull the lever, which means that your rational thinking capacities are engaged.

[00:35:57]

In that case, you're looking at the thing dispassionately. But in the second case, where you push the big-boned guy off the bridge, it turns out that all of a sudden the amygdalae are very actively involved; that's the area of the brain connected with emotional responses. So you can actually show that, yes, people are switching from one mode of thinking to another because they are, in fact, engaging their emotional responses. Now, you can start to ask the question: OK, but what is the right thing to do?

[00:36:31]

Once you show people what they're doing and why they're doing it, then you can ask them: how do you square these two things? What kind of assumptions are you making? All of this is often done by people without explicit reasoning. It's not that people think: I'm a utilitarian, therefore I should push the guy; or I'm a deontologist, therefore I shouldn't. Most people don't even know what those categories actually mean.

[00:36:53]

But what you can do is run the font experiment, or the brain scanning, and then you can say: OK, as it turns out, there are these different ways of thinking about ethics, and you have been switching between two of them when you changed your mind. Now, think about it. What do you actually want? If I allow you the time to reflect on these things, what do you actually think is the best way to go about it, and why?

[00:37:18]

And I'm not suggesting there is an answer to it, but there are ways of thinking about it that are often implicit, and it's always a good idea, when we run into trouble, to move from an implicit, intuitive way of thinking, which can be very useful in a pinch, to a more explicit, more reflective way of thinking, and say: right, here's the reason for the contradiction; I can't consistently do one and the other, so I have to resolve that contradiction one way or another.

[00:37:49]

By the way, the brain scans also showed that there's a very tiny minority of people who actually answer yes to both questions: they'd both pull the lever and throw the guy. Are there people who wouldn't pull the lever but would push the guy? That's a good question. I don't think so, but I don't know. Anyway, there is a fairly tiny number of people who actually do both.

[00:38:15]

And it turns out that those people have brain scans that are typical of sociopathic behavior, which is really amazing. It's like they're completely disengaged from their emotional responses. A lot of those people are found on Wall Street, by the way. And that's a fact; I'm not just making a joke, actually. People have studied the psychological profiles of highly regarded CEOs on Wall Street, and the degree of sociopathy is very high, as you might imagine.

[00:38:42]

That's another interesting one. Another one of my interests, especially when I was a practicing scientist, was the study of nature-nurture interactions, of genes and environments shaping people's behavior. And as it turns out, there are two broad categories of people who fall into this sort of sociopathic profile when they're young, and they evolve in completely different directions depending on the environment to which they're exposed. If you have sociopathic tendencies and you're exposed to an environment with, you know, a non-supportive family, a low level of education and that sort of stuff, you're likely to, in fact, become a violent psychopath. If you are the same kind of person, but exposed to a very supportive family environment...

[00:39:26]

You know, you go to good schools and all that stuff, and you become a CEO on Wall Street. So it's the same starting point, but the outcome is very different, and it's the result of the environment.

[00:39:36]

So the takeaway being: if you're big-boned, don't work near train tracks. Or, to take away more practical advice from philosophy...

[00:39:48]

What about something like... you're probably familiar with Peter Singer's thought experiment: if you saw a child drowning in a pool, and you could jump in and save him, and in the process ruin your thousand-dollar suit, would you feel morally obligated to do it? And almost everyone says: yes, I would not let a kid die to save a thousand bucks. And then he says: well, you know, analogously, there are children starving, or suffering, dying of malaria, in other parts of the world.

[00:40:14]

And you could easily save one of their lives by spending a thousand dollars. Do you feel morally obligated to do that? And people are like, oh... but are hard pressed to come up with a morally relevant distinction that would make them morally obligated in the case of the kid in front of them, but not the kid who's far away, whom they're never going to see and can't picture. And, you know, it's pretty plausible that the reason we feel morally obligated in the first case, but not the second, is that our moral intuitions evolved to deal with people who are right in front of us, to respond to visible suffering, and not to probabilistic lives saved, even though the actual impact is, in expectation, the same, or even better in the second case...

[00:40:56]

Because lives are cheaper to save in the developing world. So my intellectual judgment tells me that I have just as much moral obligation even if I can't see the person in front of me, and yet my intuitive moral judgment tells me I don't feel as morally compelled. So... Well, the problem is, first of all, that Peter Singer starts from a utilitarian perspective, a consequentialist way of looking at ethics, and from that perspective, what you just said does, in fact, follow.

[00:41:26]

And by the way, Singer is a very controversial figure because he advocates all sorts of unpalatable things, including euthanasia of severely disabled young children. He's also OK with killing young children who aren't severely disabled. I mean, very young children. Well, in that he doesn't see a difference from right when they're born. That's right. That's right. If you're OK with late-term abortion, there's no reason why you shouldn't be.

[00:41:51]

Yes. Anyway, he is very controversial, as you might imagine, for those reasons. He also does, in fact, live up pretty much to his own expectations, because I think the figure is that he gives away twenty or twenty-five percent of his salary, which is a pretty large percentage. So he tries to live up to his own standards, and he occasionally writes editorials in The New York Times trying to convince people to go in the same direction.

[00:42:15]

But here's the problem. The explanation of why most people don't do that could be the one you just put forth: there's a disconnect between the way we feel about certain situations and the moral decision-making we do when we reflect on them. But that holds only if you, in fact, start from a utilitarian perspective, and utilitarianism is not the only type of ethical framework you can adopt. Like following a rule that when you see someone in need, you have to help them?

[00:42:43]

That's correct, and that's the deontological one. Or you could adopt the ethics of care, which usually comes out of feminist theory, or virtue ethics, or communitarianism, which build in a differential treatment of people: there are people you have duties towards, and people on the other side of the planet towards whom you don't have those duties. So you have duties to your daughter, your son, your parents or your friends, because they depend on you, because you interact with them, because you developed that particular relationship.

[00:43:18]

So it is true that if you abstract to the level of numbers, then those people over there are, abstractly speaking, just as important from a universal perspective as these people over here. But the fact of the matter is that we do not have a universal perspective. We have a perspective that comes from socially interacting with people, people who depend on us for certain things. And so there are some ethical frameworks for which that is not a contradiction at all; it actually is part of the idea that you are, in fact, morally obligated to have a preference for people who depend on you, who know you, who have come to trust you and so on and so forth, as opposed to a stranger.

[00:43:58]

That doesn't mean a stranger is not important, or that a stranger should therefore be left to die in the cold. It means that, other things being equal, you actually have a larger degree of moral responsibility toward the people close to you, including yourself, because part of the idea, of course, is that you are a moral agent just like anybody else. And although it is true, again, that from a universal perspective you are just as important as anybody else, from your own point of view, and that of the people close to you, that's simply not the case.

[00:44:30]

Human relations don't work that way. So it does depend on what kind of ethical framework you adopt, and I'm not suggesting that there is a final answer there. In fact, one of the chapters in the book is called a handy-dandy moral menu, because it allows you to look at the major options on the table and say: well, actually, this is the one that, for whatever reason, feels more important and interesting, or better articulated, and so on.

[00:44:58]

But the fact is that there are frameworks that work, and there's a bunch of stuff in ethics that doesn't work. And that's, I think, the way ethics works: I think of ethics as a way of reasoning more than a set of answers. It allows you to think about what your assumptions are when it comes to moral decisions, and what follows from those assumptions. And sometimes the things that follow from those assumptions are not palatable.

[00:45:22]

And at that point, you can say: oh, maybe one of my assumptions was, in fact, mistaken, and I have to reject it or modify it.

[00:45:31]

We're almost out of time, I think. But is there any issue where you think science could do well to incorporate more philosophy, or vice versa? Well, it depends. Yeah, that's a good question. It depends on what one means by incorporating. Philosophers are often accused of not paying attention to the facts, in other words, of just armchair philosophizing. It's the quintessential way in which you make fun of philosophers, although, interestingly, nobody has a problem with armchair mathematicians, for instance. Nobody accuses mathematicians...

[00:46:06]

...of not caring about the facts; you just do math. Well, yes, that's what they do as a profession. But philosophers actually do pay attention to the facts. You cannot be a serious philosopher in the 21st century, in whatever area of philosophy, with the possible exception of the history of philosophy, and not pay attention to what science is saying. If you're a metaphysician, you had better pay attention to what physicists tell you about the nature of the world.

[00:46:32]

If you are an ethicist, you had better pay attention to what social psychologists, evolutionary biologists and neurobiologists tell you, because those things do build a general view of how the world works, and your thinking and reflecting about the world therefore has to be based on the best understanding that we have of the world, and that best understanding comes from science. So in that sense, no serious philosopher would deny that facts and science are important.

[00:46:59]

What philosophers, at least many philosophers, do reject is the idea that it is all a matter of facts, that it all reduces to empirical evidence. You know, my favorite villain in this area, as you know, is Sam Harris, who a couple of years ago came out with his book The Moral Landscape, in which he suggested that science, and not philosophy, is capable of producing answers to moral questions. But in fact, there is not a single example in the book of a new insight into moral philosophy that is provided by science.

[00:47:30]

And that's because science deals with facts, but facts still need to be evaluated. You still need to reflect on those facts based on whatever values you have, and the values are not determined by the science; they may be affected or influenced by it, but they're not determined by it. Interestingly, it works the other way around. Now, I wouldn't suggest that a scientist, let's say a practicing scientist, necessarily needs to be conversant in philosophy, or aware of what philosophers do.

[00:47:59]

Scientists can do a lot of their work without any awareness whatsoever of what philosophers do. But if you're curious about what you're doing and how you're doing it, it might be, from time to time, a good idea to look at what philosophers of science, sociologists of science and historians of science come up with, because that gives the scientist a sort of outsider's view of his own discipline. And sometimes that may even help, not in answering actual empirical questions, because, again, that's the province of science...

[00:48:34]

...but in thinking more broadly about why you are asking those questions, why you are going in that direction, and why you are going about it that way. As it turns out, there is a history to it, there is a sociology to it, and there is a philosophy to it. One of the best current examples of that is the debate in physics about string theory and the status of string theory. It's the most important theoretical advance in fundamental physics of the last several decades.

[00:49:03]

And yet it's in trouble, as the title of a famous book by Lee Smolin, who is a physicist, put it: The Trouble with Physics. Because we're looking at the first generation of physicists in the last 150 years or so in which there has been no major theoretical breakthrough accompanied by experimental confirmation. In other words, string theory may be a theoretical breakthrough, but there's no experiment that actually validates or invalidates the theory; it cannot be done, at least not at the moment.

[00:49:37]

And one of the questions is: why? Why have we got what Smolin calls a lost generation of physicists? His answer, as a physicist himself, is that it's because the fundamental physics community is not paying attention to its own sociology, philosophy and history. He claims that the physicists of the 20th century moved away from an awareness of issues in epistemology and in basic philosophy of science. Earlier physicists were very much aware of those issues; Einstein, Bohr and that group were very much aware of philosophy of science.

[00:50:16]

Modern physicists have moved away from it. They are also unaware of the sociology of science, which shows that the fundamental physics community is in fact very small, and when a community of experts is very small, it's easy for that community's judgment to become biased one way or another without any particular objective reason for the bias. So, for instance, it became very popular to work on string theory when it came out, because it was thought to be a very novel way of looking at things; that was in the mid-eighties.

[00:50:51]

So there was a period where you really couldn't get a job in fundamental physics unless you were working on string theory; you couldn't get hired, you couldn't even be a graduate student, unless you were working on string theory. Then those people got good jobs; they became the kind of people who decide which papers get published in refereed journals and which grants get funded for what research. And if the entire community becomes enmeshed in that way of thinking, then, of course, you end up with only string theorists being funded, hired, and so on and so forth.

[00:51:21]

And that, according to someone who, I repeat, is a physicist, not a philosopher, is one of the reasons why we're at this impasse, because there are, in fact, alternative ways of looking at things. There have been outstanding proposals out there, but very few people are working on them, because the community is so small that it has become biased. Now, presumably this is not going to last forever, because eventually physicists themselves will recognize that, OK, this is really not working.

[00:51:45]

Let's try something else. I mean, they're not masochists. But Smolin's point is that it's an unawareness of the sociology, history and philosophy of physics that has led, in part, to this situation, and perhaps more awareness might have avoided wasting an entire generation. The history comes in because one of the major arguments in favour of string theory is that it must be true because it's so simple and beautiful. It's an argument you often hear from physicists who apparently don't realise that beauty and simplicity are not empirical concepts.

[00:52:19]

You can't test for beauty and simplicity; they're aesthetic concepts. So, in other words, they're bringing pretty big philosophical baggage into this aesthetic preference. But regardless of that, Smolin says, historians of physics will show you very clearly and very easily that, time and again throughout the 20th century, physicists preferred a particular theory on those grounds, and then that theory turned out to be wrong empirically.

[00:52:46]

I feel like you could test for beauty: set up, like, a physics-theory hot-or-not dot com, and then you've got all this data, right? But you'd be testing for people's opinions about beauty, not for whether the universe is somehow supposed to be regulated by beautiful laws. Whereas you want to test for truth. Yeah, exactly. That's right. I think we're all out of time, so I'm going to wrap this up, but with the closing exhortation to all of our listeners to check out Answers for Aristotle; I would highly recommend it to anyone who likes the podcast. Or... I think we'll just have you shout out your questions.

[00:53:24]

Then I will repeat them into the mic so that our listeners later on can get the question. You knew someone was going to bring up Sam Harris; I didn't want to disappoint you. Yes, and I'm not disappointed. Now, I know the Sam Harris business goes back three years. Mm hmm.

[00:53:43]

Well, how can he be so wrong? All right. All right.

[00:53:54]

All right. So the question was: Massimo, how can Sam Harris be as wrong as he is? That wrong! First of all, ask him in a few days, because we're both going to a workshop on the implications of philosophical naturalism, and that's going to be a fun one. You know, Jerry Coyne is going to be there, Richard Dawkins, and all sorts of interesting people I disagree with. So, Sam Harris: why is he so wrong?

[00:54:22]

For a variety of reasons, one of which is that he actually has two legitimate targets in his book. If you read the introduction to the book, or the very first part of the first chapter, he tells you what his targets are. His first target is the religious way of thinking about morality, which he thinks is misguided, because religion either has nothing to say about morality or, worse, it actually teaches the wrong sort of morality. And at the opposite end of the spectrum, his second target is moral relativism.

[00:54:54]

That is, the idea that there is no right and wrong, that your opinion is just as good as mine, and that if a society engages, let's say, in genital mutilation of young girls, then that's their prerogative; that the difference between doing it and not doing it is the same as the difference between liking dark chocolate or milk chocolate. And of course, as you all know, I hope, the answer is clearly dark chocolate. Now, those are targets that are legitimate.

[00:55:26]

I do agree with him that moral relativism is pernicious and that religion-based morality is pernicious. In the book, the last three chapters are devoted to religion and morality, and in particular the last one deals with the Euthyphro dilemma, from the classic dialogue by Plato in which Socrates at some point asks the fundamental question of his interlocutor, Euthyphro. Socrates says: so, OK, you're telling me that you know what is right...

[00:55:55]

...because you know what the gods say. But answer this question: is something right because the gods say it is right, or do the gods say that something is right because it is right? And Euthyphro thinks about it, and he smells some kind of trap from Socrates, and says: now, clearly, the first one; something is right because the gods say it's right.

[00:56:14]

And of course, that cannot be the case, because if that's the case, then morality becomes a matter of might makes right. You're doing it because the gods tell you to do it, because otherwise they're going to incinerate you. But they could decide that genocide is right, or that rape is right, or whatever, and what are you going to do then? These are not the examples that Socrates brings up, by the way; I'm paraphrasing, of course.

[00:56:37]

Yeah, you're right: that's clearly not the answer. So the case is then, in fact, that the gods say that something is right because it is right. In which case, of course, you don't need the middleman, or the middle god, to tell you what's right; you can figure out the thing on your own. The end of the story is that, even if gods exist, they have nothing to say about morality to us.

[00:56:58]

We can figure our own way out. Now, notice that I just made a philosophical argument, OK? And Harris seems to be entirely unaware of the fact that there is a large number of philosophers who would agree with him that religion-based morality and relativism are in fact morally bankrupt, and who would be able to provide very good reasons why that is the case. Instead, Harris chooses to ignore that, and in a famous, or infamous, footnote to the first chapter, he says that he's not going to engage the philosophical literature because he thinks that terms like utilitarianism and deontology increase the boredom of the universe.

[00:57:37]

That is not an argument, my friend. That is anti-intellectualism in its purest form. Imagine if I were writing a book about how the brain works and I said: but I'm not going to engage with the cognitive science literature, because I think that terms like fMRI and frontal lobes increase the boredom of the universe. Nobody would take me seriously. So the real question is not why Sam Harris goes that way; he does have those two targets, and he thinks he can come up with a better answer.

[00:58:03]

The real question is: why is his answer so popular? Why are so many people so ready to agree that, yeah, philosophy has nothing to do with morality, that it's all about science? And I don't have a particularly good answer to that one. I don't know. Yes, sir?

[00:58:17]

Oh, so you just used the word relativism in your previous answer...

[00:58:29]

Don't forget to repeat the question. The question was: Massimo used the word relativism in his previous answer; could he please define what he means by relativism? Yeah. So relativism is an idea that was put forth, actually, not originally in moral philosophy but in cultural anthropology: the idea that there is no objective way in which you can judge whether a particular practice within a particular society is right or wrong. If they do it, they do it.

[00:59:01]

There is no grounded reason why you, coming from the outside, can say: oh no, genital mutilation of young women is clearly wrong; because we come equipped with some kind of cultural baggage that is different from theirs, and who am I to say? Now, that's a simple version of it. That idea did spill over into discussions about moral relativism, which is, again, the idea that there is no grounding for a moral judgment other than, you know, my opinion, my feeling.

[00:59:33]

It's my gut reaction, it's whatever it is, but there's no rational reason to say that something is actually morally correct or incorrect. Now, to some extent that's a caricature, because very few people that I know of actually subscribe to that version of moral relativism. The more nuanced versions are simply cautionary: be careful before you make a judgement about that practice being wrong, because, as it turns out, you may not have good reasons for it.

[01:00:04]

You may not understand that culture well enough; you may not have all the information. And, by the way, the same can be applied to some of the stuff you do in your own culture, which you may be unaware of. So as a cautionary tale, I think that's reasonable enough. But when it's pushed to the extreme point that Sam Harris is reacting to, I do think it's nonsense.

[01:00:25]

And, in fact, it's pernicious nonsense. That said, it's not because I think that moral truths are out there in a universal sense. Morality is relevant to social beings of a particular type, and moral reactions, moral feelings, evolved in that particular species in response to the particular social environments we find ourselves in. So for a being who belongs to a non-social species...

[01:01:00]

I think it would be really difficult to explain what we mean by right or wrong and that sort of stuff. That said, as I said during the main segment of the podcast, I think of ethics as a way of reasoning, not as a set of answers. I pretty much think of it as logic applied to moral questions, questions of well-being or flourishing for human beings. Right. So if we agree, for instance, that a good starting axiom is that, other things being equal, we should maximize individual people's ability to flourish, or in other words their well-being, then what follows from that?

[01:01:36]

How are we going to do that? The answer to that is partly logical, so it partly deals with philosophy, and partly empirical: what is it, empirically, that makes human beings flourish in a certain way? And that then belongs to science, social science in particular. I will just add the addendum that there is another alternative to the first option of thinking that there is one moral truth and the second option of not caring about morality at all, which is how moral relativism is usually portrayed.

[01:02:05]

You can think there is no moral truth and yet have very strong preferences that people not, you know, mutilate their daughters, and take really decisive actions to try to prevent things like that from happening, without thinking that that is the objectively correct moral position. Yes, you can do that. It's a weaker position, but you can do that.

[01:02:25]

Next question. [Audience question, largely inaudible.]

[01:02:41]

So the question was: Massimo described science as this collective, collaborative endeavor; however, we have examples like Newton, who made amazing scientific strides essentially working on his own. Massimo, how would you explain that? Yeah, that's an interesting question. There are two answers to that. First of all, it's only partly true that Newton was working on his own. I mean, even in the 17th and 18th centuries there was already a network. Ever since Galileo, or let's say the time of Bacon and Galileo and then later Newton, there was already an academy, not in the modern sense of academia.

[01:03:16]

But there was a group of people who were interacting. Newton, I think, was in fact one of the founders of the Royal Society, which means that he was aware of the fact that you needed this kind of peer evaluation and peer feedback. So part of the answer is, well, it's not entirely true that he was working on his own. The other part of the answer is: yes, and Newton in fact wasted a lot of time doing stuff that has no value whatsoever.

[01:03:42]

I mean, he actually spent more time on alchemy and biblical exegesis than he did on physics. Now, why do most of us not know that today? Well, because that sort of stuff was crap and went off in the wrong direction, so we sort of ignore it; we only kept the good stuff that Newton did. One could argue that if he had been subjected to more peer review and more peer pressure, he might not have wasted that much time doing alchemy.

[01:04:09]

It turns out, however, that temperamentally Newton was very much not into the business of listening to what other people had to say. One of the reasons he's famous is this phrase that sounds very magnanimous and very modest: if we can see further, it is because we are standing on the shoulders of giants. As it turns out, that was actually a dig at a colleague who was short in stature.

[01:04:39]

So the guy was a nasty son of a bitch. And of course, he went after, was it Leibniz? I think so, because they both discovered differential calculus. Right. And Newton was nasty for all of his career. He had his own papers reviewed by his own friends and had Leibniz's papers reviewed by nasty people. And he went after Leibniz even after Leibniz died; he just wouldn't let it go. So, yeah, there's an interesting psychology of science there.

[01:05:09]

Any more questions? I have a question about why you chose Aristotle and not someone else.

[01:05:21]

Yeah, that's a good question.

[01:05:24]

Right. So the question was: why Answers for Aristotle? Why not answers for someone else?

[01:05:35]

So there's two reasons for that. One is a good one. One is not so good. Let me start with the not so good one. This was not my title.

[01:05:44]

So the original title of the book was The Intelligent Person's Guide to the Meaning of Life, which of course was a take on The Idiot's Guide to..., a series of books, as you know, that is very popular, apparently because there are a lot of idiots, or at least a lot of self-professed idiots. So that was the original thought. The publishers didn't go for it, and we went back and forth, and I discovered, to my chagrin, that I do not have in my contract the power to override the publishers.

[01:06:13]

The publishers are bound to listen to what I have to say, but not to go with what I have to say. That said, Answers for Aristotle does in fact have a logic to it. If you look at the book, Aristotle is by far the figure, among scientists or philosophers, most cited and most quoted throughout. There is almost no chapter in which Aristotle doesn't appear one way or another. The title therefore refers to him.

[01:06:41]

If you look at the cover, which I actually think is beautifully done by the Basic Books artist, it has a young guy who is supposed to be Aristotle, on the one hand sort of pondering things; that's the philosophy. On the other hand, there is this beautiful shell of a marine organism that Aristotle studied; that's the science. Aristotle was in fact, if not the first, certainly one of the first and most prominent ancient thinkers who understood that you have to do both what we today consider philosophy and science, or philosophy and natural philosophy, as it used to be called.

[01:07:16]

He understood that it's important to write and think about, let's say, ethics or metaphysics, but that it's also important to do research. He actually did fieldwork on the island of Lesbos, for instance, in biology; he spent summers there. So it's important to actually deal with the facts. Now, he got some of the facts wrong; the entire physics of Aristotle is wrong. But you have to start somewhere, and it was remarkable that he had the right idea: to combine what we today call science and philosophy to come up with the best answers we can.

[01:07:52]

And in fact I'm not unhappy, of course, about the fact that he was wrong, because part of the point of the book is that these are the best answers, not the true answers. This is the best advice you can get, not the true advice. It's revisable if the science or the philosophy changes, and the philosophy changes just as much as the science, usually at a slower pace. You know, one of the things that is wonderful about having moved from professional science to professional philosophy is that in professional science you publish a paper now, somebody is going to respond to it and criticize it within weeks, and then you're going to respond weeks after that. In philosophy, things are much more relaxed.

[01:08:30]

You know, today you see a paper that is responding to a paper published in 1979. The guy is probably dead, but it's like, you know, slowly but surely we're going to get there. That's progress, but it takes millennia. It's a long, long view of things. Interesting. I always assumed that you picked Aristotle because you couldn't think of any other philosophers whose name began with an A.

[01:08:53]

Well, yes, of course. That was the original reason. I think we have time for one more question.

[01:09:01]

OK, I just want to bring up another reason why he was able to accomplish both. I think that as time goes on, it becomes more important to be part of the social network, but in the early days [inaudible]. Do you want me to rephrase that? Yeah. So the comment, in reference to the question about Newton and scientists working alone, was that perhaps science has just gotten harder over time as we've solved the easier questions.

[01:09:47]

And so it's more necessary to be among a community of other scientists. Right. Newton wouldn't like that assessment, that he went for the low-hanging fruit. Of course he did; it fell to the ground. On his head, right? By the way, that one, too: the story of the apple falling on Newton's

[01:10:03]

Head? He actually made that up. Oh, no, don't ruin it for me! He made it up in a letter to a friend to, you know, put a sort of more dramatic setting to the development of his ideas. Next, you're going to tell me Washington didn't chop down a cherry tree and then tell the truth about it. I will not, of course. But anyway.

[01:10:27]

So Newton would not be happy with that assessment. But I think you're right. Look at Galileo, right? The guy essentially invented the telescope; well, it had been invented before, but he was the one who perfected the instrument to the point that it was useful for astronomical observations. And literally anywhere he pointed the damn thing, he discovered new physics. Well, you can't do that now. I mean, even if you have a really, really serious, seriously costly, large telescope, you're probably going

[01:10:56]

to be pointing it somewhere where people already know that there is stuff, right? So yes, that is the problem with, well, it's not a problem, but that is the outcome, or the implication, of science being a progressive enterprise. Right. Things become more costly, more difficult, and progress becomes slower. I think the same, interestingly, applies to philosophy, because, you know, early on, when people started thinking about logic, nobody had thought about logic before.

[01:11:26]

Take, for instance, the principle of non-contradiction. As it turns out, it took, you know, more than two thousand years to make significant advances over Aristotelian logic. But then, between the end of the 19th century and the beginning of the 20th century, there was an explosion of different ways of doing logic. And now we have a highly sophisticated profession, where I challenge anybody who is not a professional logician to actually get past the first half page of a technical paper in logic.

[01:11:57]

You're just not going to get anything, because there's a huge amount of background knowledge that is necessary. So this is true, I think, of all academic fields that make progress. If there were an academic field that does not make progress, you would probably recognize it, because anybody could come in and invent something completely new, and people would say, wow, that is totally new, nobody thought about that before. But I think you'd be hard pressed to find that kind of example.

[01:12:23]

I think we're going to wrap up. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense. The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, "Truth," by Todd Rundgren, is used by permission.

[01:13:12]

Thank you for listening.