[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I am your host, Massimo Pigliucci, and with me, as always, is my co-host, Julia Galef. So, Julia, what are we going to talk about today? As if I didn't know, on this particular occasion.

[00:00:50]

Well, today we're going to take a totally objective and unbiased look at a book called Nonsense on Stilts How to Tell Science from Bunk. I think I know the author of that book.

[00:01:01]

Yeah, who's the author? Let me look this up. Some philosopher. Massimo Pigliucci?

[00:01:07]

Yeah, that's right. I've heard of him. That's right. We're going to be a little bit self-indulgent this time: we're going to be talking about my own book. But we checked, and plenty of other hosts of podcasts and radio shows do that from time to time. And we figured it is an interesting topic, so we might as well talk about it. Yeah.

[00:01:22]

And, you know, everyone who's anyone is talking about nonsense on stilts these days. So we didn't want to be left out. Right. So I really enjoyed this book.

[00:01:31]

And I thought it was really interesting, because you take the classic demarcation problem of how to tell science from non-science and you break it down into more complex questions, because there really isn't just one demarcation problem. There's also the question of how you tell science from pseudoscience, science from philosophy, hard science from soft science. And you talk about all of those questions in turn. So one thing that was running through my head as I was reading was that I've written a bunch of posts on Rationally Speaking by now about debates that take the form "is this art?" or "is this love?"

[00:02:09]

And I've argued that on a superficial level, you're just arguing about definitions. The answer is, you could just say, well, it depends on how you define art or love. In order for the debate to be non-trivial, there has to be a reason why it matters whether you call the thing art, or love, or science. So my question for you, to start off: why does it matter? Why does it matter whether we call something science or nonsense?

[00:02:33]

You know, it's a great question. But first I want to comment on your easy way out, which we shouldn't take. That is, one could define science as whatever scientists do. And in fact, some sociologists do define it exactly that way. But, first of all, that's completely unhelpful and entirely tautological. And second, it would be like saying that baseball is whatever baseball players do. That's not right. Baseball players do certain particular kinds of activities, activities that have certain rules and a structure, and so on and so forth.

[00:03:04]

And you're defined as a baseball player if you do those things; it doesn't work the other way around. It's not that the activity is defined by the player.

[00:03:11]

The same goes with science: it's not that anybody can be a scientist no matter what they're doing. There are certain rules of the game, if you want to call it a game, and certain practices, certain activities, that go with it. But the fundamental question, as you pointed out, is: why does it matter that we agree on a definition, even a broad definition? And, you know, there certainly are fuzzy boundaries, as I argue in the book.

[00:03:38]

Well, for one thing, because science in modern society enjoys a high level of influence and funding. We're talking literally about billions of dollars. You know, if you are a legitimate, credentialed scientist, you can actually compete for millions of dollars in funding from agencies like the National Science Foundation, the National Institutes of Health, the Department of Energy, and so on and so forth. So it matters in terms of dollars.

[00:04:08]

It matters also in terms of prestige. If you say that something has been scientifically proven, if you're advertising, for instance, a product or a particular point of view and you say, well, that's scientifically backed, then you're using the prestige of science to sell something, to get a point of view across, or to convince people of a particular opinion. So science has the ability to influence our lives, sometimes dramatically so.

[00:04:36]

So I think that's one of the several reasons why it matters to define, and to agree more or less on, what it is we're talking about. Those are all very good practical reasons.

[00:04:47]

But speaking more abstractly, maybe it matters based on how certain we can be of a claim made by a field, whether it's a scientific or a non-scientific one. Right.

[00:04:58]

That's a subset, I guess, in some sense, of the prestige issue. Right. So if you're saying that, let's say, a particular kind of alternative medicine is a science, then you're making a claim of reliability, of prestige: people should be paying attention to it, people should be funding it, and taking seriously whatever results are proposed. That's why, obviously, we see a lot of pseudosciences that try to use the trappings and the terminology of science.

[00:05:32]

I mean, we have creation science, which to me sounds like an oxymoron. But why would a creationist, you know, typically a fundamentalist religious believer who has a worldview that is antithetical to science, why would they call their ideas scientific creationism?

[00:05:50]

Well, because they realize that science has that kind of prestige, that kind of cachet, that they need to latch on to in order to be taken seriously.

[00:06:03]

The same goes with a lot of other pseudosciences. So, you know, paranormal investigators often try to do things the way scientists do things, because they claim that they're doing science. Ufologists, at least a certain number of them, try to do the same sort of thing. So there is that idea that science is a good thing and science works, so, by extension, any field that can legitimately call itself a science gets some of those advantages.

[00:06:30]

So let's take a look at a field where the term "science" has been appended and some people dispute the legitimacy of that: the social sciences. You start off the book by talking about the distinction between the hard sciences and the soft sciences, which include things like psychology, sociology, and economics. And one of the accusations made against the soft sciences, to the effect that they don't deserve the term "science" after their name, is that they haven't made much progress as fields, in terms of the sort of cumulative body of knowledge they've generated, the way the hard sciences have.

[00:07:10]

So what do you think are the measures of progress that we should look at?

[00:07:15]

Right. When I did the research for the book, I found some interesting and not widely appreciated things about this issue. For instance, results in the psychological literature are much more highly repeatable than people tend to think. If psychologists do the same kinds of experiments over and over with a variety of subjects, they actually do reliably get the same results. In fact, some studies compared the replicability of psychological experiments with that of experiments in physics, and they're about the same, which is pretty surprising.

[00:07:52]

The difference is that in physics, of course, results are much more precise. Quantum physicists can make predictions that are precise to several, in fact many, decimal places. In psychology, that's simply not possible. But the reason for that is not that psychologists are incompetent, or don't know what they're doing, or are sloppy, or anything like that. It's just that psychologists, like evolutionary biologists or ecologists, study systems that are much more complex than subatomic particles.

[00:08:20]

They study systems over which you never have the degree of control during experiments that a physicist in a particle accelerator would have. So the replicability of experiments, the reliability of experiments, and therefore the cumulativeness of information, is not really the difference between soft and hard sciences. The major difference seems to be, to take psychology as a textbook example, that there is no theoretical cumulativeness. There is empirical cumulativeness, but not theoretical.

[00:08:55]

In other words, the replicability you're talking about is just in the results we get when we measure things. We're not making theoretical conclusions there.

[00:09:03]

Exactly. Psychology is much weaker in the area of theoretical advancement, and so is sociology. There doesn't seem to be an overarching theory in psychology similar to what you find in physics with, say, general relativity or quantum mechanics, or even in evolutionary biology, where the Darwinian theory is the overarching conceptual framework that informs and organizes all of biological knowledge. There's nothing like that in psychology or in sociology.

[00:09:31]

There are a lot of interesting things that we know, and they're repeatable and they make sense, and yet there is no overarching theory. Now, you also mentioned, as part of the same group of disciplines, economics. And economics is a different case; in some sense it's almost diametrically opposite to psychology. That is, if we're talking about classical economics, not what's called these days behavioral economics, which is an emergent way of doing economics but still a minority one.

[00:10:02]

Behavioral economics does deal with the vagaries of human psychology. But classical economics is based on rational agent theory, this idea that people always make the maximally informed, maximally rational decision. That's a strongly theoretically informed approach, unlike psychology. I mean, there is a theoretical background there, but it's empirically a disaster. If you actually make predictions based on optimization theory and rational decision-making theory...

[00:10:32]

...you get stuff that doesn't square at all with the real world as it works, because, of course, not surprisingly, human beings are neither optimally informed nor, in fact, anywhere close to optimally rational.

[00:10:45]

Right. Well, which is why behavioral economics was developed. Exactly. So it sounds like there are two sources of uncertainty in the soft sciences. Well, actually, there's one source of uncertainty in the hard sciences, which is measurement error: how close we can get with a direct measurement. And there's that in the soft sciences too, but there's also the additional uncertainty from the variability of what we're measuring. That's right. Based on all these other factors that we don't have in...

[00:11:14]

Right. In the simple cases of, say, particle physics.

[00:11:16]

Now, compare that with an actual pseudoscience, such as astrology. Right. In the case of astrology, there has been neither empirical nor theoretical cumulativeness for two thousand-plus years. We know the theory is fundamentally flawed because we know, for instance, that constellations are simply optical illusions: the stars that are members of a particular constellation happen to be, in fact, at very different distances from Earth. So they're not actually part of any physical group.

[00:11:43]

So the theory is flawed. And we've done the experiments; in fact, in the book I detail some of these experiments that have been done and published in major journals. Nature magazine, several years ago, published a major study of astrology where they gave certain personality profiles to a number of top astrologers and asked them to match the personality profiles with the horoscopes of the same people. And guess what? The matching success was exactly what you'd expect by random choice.
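The chance baseline in the matching test Massimo describes is easy to make concrete with a quick simulation. The numbers below (ten profiles, twenty thousand trials) are purely illustrative choices, not the actual design of the Nature study: if you pair n profiles with n horoscopes at random, you expect about one correct match on average, no matter how large n is.

```python
import random

# Simulate an "astrologer" pairing n personality profiles with n horoscopes
# purely at random, and count how many pairings land on the right person.
# Parameters are illustrative; they are not taken from the Nature study.

def average_correct_matches(n_profiles, trials=20_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        guess = list(range(n_profiles))
        rng.shuffle(guess)  # a random assignment of horoscopes to profiles
        # fixed points = profiles matched to their own horoscope
        total += sum(1 for i, g in enumerate(guess) if i == g)
    return total / trials

# A classic fact about random permutations: the expected number of fixed
# points is exactly 1, independent of n. So "chance performance" here
# means about one correct match out of ten.
print(average_correct_matches(10))
```

Which is why, as the discussion notes, matching at chance level means the astrologers contributed nothing beyond guessing.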

[00:12:11]

So you and I would do just as well as the top astrologers in the world, which, of course, should spell trouble for astrology as a discipline. So what makes astrology a pseudoscience is not the fact that there is no theory; there is, in fact, a theory of sorts. It's not the fact that there are no empirical data; there are, in fact, empirical data. It's that the theory is fundamentally flawed, the empirical data clearly disprove the broad claims, and yet people still keep practicing and believing.

[00:12:38]

And that's the classic definition of a pseudoscience.

[00:12:41]

All right.

[00:12:42]

So let's talk about another criterion that you discuss in the book, which is the criterion of falsifiability. And one example of a dubious science that you bring up is SETI, the Search for Extraterrestrial Intelligence.

[00:12:57]

And you say that one thing that calls it into question as a legitimate scientific enterprise is that it's not clear what evidence SETI could possibly obtain that would cause them to abandon the search, or conclude that it was a fruitless search.

[00:13:11]

And I was a little on the fence about this example, because what they're trying to do seemed less like testing a hypothesis about the world and more like trying to find a cure for a disease.

[00:13:27]

So maybe we should see SETI as more of a practice than an actual science. I mean, they're searching for something; they're not making a claim that something is true. So, for example, if we searched for a cure for Alzheimer's for a couple hundred years and couldn't find anything, that would be unfortunate.

[00:13:42]

But would that mean that we would have to conclude it wasn't a scientific endeavor?

[00:13:46]

No, but I do think there's a difference between the two cases. So, first of all, I don't actually claim in the book that SETI is anything like a pseudoscience. I consider it a borderline case, almost a science.

[00:14:00]

In fact, it's interesting that a few days ago I was interviewed by a radio show that is produced by the SETI Institute, and I had a great conversation with the host of their show. They produce a regular show for National Public Radio. He'd read the book. Yeah.

[00:14:14]

Apparently we still had a great conversation, even though I have my obvious qualms about SETI. But look, here's the thing about SETI.

[00:14:25]

SETI is clearly not testing a hypothesis in anything like the way a mainstream science does. I mean, there is a hypothesis there: the hypothesis that there are, out there, technological civilizations that are interested in communicating with us.

[00:14:38]

OK, and that hypothesis, of course, is provable, because the moment we do get an intelligent signal from one of those civilizations, well, that's it, we're done; the hypothesis has been confirmed. The problem, as you pointed out earlier, is that there is no way to really reject the hypothesis, because you could keep looking forever. There's always the possibility that the civilization in question is around the corner, or just went extinct, or is just about to start communicating, or you haven't looked in the right place, or we haven't looked at the right frequencies, and so on and so forth.

[00:15:08]

Right. Now, I think it is a worthwhile endeavor, within certain limits of funding. I mean, I wouldn't spend billions of dollars doing that, but it is certainly a worthwhile endeavor.

[00:15:19]

I think that is largely because the foundational idea is reasonable. I mean, after all, we do have one example of a technologically advanced civilization that wants to communicate with similar civilizations out there. So why not? It's certainly not a preposterous starting point, right?

[00:15:42]

But it's different from your example of searching for a cure for a disease, for the following reason, I think. First of all, we have a fairly well-developed theory of the spectrum of human diseases. We have germ theory; we have molecular biology, which informs us about why certain mutations cause certain kinds of diseases; we have developmental biology. I mean, there is a whole range of disciplines that give that medical research a very definite scientific grounding.

[00:16:15]

Now, it's still a matter of trial and error, but that trial and error is informed by quite a body of scientific knowledge. In the case of SETI, what I wanted to bring up in the book is that the theoretical foundations are actually fairly limited. Really, the only theoretical foundation specific to SETI is the famous Drake equation, which was proposed in the 1950s by Frank Drake, who was one of the originators of the entire endeavor.

[00:16:41]

And, you know, it's a nice idea. It's a simple equation that puts together a lot of parameters to give you an estimate of the number of technological civilizations in the galaxy. The problem with the Drake equation is, first of all, that as far as I know it's the only theoretical background of the entire endeavor.

[00:17:00]

And it's more than 60 years old. So we're talking about a science that has not produced much in the way of theoretical insights for the last 60 years. That's not good. The second thing is that several of those parameters are essentially next to impossible to estimate. The parameters include things like the number of stars in the galaxy.

[00:17:18]

Well, for that one we have a very good estimate. Then there's the number of planets orbiting those stars. Until recently, we had no idea of that number; we only had our own solar system as an example. But as it turns out, as I point out in the book, more than 150 extrasolar planets have now been discovered. So we're beginning to have an idea, at least in the neighborhood of our own star, of how many other planetary systems there are.

[00:17:43]

So those are numbers, and you can put some kind of estimate in there. But then there are numbers like the average lifetime of a technological civilization.

[00:17:52]

Well, there we don't even have an n equal to one, because the one technological civilization we do have, ours, thankfully hasn't ended yet. So we don't know how long we're going to last, and we have absolutely no idea what kind of number to attach to that parameter.
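For listeners who want the equation itself: the Drake equation just multiplies the parameters discussed above. A minimal sketch follows; every numeric value in it is a hypothetical placeholder chosen for illustration, not an estimate from this episode or the book.

```python
# Sketch of the Drake equation: N = R* * fp * ne * fl * fi * fc * L.
# All values passed in below are hypothetical placeholders for illustration.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of detectable technological civilizations.

    r_star: rate of star formation in the galaxy (stars/year)
    f_p: fraction of those stars with planetary systems
    n_e: habitable planets per planetary system
    f_l, f_i, f_c: fractions that develop life, intelligence,
        and detectable communication, respectively
    lifetime: years a civilization keeps transmitting -- the parameter
        for which, as noted above, we have no data at all
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# The huge spread between plausible-sounding guesses is the point:
# the equation organizes our ignorance rather than resolving it.
print(drake(10, 0.5, 2, 1.0, 0.1, 0.1, 10_000))   # optimistic guesses
print(drake(10, 0.5, 2, 0.01, 0.001, 0.01, 100))  # pessimistic guesses
```

Because the product is only as constrained as its loosest factor, wildly different answers come out of equally defensible inputs, which is exactly the worry about the lifetime parameter raised above.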

[00:18:06]

OK, so maybe it makes sense to detach SETI's search from the theoretical claims about the likelihood of that search yielding results. Now, we definitely can't conclude this discussion without some talk about the relationship between philosophy and science, because that's a big part of the book. Yes.

[00:18:23]

And so, as I understand it, your book was advocating for science and philosophy of science to occupy distinct but overlapping spheres. Right. It sounded like you were encouraging philosophy of science to not just work on describing how science works, but to prescribe, at least in some circumstances, how scientists should be conducting their research.

[00:18:46]

Is that about right? So the thing with philosophy of science is that it has several different objectives, and some of them have very little to do with the practice of science as far as scientists themselves are concerned; others, on the other hand, tend to overlap. So let's start with the ones that don't overlap. Famously, Steven Weinberg, the Nobel-winning physicist, several years ago wrote an essay called "Against Philosophy," in which he asked: when was the last time that a philosopher answered a scientific question?

[00:19:15]

And when I read that, I said, well, that's a stupid question. Philosophers are not in the business of answering scientific questions. That's what we've got scientists for.

[00:19:25]

But that's a misconception that is actually fairly popular among scientists: "what have you done for me lately?" is essentially the question. As I point out in the book, though, that's like complaining that the New York Yankees haven't won an NBA title yet. Well, they never will, because they don't play in the NBA. They play baseball, not basketball.

[00:19:44]

So it's important to understand that philosophy of science as a field has as its goal to understand how science works.

[00:19:55]

That's completely independent of helping scientists further their theories or discover empirical findings. The philosopher is somebody who looks at science from the outside and says: wow, there's this activity here that is tremendously efficient, it works very nicely, we get all sorts of interesting things out of it. How does it work? And why is it that other kinds of human endeavor don't work quite as well at doing similar things? Hence the demarcation problem.

[00:20:22]

Why is it that pseudoscience doesn't work as well as science does?

[00:20:26]

So that kind of philosophy, as far as I can tell, is entirely independent of the practice of science. From the point of view of a scientist, scientists really don't need to know anything about that kind of philosophy of science, unless they're genuinely curious about their own field. But there's no requirement there. The more interesting part, as far as scientists are concerned, comes in those areas where philosophy of science really becomes borderline with theoretical science.

[00:20:50]

And there are examples in theoretical evolutionary biology as well as in theoretical physics. Several papers have been published in the last few years, on the structure of evolutionary theory, on the concept of species in biology, for instance, or on certain consequences and interpretations of quantum mechanics, where you really cannot tell whether the author is a philosopher or a scientist unless you actually look at the departmental affiliation on the published paper.

[00:21:18]

It's hard to tell. And that to me is an example.

[00:21:21]

It's an interesting example because it is an area where we're talking not about straight scientific theory, but about the implications, the conceptual implications and conceptual basis, of certain scientific theories, like the different interpretations of quantum mechanics. One might think a scientific theory shouldn't need interpretation: it is what it is. And in fact, there is a school among physicists that subscribes to the idea "I'm not interested in interpretation," the "shut up and calculate" school of thought.

[00:21:53]

But if you're not of the "shut up and calculate" school of thought, if, like many physicists, you are actually interested in the interpretation and the implications, in terms of what one would almost call the metaphysics of quantum mechanics, then you already have one foot in philosophy. And that's where I think the collaboration between philosophers and physicists, or biologists, is appropriate. All right.

[00:22:15]

Let's wrap up this section of the Rationally Speaking podcast and move on to the rationally speaking picks.

[00:22:37]

Welcome back. Every episode, Julia and I pick a couple of our favorite books, movies, websites, or whatever tickles our rational fancy. Let's start with Julia's pick. Thanks, Massimo.

[00:22:48]

My pick is a book called Historians' Fallacies: Toward a Logic of Historical Thought, by David Hackett Fischer.

[00:22:55]

I'm only in the middle of it, but I'm really enjoying it. And I wanted to talk about it today because it fits in pretty nicely with our discussion of your book, Massimo. Each chapter is about a different type of fallacy that we frequently commit when we try to explain what happened, or how it happened, or why it happened in history.

[00:23:14]

And I don't know enough history to know how prevalent all of these fallacies are; there could be some straw men in here that don't really occur very often. But from the history I have studied, these really ring true. So, for example, the first chapter is called "Fallacies of Question Framing." An example of that would be the fallacy of the false dichotomy, and Fischer gives a long list of reputable papers that start off with a false dichotomy, like "The Abolitionists: Reformers or Fanatics?", "Jacksonian Democracy: Myth or Reality?", "Plato: Totalitarian or Democrat?", and so on.

[00:23:51]

And of course, when you're writing a paper like this, or reading one, you can consciously reject the either-or dichotomy, and plenty of people do say, well, you know, it's more nuanced than that, it was a combination, or it was somewhere in between. But even then, you're still thinking about the question in terms of those two options, when sometimes the better answer is none of the above.

[00:24:10]

Maybe that is not at all a useful dichotomy. But it doesn't make for a catchy title.

[00:24:14]

Yeah. And the part of this book I've read so far that I thought was most relevant to your book is Fischer's argument that a lot of the questions historians ask are just unanswerable questions, like: was such-and-such event inevitable? The classic example is: was the Civil War inevitable? Another example: there's a famous paper asking whether railroads were indispensable to the U.S. economy's development. So the paper looks at the capabilities of all the other forms of transportation that were available.

[00:24:47]

But as Fischer points out, all the evidence we have about the other forms of transportation at that time in the U.S. comes from a world in which there were railroads.

[00:24:55]

We just don't know what would have happened in the absence of railroads. So he concludes that these counterfactual questions are really metaphysical in nature, and that we shouldn't even be asking them.

[00:25:07]

And he acknowledges that this would narrow the scope of history. But in response to that, he gives a quote that I like, and so I'm going to close with it. He says: a rigorous attempt to purge history of metaphysics will serve to narrow historical inquiry. To those who protest that the result would be a little too narrow, one might repeat the words of Nelson Goodman: you may decry some of these scruples and protest that there are more things in heaven and earth than are dreamt of

[00:25:38]

in my philosophy. I am concerned, rather, that there should not be more things dreamt of in my philosophy than there are in heaven and

[00:25:44]

Earth. That's right. End quote. Yeah, and that's a general question, actually, even outside of history, as it pertains to the historical sciences. You know, one of the famous disputes in the last part of the 20th century in biology was between Richard Dawkins and Stephen Jay Gould, each of whom used the metaphor of rewinding the tape of life from the beginning of the history of life on Earth and figuring out what would happen. And Gould's perspective was that if you rewind the tape of life, completely different things would happen.

[00:26:16]

We wouldn't be here, and not even something like us would be. In fact, very likely there might be only bacteria on Earth.

[00:26:24]

Dawkins's thought experiment, on the other hand, was that if you rewind the tape of life and then push it forward, you would get pretty much the same kind of stuff: maybe not exactly human beings, but certainly some kind of vertebrate with a big brain, bipedal, and so on and so forth. Now, my question to both of them has always been: how the hell do you know? Right. Because we cannot, in fact, rewind the tape of life.

[00:26:46]

We can't do the experiment, and there are so many uncertainties. It's a typical example of what philosophers call underdetermination of theories by the data: the data are simply insufficient to discriminate among a large number of scenarios, and therefore there is not much sensible that you can say.

[00:27:01]

Right, which you talk about in regard to evolutionary psychology in your book.

[00:27:04]

That's right. Exactly. That's another example. And then, speaking of philosophy, my pick of the episode... Speaking of philosophy. Yes. Well, you know, this podcast is, I guess, about both science and philosophy, and inevitably so, I think. So my pick is something called Ask a Philosopher. The website is philosophypathways.com, and if you go into the questions section of that website, you can ask any question you like pertaining to philosophy.

[00:27:33]

And a professional philosopher will answer it. And if you are a professional philosopher, you can go there and answer questions. This is a project that was started in 1999 at the University of Sheffield, and it's been pretty successful: there's a constant stream, and a large archive, of questions. Some of them are more interesting than others, as you might imagine from this kind of operation. But for instance, one question, and I'm just looking at the ones that have been asked recently, is from somebody who wanted to understand the relationship between philosophy and good reasoning. You know, is philosophy, in fact, equivalent to rationality?

[00:28:10]

Is it the entirety of rational thinking, or is it a subset of it? The same question can be asked of science. A lot of people, in my opinion, make the mistake of equating science with rationality, while rationality is a much broader concept than science. And the same is true for philosophy: philosophy should not be equated with rationality either. Well, actually, in this case rational philosophy is a narrower part of what philosophers do, although I myself tend to be more inclined toward rational philosophy than toward other kinds of philosophy.

[00:28:40]

Another question is about the relationship between being a moral philosopher and being moral.

[00:28:46]

The question is: would you expect moral philosophers to be more moral than other people?

[00:28:51]

I think I've seen a study about that, actually, and the answer is no. No, exactly. That's a good question, and the answer is no; it's an empirical question. But, of course, the deep philosophical question is: well, how do they deal with that? How do they justify being moral philosophers if they're not moral themselves? And by the way, some moral philosophers actually do not just talk the talk, but walk the walk.

[00:29:10]

Peter Singer at Princeton University is the classic example. He's a consequentialist philosopher who is convinced that we have a moral duty to give away most of what we earn, because there's so much pain in the world, and he does, in fact, do that. Now, whether you agree with him and his philosophy is a different matter, but at least he's actually doing what he says one should do. Anyway, so the website is, let's see, Philosophy Pathways, and the section that is interesting...

[00:29:43]

I think it's Ask a Philosopher.

[00:29:45]

It does look interesting. And if that description alone wasn't enough to entice our listeners, I'm looking at the sidebar of topics right here. And according to the sidebar, there are 78 questions relating to sex.

[00:29:57]

So that's a sure winner. On that salacious note, this concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.

[00:30:15]

The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollak and recorded in the heart of Greenwich Village, New York. Our theme, "Truth," by Todd Rundgren, is used by permission. Thank you for listening.