
Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Massimo Pigliucci, and with me, as always, is my co-host, Julia Galef. Julia, what are we going to talk about today?


Today is another very special one-hour-long Q&A episode. Oh, yes, I look forward to this very much. It had been a while since we had done a Q&A, and we assumed our many listeners were, you know, about to explode with all of the stirred-up questions that they were just waiting to ask us.


So it was time. And we have a great crop of questions today from the readers of the Rationally Speaking blog and our fans of the Rationally Speaking podcast on Facebook and our various social networks. So let's dive right in. All right.


Let's start with a question from William Andrew, who wanted to know how much we think that works of fiction, like books or movies or TV shows, actually influence people's rationality and skepticism, and also their attitudes about rationality and skepticism. He says that he gets angry when there is a sort of stereotypical skeptic character who gets proven wrong in a particularly grating way, as they usually are in TV and movies. But he wonders how much we think those kinds of representations actually affect the way people view skepticism and rationality.


I think they do. There is actually some empirical evidence, going back a few years, that people who watch shows about the paranormal, even fictional shows about the paranormal, tend to be more prone to accept paranormal claims, unless there is a big disclaimer at the end of the show that says, you know, this was actually fiction, none of this is true, and all that sort of stuff.


So a disclaimer at, say, the beginning of the show doesn't have the same effect? I don't recall if that was the case. But the interesting thing is that a disclaimer does have an effect. Without a disclaimer, even if it's perfectly clear that what these people are watching is fiction, it still has an effect, at least temporarily, on their acceptance of paranormal beliefs. More generally, I think the power of fiction, of television, of literature and so on


in shaping the way we see the world is pretty clear.


Which is one of the reasons why, you know, I hated The X-Files, for instance, because, as good a show as it might have been,


the problem is that the so-called skeptic on that show was a bumbling idiot who essentially refused to see the evidence in front of her eyes, which is not my definition of skepticism, for sure.


And, you know, I don't know quantitatively how much influence that show actually had on people's beliefs, but it certainly didn't portray skeptics in a positive way at all.


Yeah, I have this talk that I give called the Straw Vulcan, which I believe I've mentioned on the podcast before.


But the gist is basically that the portrayals of rationality, of critical thinking, of skepticism in popular media are caricatures of rationality, just like a straw man is a caricature of your opponent's arguments. So the term, which was coined not by me but by TV Tropes, is the Straw Vulcan, because Spock, the Vulcan on Star Trek, is the archetypal example, the poster child, for this phenomenon.


And so, you know, the Straw Vulcan embodies various caricatured aspects of rationality which are not actually features of real rationality, like expecting everyone else to behave rationally when clearly that's not the case, and having no emotion, or trying not to have emotion, which plays into the old stereotype that there is somehow a trade-off between being rational and reasonable and having emotions, which is clearly not the case.


In fact, most of the research in neurobiology shows that a healthy human mind needs a balance between, you know, logical thinking and emotional reactions. So that trade-off is nowhere in the actual science, but it clearly plays into the stereotype.


Yeah, and I can't say with strong confidence that the causal arrow goes in the direction of people watching this stuff and that then affecting their actual views of skepticism and rationality, as opposed to these portrayals merely reflecting the way people already thought about skepticism and rationality. But based on, you know, more general evidence of the sort that you were describing, about how fiction shapes people's expectations of how the world works, I would be willing to bet on the causal arrow going at least to some degree in that direction.


Well, yeah.


And frankly, if we're talking about human sociology and psychology, I'd actually bet that there are reinforcing mechanisms going on there. I mean, this is true not just for skepticism in particular, but for the portrayal of pretty much anything on television or in novels, right? Shows presenting, you know, minorities or women or gays in a positive fashion emerged only at a certain point in recent history. And you can wonder whether they shaped the way in which society thinks about those issues and those situations, or vice versa, whether they were actually possible only once society had accepted certain kinds of, you know, alternative viewpoints.


And I think the answer is both. Some of these shows are iconic, I'm thinking of the Bill Cosby Show, for instance, early on. And they probably did have an effect on sort of normalizing situations that for most people were not normal. But at the same time, they would probably have been unthinkable, you know, a decade or two earlier, because the public was simply not ready.


Now, the question is, can you think of a positive fictional skeptic? Yeah.


So my favorite example of this is the fanfic Harry Potter and the Methods of Rationality, which has tens of thousands of very devoted fans who look forward to the update every week. It's by Eliezer Yudkowsky. And I...


So I just finished doing, I guess, about 90 interviews of applicants for this, like, rationality boot camp that I'm going to be an instructor at in the upcoming weeks.


And one of the questions that I... Do you actually wear boots, and are you in a camp, or is it just metaphorical? Uh, just this one.


I'm not gonna touch that. OK. So, yes, one of the questions that I asked people was how they had gotten interested in learning about rationality, and probably at least a quarter of them got interested because of reading Harry Potter and the Methods of Rationality.


They'd never really engaged with ideas of skepticism or critical thinking before.


But this book was just so engaging and compelling, and it was just this completely new and revolutionary example of a character who gets ahead, who, like, succeeds because of his use of rational thinking techniques.


Like, I think I've used this as a pick before. I've certainly blogged about it.


But it's just, like, so cleverly done. I mean, the conceit is: what if Harry Potter were actually a rationalist who used, like, his intellectual curiosity and the scientific method and Bayesian reasoning to navigate this world in which he finds himself, this magic world? And so, for one thing, he wouldn't be Harry Potter.


But anyway, he's, like, a person. He's... yeah. I mean, it's true that the original Harry Potter story becomes sort of less and less relevant to what he is actually doing as he writes the story. But it is kind of a cool background against which to set the story.


And the great thing about the way rationality is portrayed in Harry Potter and the Methods of Rationality is that you, as the reader, actually see the process that Harry goes through when he figures something out. Like, you see him figuring out the rules by which magic works in this world, and solving mysteries like: why are some people born with magical ability and others not? He actually thinks to himself, OK, what are the possible hypotheses?


What evidence could I collect that would help me distinguish between these hypotheses? And he goes and collects it and figures it out. As opposed to, for example, Sherlock Holmes, who is often touted as an example of, you know, a rationalist. And here we go, because he's my favorite example, or one of my favorite examples.


But yes, go ahead. So I...


I have my own... I mean, I like Sherlock Holmes, but I have a number of complaints about him.


But one thing about the... The one with Robert Downey Jr. naked in bed? Or... wait, you didn't see the second movie?


I apparently did not, mistakenly. Well, you should go and check it out tonight. So we're not talking about that one, obviously; we're talking about the original series. Oh, yeah. Well, OK.


So what I wanted to complain about right now is that Sherlock Holmes solves mysteries using information that the audience does not have access to, essentially, like, stuff that the audience basically could not have figured out.


Encyclopedia Brown is like the kid version of this, where Encyclopedia solves cases using, like, esoteric trivia, or, you know, his incredible powers to notice tiny details that the audience couldn't have used. So you're not really privy to, you know, the reasoning method there.


It's just this genius who knows a lot of stuff that comes in handy.


But the other thing that I would like to complain about regarding Sherlock Holmes, on this topic, is his lack of probabilistic reasoning. Like, he basically just tries to use pure deduction. So he rules out hypotheses in this very sort of caricatured Popperian way. And there's actually this famous Sherlock Holmes quote where he says: once you have ruled out the impossible, whatever is left must be the answer, no matter how improbable it is. This is an approach that's called strong inference.


And it's actually a type of inductive reasoning, but you have to be certain that you are correct about having ruled out those things as being impossible.


Right. And strong inference actually works in only very specific domains. It works, and has worked historically pretty well, in fundamental physics and chemistry, but in very few other things. However, I think you may be talking more about sort of the more recent incarnations of Sherlock Holmes, in television series and things like that. I actually contributed a chapter to a forthcoming book called The Philosophy of Sherlock Holmes.


Oh, wow. Really? Yeah, really. It's all about, you know, obviously, logical reasoning and basic logic and inferential procedures and so on. And one of the points that I actually make in the essay, in preparation for which I reread the entire canon... The entire thing? That was a huge sacrifice on your part.


I can't tell you how much of a sacrifice that was... not.


But anyway, so I read all the novels, of which there are only four, and then all the short stories. And as it turns out, Sherlock Holmes uses a variety of methods, including something very close to probabilistic inference, except that he does so in different ways in different stories. And so there really isn't a Sherlock Holmes method that is consistent throughout: sometimes he uses, you know, simple induction, sometimes he uses inference to the best explanation, and sometimes he uses deductive approaches.


So it really is more of a whatever-works kind of approach, which makes sense for an investigator.


Now, my other favorite character, or show, I guess, about skepticism is, of course, Scooby-Doo.


Oh, yes. And actually, the original commenter brought that up as an example. And I never actually watched Scooby-Doo.


Oh, you missed something. My parents wouldn't let me. Tonight, go and check out Scooby-Doo, at least a couple of episodes, and the second Sherlock Holmes, with the naked scene with Robert Downey Jr.


I appreciate how you look out for me. But yeah, I have heard good things about Scooby-Doo, so we should probably add that to our sadly short but hopefully growing list of positive portrayals of rationalism and skepticism in fiction. While we're on the subject of ways to evaluate evidence and to test your theories about the world...


Let's zoom out a little bit and take a question from Scott D., who asked: what is evidence? What makes an observation evidence that supports a particular conclusion, rather than simply an unrelated bit of trivia?


So I'll give you my take on this, and you can tell me if you agree. That's a very good question.


It's a very good point, as a matter of fact. Just by coincidence, one of my students asked me the same question today in the class I'm teaching at CUNY.


So you are so ready for each of these questions.


Kind of. So, I would say that this question has, at least in its formal version, a very precise and clear answer. Something is evidence for a theory if you would have been more likely to see that evidence if your theory were true, compared to how likely you would have been to see that evidence if your theory were false. Bayesian of you, that is.


Yeah. So I was sneaking that in without calling it Bayesian. But yeah, that is just based on the clearly correct, well, probably correct, Bayes' theorem.


And the nice thing about that formula is that it doesn't only tell you whether an observation is evidence for or against a theory.


It allows you to quantify the strength of the evidence too, and to do so by comparing how much more likely that evidence, that observation, would have been given your theory versus not your theory.
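This likelihood-ratio idea can be sketched in a few lines of code. All the numbers below are invented for illustration (a made-up diagnostic-test scenario), not anything from the episode:

```python
# Toy sketch of the Bayesian definition of evidence discussed above.
# All probabilities here are made-up illustrations.

def likelihood_ratio(p_obs_given_h, p_obs_given_not_h):
    """Strength of evidence: how much more likely the observation is
    if the hypothesis is true than if it is false."""
    return p_obs_given_h / p_obs_given_not_h

def update_odds(prior_odds, lr):
    """Bayes' theorem in odds form: posterior odds = prior odds * LR."""
    return prior_odds * lr

# Suppose a symptom shows up in 80% of patients with a disease,
# but in only 10% of patients without it.
lr = likelihood_ratio(0.80, 0.10)

prior_odds = 0.01 / 0.99            # the disease is rare a priori
posterior_odds = update_odds(prior_odds, lr)
posterior_prob = posterior_odds / (1 + posterior_odds)

print(round(lr, 2))             # 8.0: observing the symptom is evidence FOR
print(round(posterior_prob, 3)) # 0.075: belief rises, but stays modest
```

Note that the same ratio both classifies the observation (evidence for, if greater than 1; against, if less) and quantifies its strength, which is the point being made here.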


And to some degree we intuitively reason like this, but there are some major exceptions.


And keeping this definition of evidence in mind has allowed me, not completely, but to a large extent, to correct for some of the systematic errors that I've noticed in my naive, intuitive way of updating my beliefs based on evidence.


So I think that one of the common ways that people misjudge what counts as evidence is that they intuitively ask themselves: is this observation consistent with my sort of working theory? Like, is it possible to explain what I'm seeing with my theory? And if so, they count it as evidence for their theory. Whereas the actual question should be: is this observation more likely under my theory versus another theory? So, for example, a kind of stark violation of this Bayesian definition of evidence: there was an infamous quote


from a congressman back in World War Two, who was arguing for the internment of Japanese Americans because they might be spies for Japan. And when it was pointed out to him that there was no evidence of subterfuge on the part of Japanese Americans at all, no evidence, he said: that is even stronger evidence that there is actually a conspiracy, because they're hiding it so well.


That's typical reasoning for conspiracy theorists. Yeah, exactly. Like: this is proof that clearly there is a conspiracy.


Exactly. So, you know, he was saying: I can explain this result using my theory that there's a big conspiracy of Japanese Americans conspiring against the U.S. But the real question is: are you more likely to see no evidence of a conspiracy if there is a conspiracy, or if there isn't?
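As a toy illustration of that last question (the probabilities here are invented purely to show the direction of the inequality): if seeing no signs of a conspiracy is more likely when there is no conspiracy, then "no evidence" is itself evidence against one.

```python
# Invented, illustrative probabilities; only the comparison matters.
p_no_signs_if_conspiracy = 0.30  # even well-hidden conspiracies tend to leak
p_no_signs_if_none = 0.95        # with no conspiracy, "no signs" is the norm

lr = p_no_signs_if_conspiracy / p_no_signs_if_none
print(round(lr, 2))  # 0.32: a ratio below 1, so the observation
                     # lowers, not raises, the odds of a conspiracy
print(lr < 1)        # True
```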




Yeah, right. So what counts as evidence is actually context dependent, of course, to some extent. And I'm going to try to put forth some of the problems with this by recalling a joke among philosophers about evidence and inference, which is this. There are these three or four philosophers (I don't remember the exact number) who are traveling through New Zealand, and they're doing this by train. And out of the window of the train, they see a black sheep.


And so one of them says, oh, clearly, New Zealand sheep are black. Another one says, well, clearly some New Zealand sheep are black. The third one says, clearly that New Zealand sheep is black. And the fourth one says, clearly at least half of that New Zealand sheep is black.


So the same exact evidence, of course, counted toward different hypotheses in different ways.


And the idea there is that facts are facts, or rather, facts are evidence, only if they are seen with respect to a particular hypothesis.


Something counts as evidence only within the context of a particular hypothesis.


Is this actually... Yeah, sorry, if I just used the word evidence in isolation like that, I should have said evidence for or against. For or against.


In fact, there is a famous quote by Darwin, who wrote to a friend of his that he was puzzled by the fact that so many people don't seem to understand that facts are facts only in the light of a particular theory; that otherwise the facts become a random collection of things about the universe. They only acquire sense as evidence in the context of a theory. And the interesting part about that quote by Darwin is that it came in the context of a huge debate that was happening at the time, in the middle of the 19th century, between John Stuart Mill and William Whewell.


Whewell and Mill were discussing the meaning and the applications of induction, and so they were basically laying the ground for the beginning of epistemology and philosophy of science. William Whewell, by the way, was the guy who coined the term "scientist." "Scientist" comes from Whewell; it's a mid-19th-century term.


And interestingly, it was coined in analogy with "artist," because there was no word for what these people were doing, and he said, well, let's call them scientists.


And so Whewell is the guy who came up with the idea of consilience, or inference to the best explanation, which, as I mentioned earlier, is actually a lot of what Sherlock Holmes does, although Conan Doyle erroneously calls it deduction. Holmes never really uses deduction; what he uses is more likely a type of consilience, an inference to the best explanation.


And there was this huge debate. And the funny thing is that Whewell, who was an influence on Darwin (they were in correspondence), was convinced, at least initially, that Darwin was just not doing science, because he wasn't using inference to the best explanation in writing The Origin of Species.


And Darwin was miffed by this, because he thought that that's exactly what he was doing: that he was collecting all these kinds of evidence in service of a particular theory, and that all that evidence was pointing in the direction of the theory of evolution by natural selection. In other words, he was absolutely convinced that he was doing inference to the best explanation. As it turns out, modern philosophers and historians of science agree with him; he had the correct interpretation.


What was the argument for Darwin not doing that? That Darwin simply collected a bunch of facts, but because he already had a theory, essentially, he was cherry-picking things. And Darwin said that historically that's not the way it went. He started collecting facts about natural history, those started pointing to certain phenomena, and then, once he had a theory, he enlarged the search and he reinterpreted the evidence and the facts in light of the theory. And he was frustrated by the fact that Whewell kept saying, you know, you shouldn't do this, because you shouldn't start with the theory.


Interestingly, Sherlock Holmes has several quotes where he contradicts himself on this. In some cases he says that you never start with a theory, because that sort of biases the way you look at the evidence. And in another case, he says that you actually have to start out with some hypothesis, because otherwise you're blind; you don't know where you're going.


Yeah, I mean, I guess you could sort of go back to the evidence later, once you, like, think of a theory.


But, as you were saying, the evidence, our observations, isn't useful to you unless you're using it to update the credence that you put in this theory or that theory.




I mean, there's an organic relationship between evidence and theories, of course. So, you know, on the one hand, it's true that evidence is not evidence unless it is in service of, or against, a particular theory, as Darwin said. But it's also true that theories emerge more or less organically from the fact that you observe certain things, certain patterns, in the world that you want to make sense of.


Yeah, and one final point to make on that: a huge number of observations can constitute at least some bit of evidence about a huge number of possible theories.


So the typical refrain that if something isn't statistically significant, then it's not evidence for a theory... I mean, it may not be publishable evidence, but that doesn't mean that it's not evidence at all; it doesn't mean it isn't more likely that you would get that suggestive result under one theory than under another.


It's just very weak. Yeah. Yeah. So basically, thinking about evidence in this way, in this very Bayesian, grayscale way, allows you to avoid some of the sort of binary-thinking mistakes. And I think you won't get any argument from me on that. So that actually...


So that leads pretty naturally into a question from Remy, who asks what our opinion is on the debate around statistical tests, sort of the standard set of statistical tests that are taught to students and the kind that are used in academic journals.


So there's like a lot of different things we could talk about here.


But I think you're probably asking about the kind of hypothesis testing in which you get a p-value. Like, you test some null hypothesis, which is essentially that there's no effect, that the effect you're investigating isn't actually real. And you get a set of evidence, and you are able to calculate, based on that evidence and your sample size, a p-value, which represents how likely it would be to get evidence as extreme or more extreme than you got if, in fact, the null hypothesis were true.


In other words, if there were no effect of the sort that you're investigating. And the jury... I'm sorry, I think the jury is definitely in at this point, although some of our listeners may disagree on this one.
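The quantity described here can be computed directly. A sketch, using an invented example (10 binary trials, a null hypothesis of a 50 percent success rate, 8 successes observed); none of these numbers come from the episode:

```python
from math import comb

# Sketch of the p-value just described: the probability, under the null
# hypothesis, of data as extreme or more extreme than what was observed.

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def p_value_two_sided(k_obs, n, p_null=0.5):
    """Sum the probability of every outcome at least as far from the
    null expectation as the observed count (symmetric null, p = 0.5)."""
    expected = n * p_null
    dev = abs(k_obs - expected)
    return sum(binom_pmf(k, n, p_null)
               for k in range(n + 1)
               if abs(k - expected) >= dev)

p = p_value_two_sided(8, 10)
print(round(p, 4))   # 0.1094: above the conventional 0.05 threshold
```

Note that this number says nothing about how likely the data would be under any specific alternative, which is the contrast with the Bayesian picture drawn throughout this discussion.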


And so, if we cast the debate, it's in terms of a sort of standard frequentist approach to statistical analysis, based on p-values and the like, versus Bayesianism, that sort of stuff. Frequentism is the school of statistics in which these kinds of tests fall.




And it's called frequentism because, within that school, probabilities are measured as frequencies of events, as opposed to, say, a Bayesian approach, where a probability is an estimate of your degree of belief, essentially, in a particular hypothesis.


So I think the evidence is definitely in, in this sense. The Bayesian approach is definitely superior, both, I think, in theory and in practice. And there's been a huge shift, although gradual, in several scientific disciplines toward Bayesianism. So, for instance, until 15 or 20 years ago, pretty much nobody was doing Bayesian analysis in biology, but now almost all phylogenetic inference, so the reconstruction of relationships among species, is done using Bayesian inference.


A lot of psychological research is done that way, and model comparisons in ecology and evolutionary biology are done using Bayesian inference. Of course, there's a huge burgeoning field of decision-making theory that uses Bayesian or Bayesian-like approaches, and that sort of stuff. Now, there is an interesting question about why Bayesian analysis didn't emerge much earlier, since it has been around, after all, for a couple of centuries. And one of the answers to that, at least historically speaking, is very interesting.


It is that Bayesian analysis was actually used heavily during World War Two and during the Cold War, for instance, by the groups decrypting messages from the enemy. But that work was classified. And so there were a lot of top statisticians, people who did that work and realized that the Bayesian approach was superior for those kinds of problems, who were simply prohibited from publishing their work for decades.


Oh, I suppose. Yeah. So within sort of academic circles, the Bayesian approach came in and went out, and came in and went out, until, I think it's fair to say, over the last 20 years or so, it's become more and more established.


Now, in a book that I wrote a few years ago with Jonathan Kaplan, called Making Sense of Evolution, we actually have a whole chapter on hypothesis testing, in particular the idea of a null hypothesis and the p-values that go with it. And we're pretty critical of it, for a variety of reasons. The main one is that the null hypothesis tends to have a built-in advantage. That is, the two approaches basically are: either you start with a null hypothesis, which is that nothing happened, essentially, against the alternative hypothesis,


which is that something happened, where you don't have to specify that something (it's a pretty generic sort of approach), versus a model-comparison approach, where you have a bunch of different hypotheses and you're actually running them against each other for how well they explain the data, or account for the data.


So anyway, there is a very funny, very well-articulated paper that deals with the deficits of the p-value approach. And I don't remember the author of the article, but it's easy to find because of the title. The title of the article is "The Earth Is Round (p < .05)." And this was part of a discussion at the time, the article is from the late 70s, a discussion that was going on in particular in the social science literature about the merits of the two approaches.


And I think that even now, even in the social sciences, a lot of people have actually made the switch to a sort of Bayesian approach to model comparison.


Yes. Obviously, as people can tell by now, I'm with you, Massimo, on the sort of formal superiority of the Bayesian method.


But, I mean, I think there are some fields in which it is implementable, but for most hypotheses it's really hard to know how to actually carry out Bayesian analyses.


So, what I was saying earlier about the definition of evidence in this framework: when you're trying to quantify the strength of the evidence in front of you, you're comparing how likely it would be if your hypothesis were true versus how likely it would be if your hypothesis weren't true.


But that phrase, "if your hypothesis weren't true," encompasses, like, an unimaginable number of possible other hypotheses.


So, to take some really sort of simple and straightforward example: like, you flip a coin ten times and you get eight heads. So your null hypothesis is that the coin is fair.


And you want to know: how likely is it that I would get these eight out of ten heads if the coin were fair, as opposed to if the coin weren't fair? Well, "if the coin weren't fair" encompasses a whole family of hypotheses.


That's right. Yeah.


Like, it could be biased in any number of ways.


It could have come up heads, you know, on average 60 percent of the time, or it could come up heads on average 70 percent of the time, and so forth. And so you have to come up with some sort of distribution over how likely you think various degrees of bias for the coin are, in order to answer how likely this string of heads is


to come up under, you know, the non-null hypothesis. And that's just a simple example, where it's easy to quantify at least the spectrum of ways that your null hypothesis of a fair coin could be wrong.
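That step, choosing a distribution over the possible biases, is exactly where the judgment call enters. A sketch of the coin example, assuming (as one arbitrary, illustrative choice) a uniform prior over the unknown bias:

```python
from math import comb

# Coin example from the discussion: 8 heads in 10 flips, comparing
# H0 "fair coin" against H1 "biased coin with unknown bias".
# For H1 a prior over the bias must be chosen; a uniform prior is one
# assumed, illustrative choice. A different prior gives a different answer.

n, k = 10, 8

# P(data | fair coin)
p_fair = comb(n, k) * 0.5**n

# P(data | biased, uniform prior on the bias): average the binomial
# probability over all possible biases, approximated by a fine grid.
# (With a uniform prior, this marginal likelihood is exactly 1/(n+1).)
grid = [i / 10000 for i in range(1, 10000)]
p_biased = sum(comb(n, k) * t**k * (1 - t)**(n - k) for t in grid) / len(grid)

bayes_factor = p_biased / p_fair
print(round(p_fair, 4))        # 0.0439
print(round(p_biased, 4))      # 0.0909, i.e. about 1/11
print(round(bayes_factor, 2))  # 2.07: only mild evidence for a biased coin
```

So 8 heads in 10 flips favors "biased" over "fair" only by a factor of about two, far weaker than the p-value framing might suggest, and the number depends on the assumed prior over biases.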


But out, you know, in the wild, with the kinds of hypotheses that we're actually interested in, it's really hard to know how to articulate all the different ways that your hypothesis could be wrong, all the different alternatives to your hypothesis.


True. But that also brings up another issue with sort of the Bayesian approach, which was one of the reasons it actually was, and still is to some extent, resisted by some people in some statistical circles, which is the idea of priors and where you get them. So the priors are, you know, the a priori probabilities that you attach to each particular hypothesis before the next round of data is in. And you're supposed to modify, of course, the priors according to the data you get, obtaining the so-called posteriors, which are the updated probabilities for each hypothesis you're considering; and then you sort of restart the cycle, and you can use the posteriors as the priors for the next cycle.


Yeah, and the nice feature of Bayesian updating on evidence is that, with a few weird exceptions, no matter what priors you start with, if you update on evidence in a proper Bayesian way, you will converge. Right.


The problem is that sometimes that convergence can take a long time, obviously. And the major problem, which I think is a perfectly valid point to raise, is that only in some cases is it clear how to specify priors in an objective way. You can, you know, use certain distributions of frequencies, like in phylogenetics. Yeah.


In genetics, or in the coin-flipping case, for instance, you can use priors that are, you know, grounded in what you know about the dynamics of the phenomenon that is generating the data, and so on. But in other cases, you don't have objective priors, and so somebody uses what are called subjective priors. You know, basically, you attach a number, although it's understood to be vague,


not exactly a precise estimate, but you attach a number to your subjective belief that a particular hypothesis is true. Now, the good news is what you just said a minute ago: even if you start with subjective priors that are wildly off the mark, given enough rounds they will converge to the actual answer. But, of course, if you do start way off, it may take a large number of rounds.
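That convergence claim is easy to demonstrate numerically. A sketch with simulated coin flips (the coin's true bias, the grid of candidate hypotheses, and both priors are all invented for illustration): two agents who start with very different subjective priors, updating round after round as described above, end up with nearly the same posterior.

```python
import random

# Simulated data stream: flips of a coin with an (invented) true bias.
random.seed(0)
true_bias = 0.7
flips = [random.random() < true_bias for _ in range(2000)]

thetas = [i / 100 for i in range(1, 100)]   # candidate biases 0.01..0.99

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

# Agent A: roughly uniform prior over the bias.
# Agent B: a subjective prior heavily favoring a nearly fair coin.
prior_a = normalize([1.0 for _ in thetas])
prior_b = normalize([100.0 if abs(t - 0.5) < 0.05 else 0.01 for t in thetas])

def posterior(prior, data):
    """Update flip by flip: each round's posterior is the next prior."""
    post = list(prior)
    for heads in data:
        post = normalize([p * (t if heads else 1 - t)
                          for p, t in zip(post, thetas)])
    return post

mean_a = sum(t * p for t, p in zip(thetas, posterior(prior_a, flips)))
mean_b = sum(t * p for t, p in zip(thetas, posterior(prior_b, flips)))
print(round(mean_a, 2), round(mean_b, 2))  # both land near the true bias
```

With only a handful of flips instead of 2000, the two agents' estimates stay far apart, which is the "it may take a large number of rounds" caveat in action.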


And there's been all this discussion about this. Fisher, for instance, who was one of the most vehement opponents of the Bayesian approach throughout the 20th century, thought that it was simply unscientific to start with something subjective, to include subjectivity in statistical analysis. Well, actually, that is one reason I like the Bayesian approach: this idea that it makes sense to use subjective probabilities, because that's what a belief is.


It's a subjective estimate of how much you actually trust a particular hypothesis. And, of course, you're supposed to update it, you know, with evidence. And yes, there is no guarantee that you're going to converge on the answer in a usable time, but that's just what it means to be a human epistemic agent.


The problem with the frequentist approach, of course, is that it defines probabilities in terms of frequencies of events, and in all the interesting cases in a complex world, you just don't have enough events to estimate a frequency. In some cases, you have no events whatsoever.


You know, one of my favorite examples from the Cold War is that at some point a number of statisticians were asked by the US government to estimate the probability that a US bomber on patrol would lose an atomic bomb. It had never happened up to that point, so the frequentists had absolutely no answer. They had no way to even begin, because there was no frequency database to start working with.


The Bayesians went to work, and they used collateral knowledge. They started asking experts, engineers and physicists and so on: to begin with, how likely is it that an airplane of that particular model will have a malfunction? How likely is it, in your estimate, that there will be a malfunction of the cargo operation? And so on and so forth. And they came up with an estimate that actually predicted that within the next year or so an accident would happen, and nobody took them seriously until the accident.


The accident actually did happen, only a few months after the report was sent back to the US government. This was off the coast of Spain, and it turns out that bomb hasn't been found yet, which is quite interesting.


So that's the power of the approach. There is subjectivity, but there is power in it, because the real world is complicated and sometimes a subjective approach is the best you can do. It's better than nothing; otherwise you'd have no idea of the likelihood of losing an atomic bomb, for instance.
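A sketch of how such an elicitation might combine component estimates into a yearly probability. The numbers below are entirely made up for illustration; the episode doesn't give the figures from the actual report.

```python
# Hypothetical expert-elicited component probabilities (illustrative only).
p_malfunction = 1e-3     # serious malfunction on any given flight
p_release = 0.2          # bomb released, given such a malfunction
flights_per_year = 2500  # assumed number of patrol flights in a year

# Chance that a single flight loses a bomb.
p_per_flight = p_malfunction * p_release

# Chance of at least one loss in a year, treating flights as independent trials:
# 1 minus the probability that every flight goes fine.
p_yearly = 1 - (1 - p_per_flight) ** flights_per_year
print(round(p_yearly, 2))  # roughly 0.39 with these made-up inputs
```

The point is that even with no prior accidents to count, multiplying out elicited component probabilities yields a usable estimate, which is exactly what a frequency-only definition of probability cannot provide here.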


Yeah, I think there's actually one really simple and hopeful improvement that we can make on the standard way that hypothesis tests are typically reported in journals. I was talking a little earlier about this fallacy of binary thinking, of something either being evidence for a theory or not being evidence for a theory. And this manifests in the way that people report p-values in their studies. A lot of the time they don't actually tell you what the p-value is.


They just say "p less than .05", for example. And .05 is sort of the standard cutoff: something is called significant if its p-value is less than 0.05.


So this often results in really absurd outcomes. For example, in one study I read recently, the authors' prediction was that if their theory were true, they would see an effect in population A but not in population B; I won't go into all the details. And in population A they did see an effect, with p less than 0.05, and in population B the p-value was not less than 0.05.


And so they reported this as, you know, clearly evidence for their theory. But then you actually look at the p-values buried in an appendix.


There, p was 0.04 and 0.06. And so they've got roughly the same amount of evidence for and against their theory.


But because the two pieces of evidence fell just on opposite sides of that arbitrarily decided threshold of 0.05, they got to claim that, yep, we got evidence for our theory.
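To make the point concrete, here is a quick sketch with my own numbers, not the ones from the study being described: two z statistics of almost the same size yield p-values on opposite sides of 0.05, even though the evidence they represent is nearly identical.

```python
from math import erf, sqrt

def two_sided_p(z):
    """Two-sided p-value for a z statistic under a standard normal null."""
    cdf = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    return 2 * (1 - cdf)

p_a = two_sided_p(2.02)  # about 0.043: declared "significant"
p_b = two_sided_p(1.88)  # about 0.060: declared "not significant"

# Nearly the same strength of evidence, opposite verdicts under the cutoff.
print(round(p_a, 3), round(p_b, 3))
```

Reporting the actual p-values, rather than just "p < .05" versus "n.s.", is the simple improvement being suggested: it lets the reader see how close the two results really are.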


That's a very good example.


Yeah. So let's go back to movies and TV for a little while.


This question comes from Geotech, who asks: "What are your thoughts on time travel? Movies and TV incorporate it into their stories all the time, but is something missing from my brain when it can't find time travel anything but illogical?" I suspect that at some point in your reading of the various "philosophy and [fill in the blank with a certain sci-fi franchise]" books, you might have come across something that could help Geotech out.


Well, I just reviewed for Philosophy Now a collection of essays on the philosophy of Doctor Who, which obviously is all about time travel.


And I actually recommend it; it's a really good book. It touches on a variety of philosophical issues, including identity and ethics and so on. But of course there is an entire section on the philosophy and physics of time travel.


And yeah, it is actually a complicated issue. As it turns out, time travel is logically possible, and possibly physically possible, meaning that physicists themselves don't seem to agree about whether it is possible for macroscopic objects to engage in time travel. Depending on how you read certain aspects of the general theory of relativity, you may or may not be able to do time travel, depending on what the actual structure of spacetime is.


There is no logical impossibility, however. Philosophers tend to distinguish at least three different kinds of possibility: there is logical possibility, there's physical possibility, and then there's contingent possibility. Logical possibility is the most encompassing of them all. So something can be logically possible but not physically possible because, for instance, it violates the laws of physics. There's nothing in logic that precludes the laws of physics being different, but they are what they are.


So there are certain things that are logically possible but physically impossible. And in the same way, there are things that are physically possible but not contingently possible, just because of the way historical events turned out. For instance, I'm in New York right now. I could be in Rome; it's physically possible and certainly logically possible, but it's not contingently possible, because I happen to be in New York and there's no way for an instant transfer to Rome right now.


So, like, if you were to spontaneously disappear, all of your matter just vanishing, that is logically possible but not physically possible. Correct.


And if you were to be in two places at once, that is logically impossible.


And therefore also physically impossible; these are sort of nested. And if you were to be in California right now, that is contingently impossible, because you're in New York right now. But it wouldn't have been physically impossible for you to be in California if history had been different up till now. That's exactly right.


Now, so, time travel is logically possible. That doesn't mean there are no logical problems with time travel. Perhaps the best known is the grandfather paradox: what if I were actually able to go back in time and, either by accident or willfully, kill my grandfather? Then what would happen to me, since obviously I am possible only because my grandfather was in fact alive, and so on and so forth.


Now, there are several solutions to this kind of problem. One solution is that perhaps it would somehow be actually impossible for you to kill your grandfather, so that you keep failing. You can try, but you keep failing, because otherwise you would interfere with the way things developed temporally. Not many philosophers find that answer particularly compelling. But it is possible, for instance, that you do kill your grandfather, and you generate an alternative reality at that point.


And in that alternative reality, as it turns out, you're not born; but in your original reality, in which you were able to go back in time, you were in fact born. So essentially now you have splitting universes, sort of a multiverse or parallel-universe kind of situation. That's more palatable, for instance, but it's not clear whether you can actually do that sort of thing physically, as opposed to logically.


Again, there are some interesting puzzles. Another one is backward causality. The backward causality idea is this: what if I haven't built a time machine, but I go back in time and tell myself how to build a time machine, and that's how I get a time machine to begin with? Well, now there seems to be a problem there.


Right, right. Where did the information come from? Right. Exactly.


So those kinds of things depend, again, on the structure of spacetime.


If time is sort of a linear thing, as opposed to a closed loop, then certain things can or cannot happen. So the short answer to the question is: no, time travel does make sense, but it's really tricky. And to do a good time-travel science fiction story is really tricky too. Now, I do love Doctor Who, and I think it's very clever, but sometimes they get twisted into a logical pretzel because of the situations in which they get themselves.


For instance, it's apparent throughout several seasons of the Doctor Who series that there are some rules that apply to time travel. One is that there are certain things you can change if you go back in time, and then there are what the Doctor calls fixed points: things that simply cannot be changed no matter how much you try, because they would cause too much of a disruption in the fabric of the universe, whatever that means.


But it makes for a good plot, obviously, because then there are constraints on what the Doctor can do. Otherwise he would essentially be able to use spacetime as a playground, as if he were a god, changing things all the time.


Right. So the fourth kind of impossibility is narrative impossibility. Narrative impossibility, that's right.


There may be a lot of controversy over what kinds of time travel are actually logically or physically possible.


But some of my favorite examples from movies are clearly not how any sane version of time travel could possibly work.


And I sort of collect these, because I find them so delightful. One example probably many people are familiar with is from Back to the Future, where Marty McFly goes back in time and accidentally disrupts the chain of events that had caused his father and mother to meet and then give birth to him. Right.


He's carrying this photo of his family, and he notices that he and his siblings are starting to fade away and disappear from the photo as his chances of remedying the situation shrink.


That's a version of the grandfather paradox, right?


It's the same logical problem. Right.


But the specific problem I was alluding to here is the idea that in the world Marty has accidentally created, someone just took a photo of an empty backyard with no children in it.


I think there was someone, maybe the father or mother, in the photo who, if Marty hadn't succeeded in fixing the disruption he caused, would have just had their arm around...


Nothing at all. Yeah, that was the implication. And then my other favorite example comes from... I don't even remember why I watched this movie. Maybe I was sick or something.


It's called Kate and Leopold, with Meg Ryan and Hugh Jackman. Well, maybe that's because Hugh Jackman was in it. But no, I think I was just like, OK, fine.


He plays a 19th-century aristocrat who accidentally falls into a time portal, ends up in modern-day New York, and falls in love with Meg Ryan. The gist is that if he had continued to live in his own time, he would have become an inventor of various important things, and one of the things his inventions would have made possible was the elevator, among many other modern conveniences. So Hugh Jackman gets accidentally yanked out of his own time before he can become this inventor and ends up in modern times.


One of the things that immediately happens is that all of the elevators in New York City, and presumably in the world, suddenly break and plummet to the bottom of their shafts. The implication being that in this world, in which the inventor never existed, someone took the trouble to build thousands and thousands and thousands of non-working elevator shafts, with elevator boxes in them.


That didn't work.


Yeah, I think the conclusion from these examples is that if you want to do a movie or a TV show about time travel, you'd better get a good physicist and a good philosopher as consultants, because otherwise you're going to get into trouble very, very quickly.


I love that idea of a possible job market opportunity for philosophy grads as consultants to science fiction movies.


That's actually an excellent idea. Let's move on to a question. This question was actually debated among a number of commenters. I don't remember who initially posed it, but it got a lot of play on the blog; Dushan Chunder and Roy were among the people debating it. They were debating the question of how we decide how much blame to assign, or how much blame is deserved, by a victim who knowingly or carelessly contributed to their own victimization.


One example that was mentioned was women wearing revealing clothing and thereby increasing the chances of getting assaulted. Another example was someone flashing a lot of money around and thereby increasing the chances of getting robbed. And the reason I really wanted to address this question is that I think my own thinking about it was somewhat confused until relatively recently.


And what really helped me resolve the confusion was recognizing that the question itself as originally posed, "How much blame does such a victim deserve?", is not really logically well-formed.


You can instead disambiguate it into a set of logically well-formed questions that you can actually answer. One such question would be: how much does it increase your chances of getting victimized if you do X, Y, or Z?


And that is an empirical question, which does have an answer. Another question is not really empirical, but is just a decision that we collectively make as a society: how much do we want to legally hold someone responsible for what happens to them if they do X, Y, or Z?


Like, do we want to not punish robbers if the person they robbed flashed their money around?


Do we want to not punish rapists if a woman was dressed provocatively? Our answer is no, we do not want to have that policy in place.


Right. And then the third question you could ask is also an empirical one, but it's just an empirical question about how sympathetic you feel towards someone who was victimized after doing something that causally contributed to that victimization.


And, I would argue, there's not an objectively correct answer about how sympathetic to feel toward someone. Yeah.


And when people argue about how much blame someone deserves, it seems like they're implying that there is a right answer about how sympathetic you should feel. Or maybe... I'm not sure they know exactly what they're arguing about.


They might be arguing about the legal question, or they might be arguing about the question that I don't think is a real question, namely what the right amount of sympathy to have is.


I agree with your take. The problem with the question, which I think, by the way, was posed by Ian, is that there is an ambiguity in it, in particular an ambiguity in the word "blame". And you separated, I think very clearly, especially the first two meanings. The first distinction you made was: if by "blame" you mean how much you causally contributed to the event...


Well, then one could argue that, human beings being human beings, if you flash your money or wear a miniskirt or whatever it is, yes, you did contribute causally to what happened, because human beings react in certain ways, at least probabilistically.


Exactly right.


So in that sense, you are, quote unquote, to blame. But it's not moral blame.


It's a matter of causal efficacy.


The second meaning you were pointing out is really an ethical and moral one. Frankly, wearing miniskirts or flashing money is not illegal, and we don't consider it a moral deficit of somebody who does that sort of thing, at least most of us don't. Therefore the moral blame is entirely on the perpetrator of the crime and not on the victim.


So the victim is causally responsible to some extent, but not morally responsible. And the third aspect, which I think you made very clearly, concerns the emotional reaction. You're right, it's an empirical question, and I don't think there is necessarily a right answer, although I suspect that if people were to make that distinction between causal contribution and moral culpability, it would actually clarify even one's own feelings about the situation.


One of the examples Ian mentioned that I thought was interesting was the murder of aid workers by Afghan mobs, which was incited by Terry Jones burning the Koran. The idea there is that Terry Jones should have known that doing what he did would probably cause a certain kind of reaction by certain people. So in that sense he's causally responsible.


But since we don't think there is anything immoral or unethical, certainly not illegal, about burning a book, even if it happens to be somebody else's sacred book, then frankly the moral responsibility still rests squarely on the shoulders of the people who actually killed in response to that kind of action. You can make the same argument from a secular perspective, too. There was a similar situation when the famous Danish cartoons of the prophet were published.


And people have actually made the argument that the cartoonists were responsible, obviously indirectly, for the mayhem that followed. Again: causally, yes; but morally, I seriously doubt it.


Yeah, it's really hard to phrase that statement about the causal connection in a way that doesn't sound like blame. Right.


So even though I think it is technically correct to say that someone causally contributed to their own victimization, the wording I prefer, because it doesn't trigger that emotional reaction, is that there is a higher probability of victimization if you do X as opposed to Y. Words like "responsible" and "contributed to" sound like blame.


Like blaming, yeah. Just one last comment on that. One thing that became clear to me after thinking about this and talking to some friends about it a while ago is that there's a difference between the advice you give in private and the advice you give in public as sort of a policy statement.


A lot of the brouhahas that have occurred around this kind of question have been cases where, for example, a police officer or someone makes a statement advising women not to get drunk at bars, or not to dress provocatively, or not to go into certain neighborhoods alone, or something like that. And a lot of people get really offended, because it sounds like the guy is blaming women for their own victimization.


Right, and that the blame should be on the perpetrators of the crime, not on the women themselves; women should be able to dress however they want and get drunk if they want to. And then other people respond: well, it is empirically true that the probability of victimization goes up if you do these various things, so what's wrong with what he's saying? Isn't it good advice to warn people about that probability?


And I think at some point I was more in that latter camp, but after thinking about it more, I realized that what is good advice when you give it in person, mother to daughter or friend to friend, has a different effect when you state it publicly. Because when you say it publicly, even though it's still an empirically true claim, it has the effect of implying that our society should be focused more on changing the actions of women than on changing the actions of men.


It's shifting, at least in perception, from a causal component to a moral component, which is clearly not what we want. So yes, as sound advice to a friend or to your daughter, it's perfectly fine; by all means, go ahead and keep giving it. But a police officer, a mayor, or whatever other official figure should not be engaging in that sort of thing. They should simply and squarely say no, it's unacceptable that women get raped, or that money gets stolen, whatever the situation is that we're talking about.


Yeah, I had never really made that distinction between private and public advice before, but I think it's an important one to make. We have just a few minutes left.


So I want to make sure I take a request which was made by Dushan Chunder and seconded by at least one other person. He wanted to revisit a discussion we had months ago. I forget what context it came up in, but Massimo, I think you...


Made the claim that the philosophically examined life is a better life. Is that enough to jog your memory? Or is that laugh saying "I would never say such a thing"?


Would I never say such a thing? Well, first of all, Dushan actually wants what he calls the most detailed answer, and that's not going to happen in a couple of minutes. I would have to go back and check where exactly I said that, but I really would be surprised if I had said that philosophy is the best way to live. No, it's not.


Clearly, what I meant was that reflecting on your life helps, or is bound to help, the way you live it; in other words, that it's worth doing occasional reflection. Socrates famously said that the unexamined life is not worth living. I'm not sure I would go that far, but I would certainly say that the examined life is more likely to be well lived than the kind of life you live just by making random choices right and left.


I remember now the context in which we were discussing this. It was the John Stuart Mill quote about Socrates and the pig: that it's better to be Socrates dissatisfied than a pig satisfied. Correct.


And if the pig thinks otherwise, it's only because he knows only his own side of the question. Yeah, that's right. So, do you agree with that?


I tend to agree with Mill on that one, in this particular sense: the implication there is not that the philosophical life in particular is better.


The implication is that a life that allows you to reflect and make choices, to ponder the things you want and don't want to do in life, which the pig clearly cannot do, is better, even though you may still live a life with problems, because human beings do have this tendency to complicate their lives too much, basically.


But Mill's idea was that at least you do have a choice. You're conscious of what you do, and you have many more possibilities than a pig. If you want, you can live the life of a pig; you always have that choice. The pig does not have the choice to live your life. This was in the context of the distinction Mill was drawing between higher and lower pleasures.


Right. And so Mill said that, look, all pleasures are to some extent good in life. The lower pleasures are things like food and sex, or cheap entertainment, say, like you watching that movie when you were sick or something. Yeah, yeah.


The higher pleasures are things that require a somewhat more sophisticated, complex way of approaching life: particularly refined foods, or opera, or whatever high culture you want, or reading a book.


And Mill's point was simply that if you are capable of appreciating the higher pleasures, you always have the possibility of also engaging in the lower ones, and your life becomes a balance of how much time you want to spend doing one and the other. But if you're not capable of the so-called higher pleasures, then you're stuck, like the pig.


But that makes it sound like this ability to self-reflect is a means to the end of ending up more satisfied. Mill is actually saying it's better to self-reflect even if you end up less satisfied.


Well, that's compared to not being able to self-reflect but being satisfied.


So it sounded like he wasn't cashing everything out ultimately in terms of happiness or satisfaction. He was saying that there's something inherently better about self-reflection, regardless of how much it contributes to your happiness.


That was the claim, I think.


Yeah, you may be right. It may be too strong to say "regardless of". But the idea is that Socrates dissatisfied still has, if I remember correctly, a much broader range of possibilities, even if many of them leave him dissatisfied, than the pig simply content with rolling in the mud. Because, again, Socrates does have the ability to roll in the mud if he really wants to.


If that's what he ends up doing, then that's his choice.


I think there's also the issue that with the ability to make more choices and to reflect on them, the possibility of being dissatisfied actually increases.


The pig simply does not know what he is missing. He has no comprehension of what could go wrong, of what is missing, of what he could be doing, and so on and so forth. Socrates does. So Socrates is aware of it, and awareness is a curse of its own making, basically, which Mill feels is still preferable to being a pig.


And I'm not sure the empirical evidence settles that, you know.


Well, first of all, most people don't actually end up rolling in the mud. But also, once you know what is out there, you can't make yourself ignorant again.


So the fact that they chose that doesn't necessarily mean that they wouldn't prefer to be ignorant, or that they wouldn't be happier if they were ignorant. Whether people choose to be self-reflective or not, if it is actually a choice, can't really be taken as a sign of which life is better. Yes.


But there's a possibility of equivocating here on the term "happiness", as we were saying in another episode. The idea is that if by "happiness" you simply mean fulfilling the lower pleasures...


Then the pig, by definition, is one of the happiest animals in the world.


But Mill is agreeing with that. He's agreeing that, in that sense, the pig is happier.


Right. But in the case of Socrates, or of human beings in general, he's using the more eudaimonic view of happiness, that of a fulfilling life. So you might be frustrated or somewhat unfulfilled in your life, but you still have your own goals to fulfill, and that's what gives meaning to your life.


See, I'd be fine if he said, "I would prefer to be Socrates dissatisfied than the pig satisfied." What doesn't make sense to me is him saying that it's inherently better.


Even if you wanted to make the claim that it's inherently better for a human being, I don't even know what he means by "better". He's not saying happier, he's just saying better, and used in isolation like that, I don't understand what the word "better" means.


Eudaimonic: it gives meaning to what you do. No, but he's saying that eudaimonia is better than happiness.


And he's not just making the claim that Socrates has eudaimonia and the pig has happiness.


He's saying it is better to have eudaimonia than to have happiness.


Correct. And perhaps we should do a separate episode on that. OK, perhaps we should. All right. It's been a pleasure, and we are unfortunately all out of time. So this wraps up another episode of the Rationally Speaking podcast. Join us next time for more explorations on the borderlands between reason and nonsense.


The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, "Truth", by Todd Rundgren, is used by permission. Thank you for listening.