Transcribe your podcast
[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I am your host, Massimo Pigliucci, and with me, as always, is my co-host, Julia Galef. Julia, what are we going to talk about today?

[00:00:47]

Massimo, today our topic is parapsychology, which is the study of psychic phenomena like extrasensory perception, precognition, and remote viewing. So we're going to talk about its scientific status, or lack thereof, what the best evidence for it in the academic literature is, and what the study of parapsychology tells us about the conduct and practice of science in general. Wow.

[00:01:13]

Yeah, well, I guess I'd like to start with an anecdote then, which has nothing to do directly with the scientific study of parapsychology, but it has to do with a commentary on the scientific study of parapsychology. Okay.

[00:01:25]

An anecdote. So this happened during the last few months; it's something that actually lasted for a number of weeks. A colleague of mine, a young colleague of mine, Maarten Boudry at Ghent University in Belgium, and I are putting together an edited book for the University of Chicago Press on the demarcation problem, that is, on the difference between science and pseudoscience. And, you know, we've got a lot of contributors, as is usually the case with these books.

[00:01:54]

And most of the chapters are excellent, or at least very good. And some of them are not so good. And, you know, very rarely does it happen in these books that you actually have to reject a chapter.

[00:02:06]

In this case it happened. And the reason it happened is because we asked a couple of people who shall, of course, go unmentioned — a couple of people who work in areas that are related to the sociology of science more than the philosophy of science.

[00:02:23]

And they both — one of them in particular, but both of them — submitted chapters that were very sympathetic to parapsychological research.

[00:02:32]

And we were kind of surprised. And we looked at them: neither was presenting new evidence, or even a reanalysis of old evidence; they were presenting a general argument based on sympathy for the idea.

[00:02:42]

In one particular case, I can tell you some of the details, because that paper is not going to be published in our book anyway, and it may be some time before the book comes out. But in one case, basically, the person in question was making a comparison between recent studies by a parapsychologist who actually got quite a bit of coverage in the press last year when he published this paper that allegedly demonstrates, basically, clairvoyance.

[00:03:18]

We may talk about the details of this later, but let me go on with that one. So the author of this chapter made a parallel between Bem's study and string theory. And he said, you know, why is it exactly that string theory, which, after all, is based on no experimental data at all, is treated within science much, much better than, you know, an evidence-based paper like Bem's? Now, that is an interesting question in general from a sociological perspective.

[00:03:54]

But the problem is that one of the answers to the question of why Bem's study is not quite as well received — has not been quite as well received — in science as, say, string theory is, is because epistemically there are huge differences. Right? It's true that string theory hasn't been confirmed experimentally, but it is built on a huge amount of both theory and experimental data in physics over the last 100 years.

[00:04:21]

Bem's article, on the other hand, pretty much like everything in parapsychology, isn't built on much, and the evidence is in fact very debatable. As we will probably talk about later, the experimental designs are questionable, the data analyses are debatable, and so on and so forth. So it seemed to me — in other words, to Maarten and me it seemed — that the answer to the sociological question of why one is treated so much better than the other, despite the fact that they both make extraordinary claims — string theory claims that there are 11 or 13 dimensions or whatever; Bem claims that there is such a thing as clairvoyance —

[00:04:58]

Um, it seems to us that the answer wasn't just sociological. There may certainly be a sociological component to it, but it's epistemic.

[00:05:06]

Well, it's because one is well founded on other stuff and the other one is not. So it's very surprising — and this has happened actually a couple of times. I mean, I've got colleagues who are much more intrigued by parapsychology than I've ever seen a philosopher, or sometimes even a scientist, being intrigued by any other kind of pseudoscience.

[00:05:29]

Which is, of course, part of why we're doing this particular episode, right? Well, I'm personally intrigued by it, although I'm less interested in the alleged phenomenon itself than I am in, you know, what the intellectually responsible way to think about it is.

[00:05:45]

I like to think about the experimental results we're getting because, as you said, there is a debate about how we should be reacting to the studies that purport to show evidence of psychic phenomena like remote viewing or clairvoyance or ESP, and how to deal with the meta-analyses that look at a whole collection of studies on the subject and find an overall effect, because there are a lot of responses to that.

[00:06:12]

And maybe we should first start by explaining what a meta-analysis is, because it is very relevant to this kind of issue.

[00:06:19]

OK, well, I'll do you one better and back up more, and talk about what this research into psi phenomena actually entails. So I wanted to focus on what I think are two of the most convincing lines of experiments on psychic phenomena. One of them is the recent studies by Bem that you mentioned. But going back farther, there are these series of experiments called ganzfeld experiments.

[00:06:46]

So they basically involve — at their best, I would say they involve — putting a subject in sort of a sensory deprivation situation, where the subject's eyes are covered, usually with halved ping pong balls that are, like, taped over their eyes. They look funny. They do look funny.

[00:07:04]

Well, and the subjects are bathed in this red floodlight, and they have headphones over their ears feeding white noise to block out any potential sensory stimulation. So the overall effect is very eerie. They kind of look like, I don't know, alien fetuses or something.

[00:07:20]

I find a picture on Wikipedia.

[00:07:22]

But so the idea is that the reason ESP doesn't generally show up is that it's this weak force, and it tends to be drowned out by other sensory information. So if we block all other sensory input, then the subject should be better able to use his or her ESP powers, if they do exist.

[00:07:42]

But then also the other purpose of blocking out sensory information is to block any potential chance that the subject could receive any kind of information about the images.

[00:07:53]

Right? Yeah, exactly — the target. So anyway, as I said, at their best —

[00:07:58]

These studies involve a set of images. I think in the last one I read about, it was four images. And the computer randomly chooses one of those four images to be the target, and then I think it displays it on a screen in a separate room, or maybe one of the experimenter's colleagues sits in the separate room and tries to transmit that image to the receiver. Yeah, to the subject.

[00:08:27]

And then the subject reports what image he received. Oh, sorry — the subject looks at the four images later and says which one of those four was the one that he felt being transmitted to him.

[00:08:36]

So the image is chosen randomly out of the four.

[00:08:40]

And then the subjects' success rate — across all the subjects; this was done again and again —

[00:08:45]

If the subject's success rate is better than one in four, significantly better than one in four, then that's considered evidence that there was actually some kind of transmission going on.

[00:08:54]

While you were explaining the protocol, I saw at least ten different ways in which this thing can go horribly wrong. Go ahead.

[00:09:02]

No, I mean, that's basically the set up.

[00:09:04]

But I was explaining that because one of the most famous pieces of evidence allegedly for psychic phenomena was this meta-analysis of ganzfeld experiments that was conducted by Daryl Bem, actually, back in, I think it was 1994, in which he found that subjects across all of these different trials of the ganzfeld experiment had a 35 percent success rate at picking the right image, the target image that had been transmitted to them, which is far more than the twenty-five percent we would expect by chance.
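To make the arithmetic concrete: whether a hit rate like that is "far more than chance" depends on how many sessions it is based on. Here is a minimal sketch of the usual normal-approximation test against the one-in-four baseline; the session count is a made-up example, not the actual figure from the meta-analysis.

```python
import math

def ganzfeld_z_test(hits, sessions, chance=0.25):
    """Normal-approximation test of an observed hit rate against chance.

    Returns the z-score and a one-sided p-value for the null hypothesis
    that each session is an independent 1-in-4 guess.
    """
    observed = hits / sessions
    se = math.sqrt(chance * (1 - chance) / sessions)   # std. error under the null
    z = (observed - chance) / se
    p = 0.5 * math.erfc(z / math.sqrt(2))              # one-sided tail probability
    return z, p

# Hypothetical example: 35 hits in 100 sessions vs. the 25% chance baseline
z, p = ganzfeld_z_test(hits=35, sessions=100)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")   # roughly z ≈ 2.31, p ≈ 0.01
```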

[00:09:34]

So, I mean, this was one meta-analysis, and then there's been a series of other meta-analyses since then, both reanalyzing the studies that Daryl Bem and his co-author analyzed and also analyzing new ganzfeld experiments that have come out since then. And some of those meta-analyses have concluded, no, there actually isn't an effect when you look across the studies, and some of them have concluded that there is. Richard Wiseman, who many of our listeners probably know, published a meta-analysis in, I think, 1999 — Julie Milton, I think, was the co-author — that analyzed studies that had come out since Daryl Bem's

[00:10:08]

meta-analysis, and they found no significant effect whatsoever. Right.

[00:10:12]

But, you know, this is the problem with meta-analysis: it's never really clear which studies to include. A lot of the controversy tends to be over which studies are valid enough to actually include in your meta-analysis. If there are some studies that are clearly flawed, then, you know, even if they find a strong effect —

[00:10:31]

A strong psi effect — you don't want them in your meta-analysis, because they're bad studies. So, first of all, again, we need to spend a minute on what a meta-analysis is to begin with.

[00:10:39]

Oh, am I supposed to explain that? You let me get sidetracked for, like, five minutes. Well, it was interesting, but we still need to get to the meta-analysis. So how does a meta-analysis work?

[00:10:50]

It's a very common type of sort of review, a systematic review of the scientific literature. It's used in psychology, it's used in ecology and evolutionary biology, in a lot of fields.

[00:11:04]

Typically, what it is, is something supposed to improve on and substitute for what used to be, and actually frequently still is, a qualitative review of the literature. So let's say that you want to make a point about the frequency of, you know, one type of speciation — a species-generating process — over another.

[00:11:24]

Typically, early on in the biological literature, for instance, what you could do is read through a bunch of different papers and then form some kind of opinion and write a review that is actually a narrative of your understanding of the literature.

[00:11:40]

And you say, well, on balance, I think this is happening or this is not happening. Now, at some point over the last few decades — actually, the methodology is not that recent — people started thinking, well, wait a minute: science is about quantification, at least at its best.

[00:11:58]

So we have a lot of data, and we have these analyses that are presented in tabular form in the original papers. Why don't we try to come up with some way in which, instead of making a qualitative judgment, we can make some sort of quantitative judgment on the available literature? Right. So there is a variety of techniques, various ways of doing meta-analysis, but essentially this is what it is: you average the quantitative results of a bunch of different experiments, a bunch of different published papers.

[00:12:30]

And you apply statistics that are meant to quantify, first of all, whether there is a signal, the strength of that signal, and the statistical probability across all studies that that signal is actually a real thing as opposed to being noise or something else. Now, here's the problem. First of all, of course, the results of a meta-analysis dramatically and obviously depend on the quality of the papers. If the original papers are garbage, you can do all the meta-analyses you want.

[00:13:01]

You still come out with garbage. Here the old computer science saying that garbage in, garbage out holds perfectly well. So typically, especially in controversial fields like parapsychology, the first thing you think is, well, meta-analyze whatever you want, but if the original data are not that good, you're not going to get anything, and it doesn't matter how convincing the statistics seem to be.
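For readers who want to see the mechanics Massimo is describing, here is a minimal sketch of a fixed-effect meta-analysis: each study's effect estimate is weighted by the inverse of its variance, and the pooled signal is then compared against its own uncertainty. The study numbers are invented purely to illustrate the procedure.

```python
import math

# Hypothetical per-study results: (effect_size, standard_error).
# These numbers are made up purely to illustrate the mechanics.
studies = [(0.30, 0.15), (0.10, 0.08), (0.45, 0.25), (0.05, 0.10)]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

z = pooled / pooled_se                      # signal strength relative to its uncertainty
p = math.erfc(abs(z) / math.sqrt(2))        # two-sided p-value, normal approximation

print(f"pooled effect = {pooled:.3f} ± {pooled_se:.3f}, z = {z:.2f}, p = {p:.3f}")
```

Note that nothing in the pooling step can detect whether the individual estimates were themselves produced by flawed designs — which is exactly the "garbage in, garbage out" point.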

[00:13:25]

The other thing is that a meta-analysis, of course, averages out a lot of things in terms of experimental design. You actually have heterogeneous data — studies that were not done in the same way, by the same people, under the same circumstances, and so on and so forth — which means that you can have all sorts of biases in the results of one kind or another. And furthermore, there is another problem, which is the file drawer problem.

[00:13:52]

That is, all you can do with a meta-analysis

[00:13:54]

is look at published data. What you don't have access to, of course, is the papers that were never published. Presumably the papers that were not published were not published because they didn't show significant results, which means that you have to account somehow — and there are methods for doing this, but they are controversial in themselves and based on a lot of assumptions — you have to compensate for the presumed number of papers that were never published simply because they had negative results, which, of course, would weigh down the statistics.

[00:14:28]

Right.

[00:14:28]

Well, you know, I've read some attempts to estimate that, right, for the meta-analyses of the ganzfeld experiments.

[00:14:35]

And they all conclude that you would have to have a huge number of unpublished, non-significant ganzfeld experiments in order to counteract such a significant effect in the published ones.

[00:14:50]

The figures tend to be — let's say, one estimate was by, I think it was Ray Hyman, who is a skeptic, who tends to criticize parapsychology. A skeptic?

[00:15:01]

Yeah, OK. Well, I mean, he is a skeptic of psi research.

[00:15:04]

So, in this context, yeah. Actually, I'm not sure that this estimate is by him — I know he's done one, but the figure that I'm reporting, I don't know if it's by him — but the estimate was a ratio of something like fifteen to one unpublished, non-significant studies to published significant studies.

[00:15:19]

Yeah. So, basically saying, look, for every published study showing a significant ganzfeld effect, there would have to be 15 other unpublished ones that we don't know about in order to negate the results. And other estimates are even higher.
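One standard way to formalize this kind of estimate is a Rosenthal-style "fail-safe N": how many unpublished null results would it take to wash out the combined significance of the published ones? The sketch below uses made-up z-scores, not the actual ganzfeld figures.

```python
import math

def fail_safe_n(study_z_scores, z_crit=1.645):
    """Rosenthal-style fail-safe N.

    Estimates how many unpublished, null-result (z = 0) studies would have to
    be sitting in file drawers for the Stouffer-combined z of the published
    studies to drop below the chosen significance threshold.
    """
    k = len(study_z_scores)
    total_z = sum(study_z_scores)
    needed = (total_z ** 2) / (z_crit ** 2) - k
    return max(0, math.ceil(needed))

# Hypothetical set of published ganzfeld-style results (z-scores are made up)
published = [2.1, 1.8, 2.5, 0.9, 1.6, 2.2]
print(fail_safe_n(published))   # number of hidden null studies required
```

As Massimo goes on to argue, the catch is that the formula assumes you know how the published sample relates to everything that was actually attempted — which is exactly what is in doubt here.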

[00:15:30]

Right. But when you actually look at the details of the data analysis and the sample of studies, as it turns out, for instance, a good number of the so-called published studies were not actually published. They were abstracts presented at conferences or, you know, informal papers basically published on the web, and so on and so forth. So you actually have things being included in the meta-analysis that are not actual published studies. That's right. At least one of the early meta-analyses suffered from a large percentage of papers that were actually not papers, which is not surprising if you think about it, because since parapsychology is in fact not considered by most people a scientific discipline, it's not like you have a lot of places where you can publish your results.

[00:16:10]

Yes, there is the Journal of Parapsychology. But other than that, you know, mainstream psychology journals very rarely publish this kind of research. In fact, the study by Bem is one of the few exceptions — in fact, the only exception I know of; there may be more.

[00:16:26]

So the problem is that those estimates of file drawer size are very suspect.

[00:16:33]

And the reason they're suspect is because parapsychology is not a field whose publication record is well understood, the way ecology or evolutionary biology is.

[00:16:42]

Even there, there is uncertainty about this. The file drawer, in fact, shouldn't count only the papers that were actually written and rejected, but all the research, the pieces of research that have been done and never even gotten to the point of being submitted — or, you know, somebody does the analysis, something fails, and you never even get to write the paper. Now, in large fields like ecology, evolutionary biology, and mainstream psychology, you can have some idea of the file drawer — the actual file drawer — because there is a large body of literature that you can actually look at, and there's a large number of people you can ask:

[00:17:19]

well, on average, what is the number of things that you publish versus not publish, that sort of stuff. In parapsychology, that seems to me much more difficult to do, precisely because of the nature of the field, although we should be clear that publication bias is a major problem in other fields as well, not just parapsychology.

[00:17:34]

What we're talking about is just the difficulty of estimating how big of an effect it is, and that's what's confounded by the lack of reliable data on parapsychology.

[00:17:40]

But, I mean, I was just looking at an article by Ben Goldacre, who writes the Bad Science blog.

[00:17:48]

And so he was talking about the extent of the file drawer effect in plenty of fields.

[00:17:53]

And so he showed this — there's this cool method called a funnel plot, where you plot all of the studies measuring a particular effect, looking at a particular research question, and you graph them according to how large or prestigious the study was.

[00:18:09]

And the idea is that the larger and more prestigious — that is, the larger and more well-conducted — the study is, the more agreement there's going to be among the studies about the actual effect in question.

[00:18:23]

And then as you get, you know, less prestigious and smaller studies, there's going to be more variation in the effect.

[00:18:31]

But if there's no publication bias, that variation should be equally distributed, both larger and smaller than the true effect — the effect measured by the good studies. Right.

[00:18:40]

But instead, what you actually see when there is a publication bias effect, as there usually is, is this sort of skewed distribution. Yes — a distribution where, if there were no publication bias, there would be this triangle or this upside-down funnel shape: at the top of your graph, where the good studies are, there would be a very tight cluster, and then it would sort of spread out as you get down to the bad studies. Instead, you see this skew where the bad studies are skewed to the high end of the graph — they find, on average, a stronger effect than the good studies.

[00:19:10]

Right. And actually, the graph that Ben Goldacre shows to illustrate this effect was a graph of studies looking for publication bias. And that was.

[00:19:19]

Oh, that's interesting. Yeah, it's a meta meta study. Yeah.
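The asymmetry Julia describes is easy to simulate. The sketch below generates studies of a true null effect, "publishes" small studies only when they come out positive enough, and plots the resulting lopsided funnel; every number in it is synthetic, chosen only for illustration.

```python
import random
import matplotlib.pyplot as plt

# Simulated illustration of funnel-plot asymmetry. The true effect is zero;
# small studies are noisier, and a small study is "published" only if its
# estimate comes out positive enough -- a crude stand-in for publication bias.
random.seed(1)
true_effect = 0.0
effects, std_errors = [], []

for _ in range(300):
    se = random.uniform(0.02, 0.5)                 # small SE = big/precise study
    estimate = random.gauss(true_effect, se)       # noisy estimate of the effect
    published = se < 0.15 or estimate > 1.64 * se  # bias against small null studies
    if published:
        effects.append(estimate)
        std_errors.append(se)

plt.scatter(effects, std_errors, s=10, alpha=0.6)
plt.axvline(true_effect, linestyle="--", color="grey")
plt.gca().invert_yaxis()            # precise studies at the top, as in a funnel plot
plt.xlabel("estimated effect size")
plt.ylabel("standard error (proxy for study size)")
plt.title("Synthetic funnel plot: missing small null studies skew the funnel")
plt.show()
```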

[00:19:24]

Now, one of the things that I think is unfair — and I'm going to mention this word several times during this episode where parapsychology is concerned — one of the things that is actually unfair as a criticism of these studies is the idea that the effect is always small and that somehow that is indicative of something funny going on — basically, that if you only get very tiny effects, that probably is the result of random fluctuations or undetected biases and things like that.

[00:19:53]

I mean, it's a serious criticism of any kind of research when the effect is small.

[00:19:57]

But frankly, there's a lot of very good science where the effect is small, not only in fundamental physics, for instance, but even in ecology, evolutionary biology, and psychology. Very often the effect is very small. And in fact, the kind of effect you mentioned from the meta-analyses of the ganzfeld experiments I think is suspicious not because it's small, but because it's too large, because if you're talking about a thirty-five or thirty-eight percent success rate against a 25 percent expectation by chance, that's not a small effect.

[00:20:28]

That is an effect that ought to show up loudly and clearly in pretty much every well-conducted piece of research, and it should be easy to replicate. It's a large effect.

[00:20:40]

Right. So I agree with your general point — I agree with both of your general points. One, that many effects in nature are actually small effects, and that alone is not a problem, should not be a criticism. And also your point that — what was your second point? — that a real effect of that size is something you can actually take and replicate over and over, you know?

[00:21:08]

So I agree with you that many effects in nature actually are small, and I also agree with you that really large effects should, in theory, be able to be replicated.

[00:21:18]

But I would disagree a little bit that the fact that those large effects in ganzfeld experiments aren't replicated all the time is necessarily a sign that they're spurious, because, you know, there's no exact replication.

[00:21:30]

Like, you know, experiments are conducted under slightly different conditions, with slightly different populations of subjects.

[00:21:37]

So, you know, it could be that, like, on sunny days people are more psychic, and it's just hard to know, because there's no actual theory about it. I would agree.

[00:21:48]

But I think toward the end of the episode we should try to move on to a broader view of what all this stuff tells us. For a while, though, let's keep talking about the details.

[00:22:03]

So one of the problems is, I think, that parapsychology suffers death by a thousand cuts, none of which is individually lethal. The problem is that you can reasonably raise a lot of these objections, and they're very difficult for parapsychologists to avoid. One can say, well, that one's not fatal, the other one is not fatal, the third one is not fatal. True. But once you have a lot of them, you have sufficient doubt, particularly because, of course, the claim being made is quite extraordinary.

[00:22:36]

Right. Because if we're claiming telepathy or if you're claiming clairvoyance, you're not making a standard claim in psychology, such as, you know, people react in a certain way when they're under stress. You're making an extraordinary claim and it really is up to you to show extraordinary evidence in favor of that claim.

[00:22:54]

Um, so that's one of the problems with the meta-analyses and the effect size. The other problem is that Susan Blackmore, for instance, who used to be sympathetic to parapsychology and then really changed her mind quite dramatically — she changed her mind on the basis of her own experience, both doing parapsychological research and visiting laboratories that were doing parapsychological research. And one of the most damning articles that Susan Blackmore published, a number of years ago at this point, was essentially recounting a visit to one of these labs that were engaged in these sorts of experiments, and she was appalled by the actual experimental conditions, which were far from the kind of tight setup that you were describing a few minutes ago.

[00:23:44]

So there were all sorts of possibilities for the transmitter to contaminate the results. The conditions were not, in fact, sealed — the receiver's room was not sealed at all, or at least only partially sealed — so that there was a possibility of leakage.

[00:24:04]

But, more importantly —

[00:24:06]

There was an incredible number of occasions for subjectivity. For instance, the images were often very complex. You know, most people who are familiar with the early parapsychological literature tend to think of the famous Rhine experiments at Duke University.

[00:24:25]

With the symbols. Yes, those used very simple symbols. Squiggles.

[00:24:29]

Yes, exactly.

[00:24:32]

And you know, those experiments — which, by the way, I actually did when I was in Italy and I was young. I was convinced I was psychic when I was, like, seven.

[00:24:41]

Oh, well, I wasn't convinced — I was a little older — but I did it anyway. And it was fun to do. And, of course, I never got any results that were statistically significant.

[00:24:51]

But the thing is, the advantage of that kind of approach is that the symbols are very simple, and it's hard to imagine a large amount of ambiguity in the interpretation. On the other hand, there are several examples in the literature of ganzfeld experiments where the images are complex and they lend themselves to various interpretations. By the way, there is another type of experiment that is sort of related and often mentioned as another strong type of evidence in favor of parapsychology.

[00:25:22]

These are the Maimonides dream telepathy experiments, which suffer from similar kinds of problems.

[00:25:27]

So when you interpret, in this case, a dream essentially, or you interpret the transmission of a complex image, the moment of deciding what counts as a success depends on how flexible you are about what constitutes a hit or a miss.

[00:25:46]

Well, I completely agree. That seems like a really poor way to conduct a study, but that's why I was more impressed by the ganzfeld experiments, where they had people pick one of the four images that they thought was most likely to be the one transmitted to them, and the original image that was transmitted was selected randomly from the four. So it doesn't seem like you can accuse that design of being vulnerable to subjective interpretation. Right.

[00:26:09]

Except, again, that there is a problem with leakage — leakage of information from the transmitter to the receiver, which is one of the things that Blackmore was pointing out.

[00:26:22]

Even so, why use complex pictures anyway? And why let the receiver pick, as opposed to having the receiver simply say, or draw, before he even comes out of the ganzfeld room, what the image is, what the target that was received was?

[00:26:40]

What would be the advantage of having the person, the receiver, draw? Less probability of contamination, because you write down the result before you get anything from outside of the situation — before you're actually given a choice. If you do it the way they do it, it's like having a multiple-choice answer as opposed to coming up with your own answer. And, you know, I was transmitting to you a mental image.

[00:27:03]

Why shouldn't you be able to depict the image?

[00:27:05]

Well, I mean, I assume their response would be that it's sort of fuzzy. And so you have like a vague sense of what it looked like. But, you know, you're going to draw something and, you know, maybe it could look sort of like one of the images and sort of like another. And you're not going to really know until you see the images.

[00:27:21]

Which one do you think it was? Except, of course, that that does sound a lot like an excuse, because if there is actual communication coming through, it would be very compelling. If you could show that out of the millions of possible images that I can transmit, you actually pick something that significantly looks like the one that I transmitted.

[00:27:39]

But if I narrow it down, if I constrain the choice to four — well, now you don't have a universe anymore to choose from. You've got a much smaller sample. Yeah.

[00:27:51]

I mean, the difficulty with parapsychology is that the things they're claiming, that they're hypothesizing exist, are these small effect sizes that don't always work under all situations and that are fuzzy — you know, there's this inexact transmission. And so even if they're right, those things would be really hard to detect. So it sort of puts the field in this pickle, which reminds me of the PEAR lab.

[00:28:14]

The PEAR lab, yes. So the PEAR lab, which stands for Princeton Engineering Anomalies Research laboratory. This actually closed in February 2007, although it operated for a good number of years, I think since nineteen seventy-nine. It was started by Robert Jahn, who was the dean of the School of Engineering and Applied Science at Princeton. And basically what he wanted to do was to set up a lab to study psychokinesis. But, you know, the problem with psychokinesis —

[00:28:47]

that is, the ability to move objects with your mind — is that apparently it's very difficult to show for macroscopic objects.

[00:28:55]

You know, there is a famous objection to psychokinesis that Kurt Vonnegut came up with: all of you who believe in psychokinesis, please raise my hand. Right. So it's difficult to do. So what did the PEAR people do? They said, well, if macro-psychokinesis is difficult to show, perhaps we can demonstrate micro-psychokinesis. They did these experiments where essentially subjects were asked to use their minds to slightly affect the movement of particles, millions of particles, onto a target.

[00:29:30]

And therefore, since every particle counts as a hit, and you're talking about millions and millions of data points, repeated over a number of years, you're talking about a huge database.

[00:29:42]

Now, the lab claims to have succeeded, to have shown, again, a very tiny effect. In this case the effect really is very tiny; we're not talking anything like the thirty-five versus twenty-five percent of the ganzfeld experiments.

[00:29:56]

Now, the problem is, of course, that, first of all, other labs tried to repeat it — one in Germany, for instance — and they failed. And that right there is a problem, because these really were supposed to be experimental conditions that are easy to control and easy to repeat; we're not talking about many, many chances for contamination or for interpretation and that sort of stuff.

[00:30:20]

Also, the sample size is huge, which means that, in theory at least, you have very good statistical power.

[00:30:28]

Now, the problem is this: the experiments were based on the use of a random number generator — a computer was randomly generating the targets. The problem with that, of course, is that, as I think any computer programmer can tell you, it's very difficult, if not impossible, to actually have a truly random generator in a computer, which means that if you repeat the experiments millions of times, what you're more likely to do is pick up on whatever bias was present from the beginning in the experimental apparatus.

[00:31:04]

Right.

[00:31:04]

I mean, if the effect that your experiment finds is quite small, even if it's very statistically significant, that could be any number of things. It could be a very slight problem in the methodological design.

[00:31:18]

Like, maybe a few of your subjects have the ability to tell the sugar pill from the actual medicine, or, you know, maybe there are a few subjects who are just really good at reading the face of the researcher and interpreting it. All it would take is a tiny, tiny deviation from an ideal uncontaminated experiment.

[00:31:36]

And if you have a large enough sample size, you're going to get a significant effect.
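As a rough illustration of that trap: with a hypothetical apparatus that is biased by a mere 0.02 percent, one hundred million trials are enough to make the deviation look wildly "significant" even though the effect size is negligible. The numbers below are invented for illustration, not the PEAR lab's actual data.

```python
import math

def z_and_p(hits, trials, p_null=0.5):
    """Normal-approximation z and two-sided p for a proportion vs. p_null."""
    observed = hits / trials
    se = math.sqrt(p_null * (1 - p_null) / trials)
    z = (observed - p_null) / se
    p = math.erfc(abs(z) / math.sqrt(2))
    return observed, z, p

# Hypothetical generator with a minuscule built-in bias: 50.02% instead of 50%.
# With enough trials the bias becomes "highly significant" even though the
# effect size is practically nothing.
trials = 100_000_000
hits = int(trials * 0.5002)
obs, z, p = z_and_p(hits, trials)
print(f"observed = {obs:.4%}, z = {z:.2f}, p = {p:.3g}")   # z ≈ 4.0, p well below .001
```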

[00:31:41]

That's right, albeit a small one. Now, again, this is a problem in other areas of science as well. Absolutely.

[00:31:46]

Yeah, but the way it's usually dealt with is by replication. Right. So if you can change the conditions, including changing the laboratory, and replicate it and get the same or similar results — the more you do that, the more the idea is that you're getting away from any systematic bias and you're actually picking up something that is a serious indication of an underlying signal.

[00:32:11]

Otherwise, it's actually very, very easy to come up with statistically significant results that are in fact the result of microscopic biases. Instead of micro-psychokinesis, it seems pretty clear to most observers that what the PEAR lab picked up was micro-deviations — micro-biases — in their apparatus.

[00:32:32]

Yeah. So, as I said at the beginning of the episode, one of the main reasons I'm interested in the parapsychology studies is because of what they tell us about the way we conduct science in general, because a lot of these problems are general.

[00:32:46]

I mean, if you're nearly convinced, you know, just based on a priori scientific theory, that parapsychology is really, really unlikely to be true, then this is kind of a useful test case for science. Like, if our standard scientific methods give us significant results in these cases, then that tells us there's something wrong with the way we're doing science — and that includes the way we do science in lots of other subfields that aren't parapsychology.

[00:33:15]

So, you know, as I mentioned, the publication bias is a problem in other fields.

[00:33:18]

And there have been a number of really damning recent studies, including one a couple of years ago in the New England Journal of Medicine that looked at the studies that had been registered by pharmaceutical companies with the FDA.

[00:33:33]

There were thirty-seven of them that came up positive.

[00:33:35]

And every one of those was written up proudly and published in a journal. But then there were also the studies which had negative or really dubious results, and 22 of those were not published at all.

[00:33:51]

So that's a really significant file drawer effect.

[00:33:54]

And that's not even counting the companies that bury harmful results, like Vioxx, with their drugs causing heart attacks. One of the other major problems, sort of related to the file drawer effect, is this confusion between exploratory research and confirmatory research. So you can go on fishing expeditions in data, in the world, to try to generate hypotheses about phenomena that might be happening, or reasons for those phenomena. And that's fine. But that's very different from, and should be kept completely separate from, testing your hypotheses.

[00:34:25]

And what you can't do is use the same data to come up with a hypothesis and test that same hypothesis. Right. That's a big no no. Yeah, it's a big no no.
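A minimal way to picture the discipline being described here is a hold-out split: hunt for patterns in one half of the data, then test the single hypothesis you settled on against the untouched other half. The toy data below are random and purely illustrative.

```python
import random

# Toy illustration of keeping exploration and confirmation separate.
# Hypothetical trial records are split in half: patterns are hunted for in the
# exploratory half, and the resulting hypothesis is tested only on the
# held-out confirmatory half.
random.seed(0)
trials = [{"image_type": random.choice(["erotic", "neutral", "animal"]),
           "hit": random.random() < 0.25} for _ in range(2000)]

random.shuffle(trials)
explore, confirm = trials[:1000], trials[1000:]

def hit_rate(records, image_type):
    subset = [r for r in records if r["image_type"] == image_type]
    return sum(r["hit"] for r in subset) / len(subset), len(subset)

# "Fishing" step: find the category with the highest hit rate in the exploratory half
best = max({r["image_type"] for r in explore}, key=lambda t: hit_rate(explore, t)[0])
print("hypothesis generated on exploratory half:", best)

# Confirmation step: test that single pre-specified hypothesis on fresh data
rate, n = hit_rate(confirm, best)
print(f"confirmatory hit rate for '{best}': {rate:.3f} over {n} trials (chance = 0.25)")
```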

[00:34:34]

And I think — and other people agree with me — that this is responsible for one recent, very celebrated and controversial result in the parapsychology literature, which was the study published by Daryl Bem, whom we've been discussing with regard to the ganzfeld stuff. And this was published last year in one of the very top journals of psychology.

[00:34:52]

And he basically did, I think it was nine — it was featured on The Colbert Report as well.

[00:34:59]

Did Colbert make fun of him? Oh, OK. You should watch it; I bookmarked it.

[00:35:06]

So, yeah. So he did nine lab experiments testing the ability of college students — of course, because those are who participate in experiments. They're cheap.

[00:35:18]

They are. We will work for candy bars anyway.

[00:35:23]

So the computer program would flash a photograph on the left or the right side of the screen.

[00:35:27]

And students were supposed to predict beforehand — I guess it was — which side of the screen the image would appear on.

[00:35:37]

And they were able to do that at a rate better than chance for certain subjects and certain kinds of images. So, most famously, subjects were much

[00:35:49]

Better than chance at predicting the appearance, the location of the image when it was an erotic image.

[00:35:55]

So, of course — I call that porn-induced clairvoyance. That's a very catchy name. So, yeah, it made headlines because Bem is, you know, a famous researcher and it was published in this top journal.

[00:36:08]

And because the results were, you know, kind of amusing and titillating. Now you can begin to see where Colbert started making fun of the whole thing. But when you actually look at Bem's paper, they tested a lot of different things. So they looked at not only erotic pictures, but pictures of different kinds of animals and neutral pictures, and I think pictures of landscapes, and most of those tests did not come out significant.

[00:36:37]

And we don't even know how many other things he looked for that didn't come out significant. So if his hypothesis is that people are clairvoyant with regard to erotic pictures, that's fine. But then he has to test that hypothesis again. What he was doing in his experiment basically looks like exploratory data analysis, at best.

[00:36:58]

At best. Yeah.

[00:36:59]

And, you know, it's interesting — the rebuttal to Bem's work that pointed out this problem quoted a textbook chapter written by Bem about doing experimental work in psychology.

[00:37:12]

And so Bem is talking about how the conventional view of the research process is that we derive hypotheses from theory, then design and conduct a study to test these hypotheses, and analyze the data to see if we can confirm them. But, he says, this is not how our enterprise actually proceeds; psychology is more exciting than that. If you see dim traces of interesting patterns, try to reorganize the data to bring them into bolder relief. If there are participants you don't like, or trials or observers or interviewers who gave you anomalous results, place them aside temporarily and see if any coherent patterns emerge. Go on a fishing expedition for something — anything — interesting.

[00:37:49]

Now, to be fair — well, to be fair to Bem, I assume that he would at least pay lip service to the idea that, you know, after that you should conduct your confirmatory study.

[00:38:02]

It's possible. We should mention, by the way, that there is a very thorough critique of that particular study by James Alcock, which was published in Skeptical Inquirer in January 2011, and it's really worth reading, because Alcock goes into the details of the data analysis, the experimental design, inconsistencies within the paper itself, in the way in which the experiments were conducted, the selection of certain data over others, and so on and so forth.

[00:38:30]

And that's, of course, a critique based just on the published paper, without access to the original protocols and raw data, which really ought to be made available under these circumstances.

[00:38:41]

Now, look, one of the things that defenders of parapsychology, including Bem, point out is that, well, you know, the quality of the research is about the same as sort of run-of-the-mill research in psychology, and if these kinds of results were presented about a more non-controversial topic in psychology, people would simply move on and accept the results and so forth. I find that a very bizarre argument, for two reasons. First of all, because, as we were saying earlier, these are very extraordinary claims.

[00:39:12]

So it's pretty natural that the standard of evidence ought to be higher. You know, when people started talking about cold fusion, that was not a run-of-the-mill experiment in physics or in chemistry, and that is why there was so much scrutiny of those claims, which turned out to be unfounded. More recently, there is the claim, still being debated and investigated, about the discovery in physics of superluminal particles — particles that move at a speed higher than the speed of light — which would violate the theory of relativity.

[00:39:46]

Well, that's not a run-of-the-mill physics experiment, which means that people are not going to accept it until the original protocols are looked at, the experiment is replicated, and so on and so forth. So to make the argument that, well, you know, if it were a normal type of paper in psychology nobody would raise a problem, is sort of disingenuous.

[00:40:05]

I mean, it's both true and also not a defense. Exactly. It's not a defense.

[00:40:10]

And I would say that this points to a general problem in research with using classical statistics, classical hypothesis testing, because the way that is generally done is that people publish the results if they're below a certain threshold of significance, usually point-oh-five or point-oh-one.

[00:40:28]

But what people tend to forget is that what that significance level tells you is the probability of getting data as extreme as yours, or more extreme, if the null hypothesis of no effect were actually true.

[00:40:41]

But what we're interested in is the probability of the hypothesis being true. And so we have to weigh the chance that we would have gotten results this extreme just by chance against the prior probability, the prior belief that we have about the hypothesis being true.

[00:40:57]

So if it's really unlikely that the hypothesis is true, then — even if there were only a five percent chance of getting results just as extreme as ours by chance — you know, maybe that's actually more likely than our hypothesis being true.
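Julia's point can be put as a three-line Bayes calculation. The prior and the assumed power below are made-up numbers, chosen only to show how a p < .05 result can leave the null hypothesis overwhelmingly more probable.

```python
# Toy Bayes calculation: a "significant" result is weak evidence if the
# hypothesis starts out very improbable. All three inputs are assumptions
# made up for illustration, not estimates from the literature.
prior_psi = 0.001            # assumed prior probability that psi is real
p_data_given_psi = 0.5       # assumed chance of a "significant" result if psi were real
p_data_given_null = 0.05     # chance of a "significant" result by luck alone (alpha)

posterior_psi = (p_data_given_psi * prior_psi) / (
    p_data_given_psi * prior_psi + p_data_given_null * (1 - prior_psi)
)
print(f"posterior probability of psi given one significant study: {posterior_psi:.3f}")
# ≈ 0.010 — still about 99% on the null, despite the p < 0.05 result.
```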

[00:41:09]

You know, that's a problem with classical statistics. To be fair —

[00:41:14]

there have been people who have done Bayesian analyses of, you know, the meta-analyses of the ganzfeld experiments we're talking about. But, again, the results are controversial — it depends on how the Bayesian analyses are done, what priors are used, and so on and so forth.

[00:41:28]

But there's another problem that I wanted to point out, related to the defense that, well, these are the kinds of results you get in most psychology papers. I think that a reasonable objection to that is, well, too bad for all of those psychology papers then, you know?

[00:41:47]

So what you're telling me is, well, a bunch of papers are of bad quality, therefore why are you complaining if I'm producing another paper of bad quality? Now, having done science — not in psychology, of course, but in evolutionary biology — and having read and refereed a lot of papers in ecology and evolutionary biology, I can tell you from first-person experience that there is in fact a lot of garbage out there that gets published in pretty much every field. And there are various reasons for that.

[00:42:13]

But mostly it's that, contrary to the sort of popular perception of science, scientists usually don't engage in replicating other scientists' experiments unless the results are in fact claiming some major discovery that really needs to be checked, like the examples we were talking about earlier, cold fusion and so on.

[00:42:33]

Replication just doesn't happen. You know, you do a little study with a small sample size, you publish the thing, the statistics look OK, the results are marginally significant, but you're not claiming anything particularly exciting or anything that overturns physics or biology as we understand it. Therefore, it gets published, often in second- or third-tier journals, although you'd be surprised at the number of such papers published in top-tier journals.

[00:42:56]

Yeah, actually, with the Daryl Bem study that was published in this top-tier journal, there was a replication done by three people, including Richard Wiseman, that failed to replicate those results.

[00:43:10]

Not surprised. They submitted their negative results to the journal — oh, it's the Journal of Personality and Social Psychology, that's what it was.

[00:43:16]

And the journal rejected their paper, explaining that they never publish studies that try to replicate other work. And then, of course, they tried to —

[00:43:23]

Bizarre. I know, right? So then they tried to submit their negative replication to Science, and Science rejected the paper, saying your results would be better received and appreciated by the audience of the journal where the Bem research was published.

[00:43:35]

And of course, that was JPSP, right.

[00:43:39]

So this is really, as you were saying earlier, what we can learn from looking at parapsychology: the problems of parapsychology are really a microcosm of problems with scientific research in general, except that in this particular case, in the case of parapsychology, they tend to be much more prominent because the claims are so extraordinary. Now, I'd like to touch on one more point before we get to the end of the show, which is: what about theory?

[00:44:06]

So one of the objections that is raised against parapsychology is not only that the experimental results are far from satisfactory, but that there is no theory — that, as you were saying earlier, pretty much any deviation from chance counts as evidence of psi, even though it could be evidence of a bunch of other things, including, you know, all sorts of biases, contamination, and so on and so forth.

[00:44:28]

And the reason for that is, of course, that nobody knows what psi is or how it's supposed to work. I mean, there's no theory other than, you know, occasional vague motions toward the words quantum entanglement.

[00:44:39]

And, you know, as soon as people start putting forth the word quantum in a sort of a vague way, that means they really have no idea what the hell they're talking about now.

[00:44:51]

The defense usually given by parapsychologists is: look, it is true that science ultimately wants to get to an understanding of things — you know, you need a theory — but you don't necessarily start that way; often you don't start that way. You just start by demonstrating that there are some interesting patterns or some interesting experimental results, and then you figure out what's going on. That is true.

[00:45:12]

But first of all, if we look at the history of science, it is also true that scientists really do want an account not only of the phenomenon, but how the phenomenon fits with the rest of the science that is known at that time.

[00:45:25]

The classic example is Alfred Wegener's continental drift theory.

[00:45:31]

The evidence was there for a number of years before the geological community accepted that continents really do move around and bump into each other and things of that sort. The theory was resisted by the mainstream geological community because there was no mechanism; it seemed like it was an impossible thing. And finally, when people figured out that there is in fact a reasonable mechanism, then all the pieces came together. So it is actually reasonable to at least suspend judgment, especially if you're making, again, an extraordinary claim, until you have a mechanism.

[00:46:01]

But the thing is, it's not like parapsychologists started doing this stuff yesterday. We've got about a century, probably more, but at the very least a century of experiments and a century of thinking about parapsychology, and we still have absolutely no idea of what might cause these phenomena, and especially how they would fit with the rest of biology and physics.

[00:46:23]

I will wager that if, in fact, somebody were able to convincingly demonstrate telepathy or clairvoyance or psychokinesis, that would actually seriously undermine large chunks of our understanding of biology, physiology, and physics. That being the case, the onus is really on parapsychologists to not only produce experiments that are clearly and demonstrably well done and well replicated, but also some kind of theory of where all these things come from. Where does it fit, and how do you weave it into the fabric of science as we understand it?

[00:47:01]

Or, if you don't, then you also have the further burden of providing an alternative to the scientific theories that you're knocking down. If it turns out that parapsychological phenomena violate quantum mechanics, for instance, then you also have the burden of explaining precisely why it is that people thought for more than a century that quantum mechanics was actually the correct theory. Indeed.

[00:47:22]

OK, well, we are well over time for now. But I would just like to let our listeners know that if you're interested in discussing this or any of our other Rationally Speaking topics further with Massimo and me, I encourage you to come see us live in all of our rational glory, because Massimo and I are both going to be at the Northeast Conference on Science and Skepticism — the fourth annual, this coming April 21st and 22nd in New York, New York. Massimo and I will be taping a live episode of Rationally Speaking, and as a nice extra bonus, in addition to the two of us, you've got a great lineup of fascinating speakers and panels and performances from people like James Randi, Seth Shostak, Joe Nickell, the SGU crew, and many more. Go to necss.org.

[00:48:09]

That's N-E-C-S-S dot org, and get your tickets now, because we do expect to sell out. So, that said, let's move on to the Rationally Speaking picks. Welcome back. Every episode, Julia and I pick a couple of our favorite books, movies, websites, or whatever tickles our rational fancy. Let's start as usual with Julia's pick.

[00:48:42]

Well, instead of a pick, actually, Massimo, I have an un-pick this time. Yeah, it's been a while since I griped about anything, so I was kind of due.

[00:48:51]

So my un-pick is a TV show for kids called My Little Pony:

[00:48:56]

Friendship Is Magic, and it's a cartoon. It was aimed originally at young girls, but it's actually achieved quite a cult following of adults, including lots of adult men. And it gets written up really well on, like, the media critic sites that I like reading. And I was so intrigued by this that I actually signed up to go to — there's a My Little Pony convention in New York every season. This is just an excuse for you to go see ponies.

[00:49:25]

I did.

[00:49:25]

I will admit I collected My Little Pony when I was a kid, but I was really fascinated to see why are so many like educated, intelligent, adult men watching a cartoon for little girls?

[00:49:35]

And you found out what? Well, no, I didn't go. Here's why. So I signed up — I contacted the convention organizer asking if I could come, and he responded immediately, saying, oh, yes, I've read about your work in the rationalist and skeptic community.

[00:49:48]

That's very Twilight Sparkle of you — she's the nerdy pony in the show.

[00:49:53]

And so this only further confirmed my sense that this was, like, a very pro-skepticism show. So I had high hopes. So I signed up to go, and I thought I should watch a couple of the episodes before going. And one of them had been recommended to me by someone on the website. The episode was called Feeling Pinkie Keen. So this one pony named Pinkie Pie has these premonitions of things that are going to happen.

[00:50:19]

Like, she can tell that something bad is going to happen, or something is going to fall, or something like that.

[00:50:23]

And so then the nerd skeptic pony named Twilight Sparkle — who I was compared to, see —

[00:50:30]

OK. So, in this whole episode, she doesn't believe that Pinkie Pie's premonitions are real. And of course she's really obnoxious and smug about it too, saying things like, I can't wait to see the look on your face when you're proven wrong — essentially by science.

[00:50:43]

It sounds like the pony version of The X-Files.

[00:50:45]

Yeah, actually, very similar. So, of course, Pinkie Pie's premonitions all come true, and Twilight Sparkle, the skeptic, is proven wrong and humiliated. And her closing line in the episode is: I've learned that there are some things in this world you just can't explain, but that doesn't mean they're not real.

[00:51:04]

You just have to choose to believe so.

[00:51:08]

So here we have: rationality means being a smug know-it-all, some phenomena are impossible to explain, and, of course, faith equals good and empiricism equals bad — all in one little twenty-two-minute package.

[00:51:21]

Wait a minute. And you said that smart, intelligent men follow this show?

[00:51:26]

Yes, including, apparently, the fan of the skeptic movement who organizes the convention. It was really disappointing to me. Wow.

[00:51:32]

Well, my pick is an article that came out in The New York Times by John Tierney, and it's called "Be It Resolved." It has nothing to do with resolutions of the legal type; it has to do with New Year's resolutions.

[00:51:47]

So, as it turns out, there's research that looked into how New Year's resolutions work, how much people actually stick with them, what the best strategies are, and that sort of stuff. And, you know, we all hear that most of the time these things fail — because people are very optimistic for some reason at this arbitrary point in the year, and their optimism wanes very quickly. As it turns out, that's not true. In fact, a surprising fraction of people stick with their resolutions at least halfway through the year — something like 44 percent, which is not bad.

[00:52:25]

Well, self reported, I assume.

[00:52:27]

Yes, I do think this is self-reporting, and actually a significant number make it long term. But it turns out there are different strategies by which you can maximize sticking to your resolutions — whether you made them at New Year's or not, it really doesn't matter. And some of these are kind of interesting. The basic idea is that cognitive science is beginning to show that willpower is, in fact, a type of resource that the brain has. It depends on the availability of sugar, incidentally, which puts you in an interesting bind.

[00:53:03]

Right.

[00:53:03]

If your resolution is dieting. Exactly — as it turns out, the more you exercise your willpower, the more sugar, of course, you need.

[00:53:12]

And the amount of willpower that you actually have available, although replenishable, is limited, which means the first thing you should do is pick your battles. You cannot fight a battle on multiple fronts, because otherwise you're going to be weak on all those fronts; you're going to be likely to capitulate in all circumstances. But there are other strategies that Tierney and his colleagues describe. For instance, first of all, set a single clear goal, as opposed to, say — for instance, don't think, oh, well, I'm going to eat healthier now.

[00:53:46]

Just decide how many pounds you want to lose in a particular month, or something like that — a much simpler and clearer goal to commit to. Then there is the Ulysses strategy: you know, Ulysses famously had himself strapped to the mast of his ship to resist temptation. The idea is to be preemptive about the temptation. So, for instance, again, if your problem is with dieting, then simply don't buy the stuff at the supermarket and put it in the refrigerator, because that way you have to resist the temptation only once, while you're at the supermarket.

[00:54:21]

If you bring it home, then you have to resist it essentially every second.

[00:54:24]

And then there's this interesting thing about outsourcing. So there are a number of websites — and actually one of the writers for Rationally Speaking, coincidentally, wrote a very similar essay at the beginning of the year addressing the same issue on similar grounds — there are a bunch of websites that allow you to make bets against yourself. And it turns out that the people at these websites have analyzed actually tens of thousands of examples of people making commitments.

[00:55:00]

And it turns out that if you make the commitment and you ask for a referee, as opposed to just making the commitment informally, your chances of sticking with the resolution increase. If you bet money on your own commitment, your chances increase further. And if you set it up so that if you lose, the money goes to an outlet that you don't like — for instance, a Democrat sending money to the Republican election committee —

[00:55:34]

that increases even further the chances that you're actually going to stick with it. In other words, if there is a penalty, and if the penalty is clear, enforceable, and in fact painful to you, then it turns out that you can stick with your resolutions better.

[00:55:48]

So there is hope. And the Tierney article gets into a bunch of other strategies. It's really an evidence-based approach, essentially, to New Year's resolutions.

[00:55:58]

That's one of our more useful picks, I think. This wraps up another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.

[00:56:16]

The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollak and recorded in the heart of Greenwich Village, New York. Our theme, Truth, by Todd Rundgren, is used by permission. Thank you for listening.