[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, and with me, as always, is my co-host, Julia Galef. Julia, what's our topic today?

[00:00:47]

Massimo, tonight we are happy to be joined by Steven Novella, the host of a little podcast that some of you may have heard of, called The Skeptics' Guide to the Universe, as well as the author of the NeuroLogica blog and co-editor of the Science-Based Medicine blog. And in his free time, he's also an academic clinical neurologist at the Yale University School of Medicine. Steve, great to have you on the show.

[00:01:11]

Nice to be here, guys. Thanks for having me. I didn't know you had any free time — you just have a little hobby called academic clinical neurology.

[00:01:19]

That's all right.

[00:01:21]

Steve, we're probably going to be all over the place with you tonight, because there are so many interesting things to talk about. But I wanted to start with a little bit of controversy of which you are aware. Recently, in November of this year, The Atlantic published an article by David Freedman entitled "Lies, Damned Lies, and Medical Science." And in this article, some serious issues emerge from recent meta-studies of the efficacy of medical research. Do you want to give us the background and tell us what you think about this?

[00:01:58]

Yeah, I mean, this is not news to anyone, at least not anyone familiar with science-based medicine or the science of medicine. Essentially, the article focuses on the work of the researcher John Ioannidis — I think that's how you pronounce his name — who studies the research itself, basically looking at the patterns in the literature, how reliable studies are, and the kinds of problems that can occur with medical research. And he has published some seminal papers over the years showing, for example, that if you look at definitive studies — cases where we can be pretty sure that we know what the answer is —

[00:02:44]

and then look back over the previous 20 years, in fact, most of the previous studies that were published on that question came up with the wrong answer. So this is often presented as the notion that most published research is wrong, which is a bit misleading. But essentially the idea here is that there are a lot of inherent, systematic problems with research. There are just so many things that can go wrong, and often they do. So that raises the bigger question as well:

[00:03:15]

How reliable is anything that we do in medicine anyway? How science-based can it be? So again, what's interesting is that Freedman, the author of this article, and many people who pick up on this theme, point out all of the things that we point out, for example, on the Science-Based Medicine blog — all of the pitfalls and all of the complexity and difficulty of medical research. But you can put two different spins on this story.

[00:03:46]

On the one hand, you could say here are all of the problems with medical research, and this is the way around them, or how to look out for them, or how to fix them, if you will. Or you could say, look at all the problems with medical research — it's broken and it doesn't work. Right. Now, I think that's very fair.

[00:04:05]

However, I was honestly a little bit surprised by some of the numbers, and I wonder if you think that the numbers are, in fact, at least approximately correct. For instance, the article mentions something like 80 percent of non-randomized studies turning out to have something wrong with them, 25 percent of randomized trials, and 10 percent of large randomized trials. Now, those are fairly astounding numbers, especially the 80 percent. Do you think that those are actually accurate numbers, at least in the ballpark?

[00:04:37]

I think they're in the ballpark. But you can see what he's saying there, which is that the less controlled the study, the more unreliable the results. Yeah, we know that. That's not surprising, right? Yeah.

[00:04:48]

When you get up to the gold standard of randomized, double-blind, placebo-controlled trials, he's saying that 10 percent of those have serious problems. Well, that means that 90 percent are pretty good — that's actually better than I would have thought.

[00:05:03]

But do you have any sense of how frequently that 80 percent are used? I mean, when researchers and doctors are prescribing things and treating patients, are they drawing from that large group of studies that are wrong? Well, yeah.

[00:05:21]

I mean, they're drawing from that research all the time. And again, that is what the evidence-based medicine movement is basically about: let's actually, systematically and rigorously, assess what the evidence base is for all the things that physicians do, and characterize or quantify it in a standardized, systematic way — rather than sort of shooting from the hip, where you're using your judgment, or your best guess, or some vague familiarity with the research, or you're just pulling from whatever studies you happen to have seen most recently. Rather than relying upon that sort of slipshod method of deciding what is a good thing to do in medicine,

[00:06:09]

let's take a systematic approach, see what the actual evidence base is, and quantify it. In science-based medicine we go a little bit beyond that. We agree with pretty much everything that evidence-based medicine does, but we want to reintroduce the notion of prior plausibility — all that basic science research — not chuck it out and pretend it doesn't exist.

[00:06:29]

Because if you don't do that, then you basically are overwhelmed with bias. There are so many ways that bias can creep into the way you look at data. And again, the medical community has applauded Ioannidis for pointing all these things out in a very rigorous way. So I think that says a lot. I mean, even the author, Freedman, acknowledges in the article that he's putting a very sensationalistic spin on it all.

[00:06:58]

But the fact is, Ioannidis's work is accepted — it's applauded. And we know that we need to look very critically at the literature, at the way research is done, and at all the different ways biases can creep into the results. Because by understanding that better, we get better at finding the actual truth, which is what we're after at the end of the day.

[00:07:23]

So you introduced a distinction there between, of course, evidence-based and science-based medicine. The way you just put it, it sounded to me like you were introducing some sort of Bayesian framework into science-based medicine. Oh, absolutely.

[00:07:38]

Yes, absolutely. It can be very Bayesian. Yeah.

[00:07:40]

So are you saying that is the major difference between the two approaches, or would you characterize it otherwise as well?

[00:07:47]

No, I think that's it. I mean, basically what I think happened is that at the dawn of the evidence-based movement, the big notion was that physicians were relying too heavily on plausibility. They were essentially using treatments because they made sense. But that's not always a sufficient guide to what works. There are things that make sense but just don't work — they turn out not to be true; the evidence doesn't support them. So I think they had the intention of leveling the playing field, if you will.

[00:08:19]

You know, they call it evidence-based medicine because they want to base the treatment on what the evidence shows, and not give a treatment an unfair advantage, if you will, just because it makes sense. But what they unwittingly did was level the playing field for those therapies which are highly implausible. Right? So plausibility doesn't count, but neither does implausibility. And when you apply an evidence-based scheme to treatments that have a plausibility of zero, it kind of breaks down.

[00:08:50]

Can you give us an example of a treatment that has a really low prior probability but that was still implemented? And why would it be implemented if it had such a low probability — because we got results supporting it?

[00:09:02]

Well, this comes up mainly in the context of so-called alternative medicine — homeopathy, for example. So you can read a Cochrane evidence-based review of the literature on homeopathy for a specific indication. Now, when you do that, what you'll find is that the evidence is actually negative pretty much for everything — homeopathy doesn't work for anything. But it's easy to look at a very specific question and to say, oh, there's some interesting signal here.

[00:09:34]

The evidence doesn't quite show that it works, but it deserves further research. And maybe there is weak evidence that can support incorporating this into practice.

[00:09:47]

But actually, that ignores all of the things that Ioannidis talks about. What's interesting is that I cite his research on this, saying, well, you have to consider: if you're basically doing research on a treatment that, let's say hypothetically, we know has zero physiological effect — it doesn't work —

[00:10:08]

What would you expect the literature to look like?

[00:10:11]

That's a very important question. And I often ask people that: what do you think the literature would look like if we studied a treatment that we know absolutely doesn't work? And sometimes proponents of alternative medicine have told me, well, it should be pretty much 50-50 — it should be like a bell curve hovering around a null effect, a zero effect. But that's actually not what we see, and Ioannidis's work shows us that's not what we would expect to see. We would expect to see a huge bias towards the positive — partly because of publication bias, because of researcher bias, just because of all the ways in which you can bias the research: the types of questions that you ask, the outcomes that you look at, the statistics that you do.

[00:10:55]

So we actually expect to see, even for a completely inactive treatment, this shift of bias to the right, to the positive. However, we also see that the better controlled and designed the studies, the smaller the effect size, and the effect size diminishes to zero for the best-designed studies. That's exactly what we see for homeopathy. But if you take an evidence-based scheme, you see this residue of positive studies, which sometimes makes reviewers portray it in a wishy-washy or sometimes even weakly positive light — whereas the science-based approach would say, well, hang on, let's back up.
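[Editor's note: a minimal simulation sketch of the pattern described here — a treatment with zero true effect, a literature filtered by publication bias, and observed effects that shrink as studies get larger. All parameters (study counts, sample sizes, the publication rule) are illustrative assumptions, not figures from Ioannidis's papers.]

```python
# Sketch: what the published literature on a treatment with ZERO true effect
# might look like once publication bias is layered on top. All parameters
# (study sizes, the publication rule) are illustrative assumptions.
import random
import statistics

random.seed(1)

def run_study(n_per_arm):
    """Simulate one two-arm trial of an inert treatment (true effect = 0).
    Returns the observed effect size (treatment mean minus control mean)."""
    treatment = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    control = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    return statistics.mean(treatment) - statistics.mean(control)

def publish(effect, n_per_arm):
    """Toy publication filter: 'positive-looking' results almost always get
    published; null or negative ones only occasionally (assumed 20% rate)."""
    se = (2.0 / n_per_arm) ** 0.5          # approx. standard error of the difference
    looks_positive = effect > 1.96 * se    # roughly 'significant in favor of treatment'
    return looks_positive or random.random() < 0.20

for n_per_arm in (20, 80, 320):            # small, medium, large trials
    observed = [run_study(n_per_arm) for _ in range(2000)]
    published = [e for e in observed if publish(e, n_per_arm)]
    print(f"n per arm = {n_per_arm:>3}: "
          f"mean effect, all studies = {statistics.mean(observed):+.3f}; "
          f"published only = {statistics.mean(published):+.3f} "
          f"({len(published)} of {len(observed)} published)")
```

With these assumed numbers, the published mean sits above zero at every study size, but the inflation shrinks as the trials get larger and better powered — the residue-plus-shrinking-effect pattern described for homeopathy above.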

[00:11:39]

Let's take a look at, first of all, everything we know about the biases in the literature, and second of all, the prior plausibility of homeopathy, which approaches zero. If you combine those two things together, we see, well, this is a null effect. This is exactly what we expect to see for a treatment that does not work. In science-based medicine, we advocate more of a Bayesian analysis, which basically says you have a prior probability and then you have a new piece of evidence.

[00:12:05]

And then you can do a calculation to say, well, how does that new piece of evidence alter the prior probability — what's the probability after this new bit of information? Rather than what evidence-based medicine does, which is to assume absolute neutrality prior to looking at the evidence — in this case, the clinical evidence — saying that the probability is essentially neutral: just as likely to be true as not to be true.
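[Editor's note: a worked sketch of the Bayesian contrast being drawn here. The likelihood numbers are invented purely for illustration; the point is how the same weakly positive evidence moves a flat 50/50 prior versus a prior plausibility near zero.]

```python
# Sketch of the Bayesian contrast described above. The 'evidence strength'
# numbers are illustrative assumptions, not estimates from any real trial.

def update(prior, p_evidence_if_works, p_evidence_if_inert):
    """One application of Bayes' rule: probability the treatment works,
    given that we observed this body of weakly positive clinical evidence."""
    numerator = prior * p_evidence_if_works
    denominator = numerator + (1.0 - prior) * p_evidence_if_inert
    return numerator / denominator

# Assume weakly positive trial results are only modestly more likely if the
# treatment really works (0.6) than if it is inert (0.3), given publication
# bias and the other distortions discussed above.
likelihood_if_works = 0.6
likelihood_if_inert = 0.3

flat_prior = 0.5        # evidence-based medicine's implicit starting point
low_prior = 0.001       # science-based medicine's prior for, e.g., homeopathy

print("flat prior 0.5  ->", round(update(flat_prior, likelihood_if_works, likelihood_if_inert), 3))
print("low prior 0.001 ->", round(update(low_prior, likelihood_if_works, likelihood_if_inert), 4))
```

With the flat prior, the same weak evidence lifts the treatment to roughly a two-in-three probability of working; starting from a prior plausibility near zero, the posterior stays near zero — the "post-data probability that is still almost zero" mentioned a bit later.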

[00:12:35]

So that's like a big assumption in and of itself, though. It is, it is. And it's essentially an assumption of flat—

[00:12:43]

Priors, right? Exactly.

[00:12:45]

It assumes a completely flat or neutral prior probability. Essentially, I think the reason why that was done was because they didn't want to give an unfair advantage to treatments just because they made sense. Right? They didn't want physicians using a treatment simply because it made some kind of sense; they wanted to level the playing field. But when they did that, trying to take away the advantage of plausible treatments, they also took away the disadvantage of extremely implausible treatments.

[00:13:14]

So I think that's a problem. So if you look at something like homeopathy, for example, which has a prior probability that approaches zero, and then you look at the— You think, yeah, I mean, you know, it violates some basic laws of physics and chemistry.

[00:13:27]

Yeah. Things like that. That's pretty bad.

[00:13:30]

And then you look at the clinical evidence. A strictly evidence-based approach would say, well, OK, if we assume neutrality going in, then we see this sort of weakly positive result in the clinical data. But nothing is proven. There's not really good Class I evidence. We can't say that there's any indication for which a homeopathic treatment is proven. But for some things you get this residue of weakly positive results, so therefore, maybe there's something there and it deserves further research.

[00:14:01]

That's usually how they conclude. But if you take more of a science-based approach, you say, well, based on the notion that, first of all, this is highly implausible up front, when you then add the clinical data, it actually doesn't change the prior plausibility — the prior probability — very much. You end up with a post-data probability that is still almost zero. So it hasn't really affected it very much. But you also see another pattern in the research with treatments like this, and that is: the better the design of the study, the smaller the effect size,

[00:14:36]

until the best studies show essentially a flat, or zero, effect size. So when you see that pattern, coupled with a low prior probability — because it's just a highly implausible claim — then this weakly positive bias or residue in the published data is consistent with a null effect, a treatment that has no actual clinical effect. And this is where Ioannidis's work comes in: he explains why we see this positive bias in the published literature, because of all the many, many things that a researcher can do to create this positive bias in the research — the kinds of variables that are selected, the outcome measures, how they're measured, how the patients are selected, how they're randomized.

[00:15:19]

There are all different ways to subtly, whether consciously or unconsciously, affect the outcome of research. And so, you know, the implication is that, for example, pharmaceutical companies may be quite deliberately doing this in order to portray their drugs in as positive a light as possible. But I think there are also a lot of very honest researchers who are doing it unconsciously — it's really hard to systematically eliminate every possible source of bias in something as messy and complicated as clinical research.

[00:15:54]

That's why, in fact, we say we have to consider all the research, including the basic science research. You'd like to see that a treatment is plausible, that it makes sense, that we have a mechanism; that it works when you look at observational studies in the real world; and that it has efficacy when you look at really rigorously controlled, experimental trials. You'd like to see that it all hangs together.

[00:16:18]

So this reminds me of a problem that we encounter pretty often in ecology and evolutionary biology, and it has to do with the whole concept of meta-analysis.

[00:16:29]

You know, some people are very much in favor of doing meta-analyses on the grounds, of course, that that way you distill the signal, if there is any, from a large number of studies, sort of smoothing over the inequalities between studies — the different conditions and so on and so forth. But I've often heard the criticism going the other way around, saying that, in fact, you dilute the signal, because you put together a bunch of studies that are really not that reliable.

[00:17:00]

The sample sizes are too small. An example that I have in mind came out a few years ago: a meta-analytical review of studies on natural selection, which, as you know, is a fundamental concept in evolutionary biology. Well, it turns out that if you do a straightforward meta-analysis of hundreds and hundreds of experimental or empirical studies of natural selection, you find that there's no such thing as natural selection, because the average selection coefficient is very close to zero.

[00:17:30]

The explanation for that is that the overwhelming majority of studies on natural selection are actually poorly done — mostly they are not repeated in time or space, and they have small sample sizes — which means that, on average, the effect size essentially becomes undetectable. So the argument that one would make in that case is that, rather than doing a meta-analysis, what you should do is go carefully through the studies, throw out 90 percent of them because they are clearly methodologically flawed or insufficient,

[00:17:59]

and then look at what the remaining 10 percent or so tell you. And I wonder if that is part of the problem that we're having here. That is, for the studies behind the conclusions I mentioned from The Atlantic — things like 80 percent of non-randomized trials being flawed or turning out to have the wrong result — is there a way that we can tell ahead of time, once the study is published, that, you know what, this is really not a good standard, not a good way of doing things?

[00:18:23]

So let's just throw it out and not count it at all. Yeah.

[00:18:28]

Let me talk about meta-analysis first. There actually was a study published a few years ago looking at, again, one of these meta-questions: how reliable are meta-analyses?

[00:18:39]

It was a meta-analysis of meta-analyses. And it showed that they only predict the outcome of later definitive trials about 60 percent of the time — which, when you consider that 50 percent is a coin flip, is not that much better, right? It's only a little bit better. So meta-analyses are actually not that reliable, essentially because you're introducing another layer of bias: now you have biases in terms of how you choose which studies to include in the meta-analysis, and how you choose to adjust for differences among those trials.

[00:19:08]

So you're introducing more bias, which I think probably does counteract the effect of adding more data points. Right, right. So they are very, very tricky to do well. When done well, I do think they serve a role, but I agree with you that systematic reviews are better than meta-analyses. A systematic review looks at all the published research, looks for the patterns in that research, and analyzes every individual study — it doesn't just lump the data together. It says, OK, we see a bunch of crappy studies that are poorly controlled;

[00:19:40]

we can ignore those. Here are the really good studies, and they're consistently showing this. So then the question comes up: what's the role, then, of these poor studies? Why are so many poor studies being done? It is a good question. But, you know, the thing is, there is a role for every kind of research. Sometimes the problem is that the research is done poorly — it's supposed to serve a certain role and it's just not being done well.

[00:20:08]

But the bigger problem, in my experience, is that the evidence is used incorrectly — like, you can't use observational data to make efficacy conclusions, because you're not controlling for variables, for example.

[00:20:21]

But in this case, part of the problem is that you're looking at what we call preliminary data — preliminary studies — and then trying to draw some kind of definitive conclusion from them, or base your medical practice on them. The role of preliminary studies is to help you design later definitive trials, so they do serve a role.

[00:20:41]

You can't go right to the large definitive trial, because you won't know how best to do it. You learn how to research a very specific question by doing these preliminary studies, and that's what the later definitive studies emerge out of. The problem, however, is basing your practice on these preliminary studies, rather than saying, all right, all these tell us is basically how to do more research to find out what the real answer is.

[00:21:15]

And again, what Ioannidis's work tells us is that if you rely on that preliminary data, most of the time you're going to be wrong.

[00:21:23]

So, Steve, one of our readers on the Rationally Speaking blog brought up the question of whether there's some research that is just never going to be able to approach the standards of rigor we would need to really be confident in it. And he was talking about some of the studies that have to be observational, because they can't be otherwise — like studies on the effect of various vitamins on health.

[00:21:47]

And, you know, we can't take a group of people and make them take this vitamin over a long period of time and have these other people not take it. So we're left just sort of trying to find patterns in the data. And so he was sort of expressing pessimism about whether we could ever do accurate research in these areas.

[00:22:07]

Yeah. So file that under difficult but not impossible. And you have to appreciate how difficult it is. But let me give you the classic example of that kind of data: the data linking smoking with lung cancer. Right? He's basically taking the tobacco industry line — that no one has ever done the kinds of studies that would definitively prove a cause and effect between smoking and certain kinds of lung cancer, because you can't randomize people to smoke.

[00:22:34]

You can't do that study. You can't randomize people to not get a proven therapy. So there are certain things that are just unethical — you can't expose people to risks, expose them to disease, or withhold care. For that reason, we are limited to observational studies. So then what do we do? Well, we try to pile up as much of the inferential data as we can and see if it all triangulates to the same answer. So we can do animal studies.

[00:23:01]

You can make rats smoke over the years — you can expose them to cigarette smoke and see if they develop lung cancer. Then you can make a hypothesis: if smoking causes lung cancer, then we should see a dose-response curve. The longer you smoke, the higher the risk of lung cancer. If you smoke unfiltered cigarettes, the risk should be higher than if you smoke filtered ones. If you quit, your risk should start to come down. And if we look for an underlying mechanism, we should see that there's got to be a carcinogen in there somewhere.

[00:23:29]

So you put together the basic science, the animal data, and multiple different kinds of observational data to see if it all triangulates, lines up with the one hypothesis that explains these correlations — and that is that smoking actually causes lung cancer. And you can get to a point where you're confident enough that you can make some actual medical recommendations based upon that. And you can do the same thing with vitamin studies or with other things. Again, the trick is not to give up early — not to look at one or two observational studies and say, oh, OK, B12 prevents Alzheimer's disease,

[00:24:06]

now we know that. Rather, you say, OK, this is interesting observational data — there seems to be a signal here, there's some kind of correlation. Now let's think of all the many ways in which this correlation could exist, all the different kinds of biases that could be creeping into this observation, and then look at the data frontwards and backwards and sideways, in all these different ways, and see if, no matter how we choose to ask this question and look at the data, we still get this signal that there is some beneficial effect to taking this vitamin.

[00:24:38]

And as you said, then you can get to the point where it's reliable enough that you can make health care recommendations based upon it.

[00:24:44]

So it sounds like you're actually a lot more optimistic than I've heard Ioannidis sound in describing his own research. I read a quote from him in which he said, I'm not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. And so it sounds like what you're saying, if I'm understanding you correctly, is that a large percentage of the research is actually barking up the wrong tree, but that it's all indirectly leading to definitive, accurate research down the road.

[00:25:18]

But it's a mistake to consider those preliminary findings somehow part of the problem when they're actually part of the process. Is that accurate?

[00:25:28]

Mostly. I wouldn't say all studies are part of the process — I think there's also a lot of crappy research in there, too. You know, I think sometimes people just do a bad job at the research, or, you know, academics are under a lot of pressure to publish, and I think sometimes people publish data just because they've got to get papers out there. Sometimes I shake my head and wonder, why did anyone bother to do this study?

[00:25:50]

You know, sometimes people go backwards. You'd like to see the literature progress forward, where you're learning something from the previous studies and then doing research to add some kind of new information or a new angle to the data. And sometimes I see people going backwards and doing studies which are worse than ones that have already been done, which actually don't tell us anything. And you wonder why somebody even bothered to do that kind of study.

[00:26:12]

So there's a lot of messiness, there's a lot of noise, there's a lot of crap in the research — and sure, this is true of every discipline — but you have to sort of wade through that, pick out the good studies, look for the patterns in the research, and see that it all holds together. And we have many, many things for which that's the case; we just sort of take them for granted now. And the reason why scientists and physicians were not surprised by any of this is that if you have a sophisticated and mature approach to the literature — the kind of thing that we like to teach our students, for example — this is what we've been saying for years, for decades.

[00:26:45]

This is not news to anybody. We know this. This is how you have to look at the whole research, the whole literature — not just pick out the studies that you want, or some small subset of the studies, or make the wrong kinds of inferences from the wrong kinds of data.

[00:27:02]

So some of what you were saying reminds me — you know, this idea that there are a lot of studies out there, a significant portion of which are, let's say, of sub-par quality for a variety of reasons, including the sociological ones you were talking about: the publish-or-perish attitude in academia, people having to get tenure, and all that sort of stuff. There is a classic article that was published in Science in 1964 by John Platt.

[00:27:30]

The title of the article is "Strong Inference." It's a real classic — a commentary about the methodologies of different sciences, particularly comparing the social sciences with the chemical and physical sciences. And at one point he says that, you know, people often refer to science as building the edifice of knowledge brick by brick, but they keep forgetting that the majority of these bricks just lie around in the backyard unused. And that seems like a pretty good metaphor for what is still going on today.

[00:28:02]

In fact, it's possibly worse than the situation was in 1964, because we have a lot more people trying to get into scientific careers — the level of competition arguably has increased compared to what it was 30 or 40 years ago. And so we have a lot of other bricks out there that are just lines on somebody's CV, but they actually don't serve any purpose. They just clutter things up, in fact, and make it more difficult for us to discriminate the noise from the signal.

[00:28:33]

Yeah, I agree.

[00:28:34]

There's a lot more noise. There's a lot more clutter, there's more journals. There's a lot of so-called throwaway journals that you can publish. There's a lot of online publishing now, which I don't think it's necessarily a bad thing. I think there's a good quality online journals, but it just means there's this explosion of information. There's so many articles to sort through that. Yeah, it's very, very challenging. But it's I kind of liken it to a little bit to taking pictures.

[00:29:05]

You know, you snap a lot of pictures, and at the end of the day, it doesn't matter how many bad ones you have — it only matters how many good ones you have. How many good pictures from your vacation, that's what matters.

[00:29:16]

It doesn't matter if you take a million bad ones; you can just delete them, especially if you have a digital camera. That's harder to do with medicine, so it's not a perfect analogy: it does matter that you get the good data, and at the end of the day you can ignore the bad data, but there are limited resources out there for doing studies, and there's a certain ethics involved in subjecting people to research. So I do think that we probably should raise the bar a little bit for the kind of research that people are doing, and maybe remove a little bit of the noise and clutter, and not do so many preliminary studies.

[00:29:51]

I think we should maybe move a little bit more efficiently to the definitive trials.

[00:29:55]

Steve, a little earlier you were talking about the problem of publication bias as being a major contributing factor to the problem that Ioannidis has been publicizing — essentially, that if you do enough studies testing for some effect, then just by chance some of them are going to find a positive, significant effect, and those are the studies that the journals are going to want to publish. So this does seem like a big problem.

[00:30:22]

And what I wanted to ask you about was a recent solution that, my understanding is, has been tried for clinical trials at least: that now, at least in the US, there's a requirement that you register all your clinical trials at clinicaltrials.gov. So that, essentially, if you have 20 studies that search for an effect and only two found one, it's no longer the case that you just see those two published and have no idea what the larger pool looks like.

[00:30:54]

You can actually go to clinicaltrials.gov and see what percentage of the entirety those positive results represent.

[00:31:01]

So what is the status of that? Is that working? Is this going to solve our problems?

[00:31:06]

Well, it is a huge step in the right direction for clinical trials. And part of this was motivated by the fact that pharmaceutical companies are not just engaging in well-meaning publication bias — they're actually cherry-picking. I mean, there are cases where they'll do five studies and then publish the two that had a positive outcome and bury the three that had a negative outcome, when, if you look at all of the trials, the net outcome is actually zero — no effect.
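[Editor's note: a toy illustration of the cherry-picking arithmetic just described — five hypothetical trials of a drug with no real effect, of which only the two that happened to come out positive get published. The effect sizes below are invented.]

```python
# Sketch: how cherry-picking trials distorts the apparent effect.
# Five hypothetical trials of a drug with no real effect; the observed
# effect sizes below are invented noise around zero.
all_trials = {
    "trial_1": +0.30,   # happens to look positive -> published
    "trial_2": +0.25,   # happens to look positive -> published
    "trial_3": -0.20,   # negative -> buried
    "trial_4": -0.15,   # negative -> buried
    "trial_5": -0.18,   # negative -> buried
}

published = {name: effect for name, effect in all_trials.items() if effect > 0}

def mean(values):
    return sum(values) / len(values)

print("apparent effect from published trials only:", round(mean(list(published.values())), 3))
print("effect across ALL registered trials:       ", round(mean(list(all_trials.values())), 3))
```

A registry like clinicaltrials.gov is what makes the second number computable even when only the first shows up in the journals.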

[00:31:37]

So cherry-picking is a serious problem. It's basically cheating: you're looking at a subset of the data, and as long as you divide that data among more than one trial, you can just ignore the trials that don't have the results you like. I think what we're moving towards, which is what we absolutely need, is essentially transparency — rigorous transparency, by registering every trial — essentially saying that along with the privilege of doing research on human beings comes the responsibility to make that data available to researchers and academics.

[00:32:14]

And you can't hide it after the fact. You don't own that data; the public owns that data, because you were given the privilege of doing research on human beings. And therefore we can get much more reliable and thorough results, because that's what's necessary in order for science to work. I don't know that it will solve all the problems, but I think it's a necessity. It's definitely a requirement for the results to be reliable.

[00:32:39]

Now, speaking of Big Pharma, this reminds me of a paper that came out maybe within the last couple of years, possibly even more recently, which compared several papers addressing the efficacy of the same kind of drugs and then divided them into two groups depending on the funding source — whether the funding was provided by the pharmaceutical industry, the private sector, or by a federal agency, for instance. And perhaps not surprisingly, what they found was that the studies funded by the pharmaceutical industry were several times more likely to find significant, positive results for the drugs compared to those funded by the federal government.

[00:33:26]

Now, there was no implication there that there was actual cheating, at least not in the article that I'm referring to. The idea was that, you know, if you know where your money is coming from, there are all sorts of more or less subconscious biases that you can introduce into your own research, your own conclusions, the way you present the data, and so on and so forth. So there's no need to necessarily invoke conscious fraud by the researchers involved.

[00:33:52]

Now, that does, however, raise the question of the funding sources for this research. You know, is there a general way to approach this kind of problem, or are we just stuck with it because of the nature of medical research?

[00:34:08]

Well, let me give you two different answers there. First, I think that while there is unconscious, ordinary experimenter bias that creeps in, I think there's also some not-so-unconscious bias that creeps in with pharmaceutical research — and I would say industry research generally; the supplement industry and every industry does this. If you have a vested interest in the outcome, it's hard for that not to creep into the results. And it's not just this sort of vague bias that we can't quantify.

[00:34:39]

Sometimes there are very specific choices that are made where we can say, yeah, that choice was rigging the game in favor of a positive outcome — again, looking at the kinds of outcomes you choose to measure, how you choose to randomize, or the inclusion and exclusion criteria, basically the people you choose to do the research on. Certainly pharmaceutical companies design studies to give their drugs the best chance of having a good outcome.

[00:35:05]

I think that probably explains most of that bias towards a good outcome — they're essentially looking where the light is good, or doing the studies that are more likely to be positive even when done rigorously.

[00:35:18]

Well, but — so, I was going to make another point. Remind me what the question was. The latter part of the question was whether there's a general kind of approach or solution that we could pursue for this inherent bias that is generated by the funding. You said you had two different answers to it. Yeah.

[00:35:38]

So the other part of that was that because we can identify specific behaviors, that opens the door to fixing them, right? To doing things that will mitigate them. And I guess the other point I was going to make was, what do we do about the funding? I think you were asking, are we stuck with a system in which industry funds a lot of the studies? Well, that's a big question. You know, we have a capitalistic system.

[00:36:04]

We rely upon the pharmaceutical industry, for example, to spend billions of dollars on research. We just don't have an infrastructure that can replace that funding. So unless we want to radically change the structure of our society,

[00:36:22]

we're going to need to deal with industry-funded research. I don't think it's a fatal problem, though. I'm not nihilistic about it; I think it can work. You know, we're actually pretty smart, pretty good at figuring out all the ways to manipulate data and bias trials, so we can make them better. And at the end of the day, there is a lot of good research that eventually gets done.

[00:36:47]

I think it's actually pretty hard to create a never-ending illusion of a benefit where one doesn't actually exist. Eventually the chickens come home to roost. In other words, eventually, if something doesn't work, we'll see that in the literature; we'll see that in the research.

[00:37:07]

Now, let me go back for a second to a sentence from the Atlantic article, simply because it made me think about what I do every morning. I'll give you an example. You know, I think of myself as a reasonably informed person about science, and I go to my doctor and talk with my doctor to a good degree about whether there is any problem or not. And then I read this thing that says, and I quote, "We consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect,

[00:37:44]

and that may be as likely to harm you as to help you." End quote. Now, here's my problem. I understand as a biologist that what he's saying is correct: metabolism is, in fact, a very complex network, and it's certainly the case that you can't just intervene surgically in one place and solve a problem. On the other hand, that raises a very practical question for me. You know, my doctor told me a few months ago that I have a slight vitamin D deficiency,

[00:38:12]

so I've been taking pills in the morning for that. And I go to my cardiologist, and what he tells me is, you know, for your age, your heart is in pretty good condition. On the other hand, because of family history, it might not be a bad idea to start taking some low-dose aspirin, that sort of stuff. Now, then, as an informed person who talks to his doctor,

[00:38:35]

you read that kind of thing in the paper, and then you say, well, should I really bother taking this stuff in the morning — spending money, depending on how many supplements you take? Or should I just say, you know what, as far as I'm concerned, as long as I keep a reasonable diet and get a little bit of exercise, and nothing really major happens, why bother? What would your take be, as a practical matter, once you read this kind of article?

[00:39:04]

Yeah, part of the question is where you set the threshold for saying, all right, there's enough evidence here to make a decision based upon it. But it's partly worth pointing out that not doing something is a decision too, and it isn't always necessary to make that the default decision, although we do say, first, do no harm.

[00:39:29]

Right. We want to make sure we're not at least harming somebody or doing something that's going to make things worse. So there is a little bit of a bias towards the default of not doing something. But when you're talking about preventive measures, et cetera, not doing something is a decision; it is a choice. So, for example, with aspirin — that's a really good question — there are situations in which we have really good data that taking a gentle blood thinner, like a certain dose of aspirin on a daily basis, reduces the risk of heart attacks and strokes.

[00:40:02]

And then there is a population for which the data is in the grey zone, where it may be beneficial, but there's also an increase in stomach problems and bleeding, and we're not exactly sure where the lines cross. And you may be close to that line — you may be just as likely to give yourself an ulcer as to prevent a stroke or a heart attack.
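[Editor's note: a sketch of the benefit-versus-harm bookkeeping behind this grey-zone aspirin question. Every rate below is a hypothetical placeholder, not a clinical figure; a real decision would take these numbers from the trial literature for a person's specific risk group.]

```python
# Sketch of the benefit-vs-harm bookkeeping behind the aspirin question.
# ALL rates below are hypothetical placeholders, not clinical figures.

baseline_cv_event_risk = 0.020      # assumed 10-year risk of heart attack/stroke
relative_risk_reduction = 0.15      # assumed benefit of low-dose aspirin
baseline_bleed_risk = 0.005         # assumed 10-year risk of a serious GI bleed
relative_bleed_increase = 0.50      # assumed extra bleeding risk on aspirin

events_prevented = baseline_cv_event_risk * relative_risk_reduction
bleeds_caused = baseline_bleed_risk * relative_bleed_increase

print(f"cardiovascular events prevented per 1000 people: {events_prevented * 1000:.1f}")
print(f"serious bleeds caused per 1000 people:           {bleeds_caused * 1000:.1f}")
print("net favorable?", events_prevented > bleeds_caused)
```

With these made-up inputs the benefit barely edges out the harm; nudge the baseline risks slightly and the comparison flips, which is exactly why the answer depends on the individual and on expert review rather than a simple rule of thumb.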

[00:40:24]

You just made my life more complicated. Yeah, and there are people who clearly shouldn't be taking it. So there's always going to be that gray zone. Yeah. And unless you are steeped in the literature, where you could make a very well-informed decision, you basically have to rely upon an expert to make that decision for you, or at least to give you good advice. And you go, like with anything, with the consensus of expert opinion. Has a panel of disinterested scientists looked at this data and come up with recommendations?

[00:40:55]

And is your physician following those recommendations, or are they shooting from the hip? Are they saying things like, well, in my experience — or are they saying, well, the Academy of Physicians looked at this data, and these are the official recommendations, so that's what we're following? It turns out that when you follow official recommendations that are rigorously evidence-based and reviewed by expert panels, the outcomes actually are better than if you just go based upon your gut or your sense of the research.

[00:41:35]

So I wish there were some algorithm or easy answer to all this, but there isn't. It's complicated, and there's no substitute for going through a lot of complicated data and making a nuanced assessment.

[00:41:49]

Well, thanks — you've given me something else to think about tomorrow morning after breakfast. OK, Steve, before you give Massimo even more complications for his daily routine, I'm going to cut you off now, and we are going to move along to the Rationally Speaking picks.

[00:42:22]

Welcome back. As usual, we finish our episode with a pick or two of books, websites, movies, or anything else that might be interesting to our listeners. When we have a guest, we ask them to do the honors. So, Steve, what is your pick this time?

[00:42:35]

Well, OK — I was given very short notice for this, so I'm going to go with something that I've seen very recently. There's a new series on AMC called The Walking Dead. Have you guys heard about it?

[00:42:43]

Oh, no, but that sounds interesting. Now, you know, I'm a big fan of the zombie genre. I saw the first episode of this, and what pleased me about it — what's good about this show — is that it's not a cartoony approach to the whole zombie-apocalypse question. They're actually trying to have good writing and good character development, just really good storytelling that happens to be taking place in a world where there's been a zombie apocalypse.

[00:43:14]

So I'm very excited after seeing the first episode. I love well-written science fiction, because there's so much crappy science fiction out there. A lot of shows rely upon special effects, or on the usual science fiction clichés. When you get that magical combination of something taking place within a genre that you like — like zombies or sci-fi — that is also just great storytelling and great fiction, then you have something really worthwhile. Like, for example, Battlestar Galactica — right up until the last episode.

[00:43:49]

Right.

[00:43:49]

Right. Yeah. So, Steve, I'm curious about your take on the science, specifically the epidemiology of the idea of a zombie pandemic.

[00:44:00]

What I've heard in some places is that the mechanics of a zombie epidemic wouldn't actually work — that the epidemic would quickly burn itself out. Is this the sort of thing that could, in theory, actually happen?

[00:44:15]

Well, you know, an epidemiologist wrote a paper about this, looking at a zombie outbreak as a model infectious disease. Right. And he basically showed that it could work, and it wouldn't take too long for the world to be overrun with zombies, actually.

[00:44:34]

But yeah, it depends on the variables, right? There are lots of variables you have to manipulate in terms of how you conceive of the zombies functioning. First of all, is it an infection? It's almost always an infection. If it is, how is it passed from one person to the next? Is it airborne? Does it have to be a bite? Does there need to be saliva? Could any bit of contact with a zombie's surface be enough, or do you need an open wound?

[00:45:03]

The harder it is to get infected, the slower it will spread. You'd also look at the incubation period: how long does somebody harbor the zombie infection before they manifest signs? Good question. The longer that period, the farther you'll pass it around before it gets noticed.

[00:45:25]

So, yeah, that's why it was an interesting epidemiological paper — all the same variables are there that are there for a real infection. And, you know, it was sort of a whimsical way of modeling a hypothetical outbreak or pandemic, just making it a zombie infection instead of something else. But the same rules apply.
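[Editor's note: a minimal discrete-time compartment model (susceptible → incubating → zombie) in the spirit of the modeling discussion above. The parameters are arbitrary illustrations, and this is not the model from the paper being referenced.]

```python
# Toy susceptible -> incubating -> zombie compartment model, in the spirit of
# the discussion above. Parameters are arbitrary illustrations.

population = 1_000_000
susceptible, incubating, zombies = population - 1, 0, 1

transmission_rate = 0.8      # effective new infections per zombie per day, early on
incubation_days = 3          # assumed average days from bite to turning
turn_fraction = 1.0 / incubation_days

for day in range(1, 61):
    # chance a given susceptible person is bitten today (well-mixed assumption)
    bite_prob = min(1.0, transmission_rate * zombies / population)
    new_infections = susceptible * bite_prob
    new_zombies = incubating * turn_fraction

    susceptible -= new_infections
    incubating += new_infections - new_zombies
    zombies += new_zombies

    if day % 10 == 0:
        print(f"day {day:2d}: susceptible={susceptible:>10.0f} "
              f"incubating={incubating:>9.0f} zombies={zombies:>10.0f}")
```

Lowering the assumed transmission rate or stretching the incubation period slows the takeover — the same dependence on the assumed mechanics described above.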

[00:45:50]

Well, that wasn't quite as reassuring as I'd hoped, but interesting, at least.

[00:45:55]

So, unfortunately, we're now all out of time.

[00:45:58]

But it was such a pleasure having you on the show, Steve. It was a lot of fun. Thanks for having me — so much fun.

[00:46:04]

And so we are going to wrap up now. But I just wanted to, again, remind all of our listeners that the Rationally Speaking podcast is brought to you by the New York City Skeptics. We encourage you to go to the website, nycskeptics.org, and check out all of our interesting material there on upcoming events and lectures and our upcoming conference, the Northeast Conference on Science and Skepticism, and think about becoming a member. So join us next time for more explorations on the borderlands between reason and nonsense.

[00:46:43]

The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollak and recorded in the heart of Greenwich Village, New York. Our theme, "Truth," by Todd Rundgren, is used by permission. Thank you for listening.