[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and with me is today's guest, Professor John Ioannidis. John is a professor of medicine, health research and policy, and statistics at Stanford University. His research mostly focuses on the scientific process itself.

[00:00:55]

His most famous paper is "Why Most Published Research Findings Are False." This is, I think, the most downloaded paper in its journal and has helped make John one of the most cited scientists, period. More recently, John has become the co-director of the Meta-Research Innovation Center, or METRICS, at Stanford. So we're going to talk about METRICS, talk about other work that he's done on the scientific process and how it works or doesn't. It's a lot of ground to cover.

[00:01:27]

John, welcome to Rationally Speaking. Thank you.

[00:01:31]

I think the point where I want to jump in is actually an essay you wrote recently about evidence-based medicine, which is a movement that essentially advocated for doctors using empirical evidence, like academic studies, to determine the treatments that they prescribe, as opposed to, you know, using their own intuition or personal experience or just the prevailing common wisdom. Evidence-based medicine originated, I think, in the 90s and has kind of gained traction since then. But, John, in your article, you basically argued that evidence-based medicine has been co-opted or hijacked to a significant extent.

[00:02:07]

Can you talk a little about that?

[00:02:09]

So evidence-based medicine has been a great idea and has been out there for about 25 years now. It was a wonderful opportunity to instil more quantitative scientific evidence into medicine and to try to link that to the patient-physician relationship and decision making. So many people were excited about the possibilities of having numbers and strong data and evidence guide our discussions with patients and trying to rationally decide what to do and to offer the best for people who are sick or for people who want to remain healthy.

[00:02:48]

However, even though the movement has been very successful in terms of how many people are adopting quantitative methods and seemingly more rigorous designs, trying to accumulate data for medical decision making like randomized trials, and then combinations of trials and multiple studies and meta-analyses and systematic reviews, somehow I think that the movement has been hijacked by many conflicted stakeholders who are trying to use evidence to advance their own causes rather than science and better health for people. So there's a lot of improvement.

[00:03:32]

I think that people now are more willing to try to understand quantitatively what it means to get a new treatment or to try a new test or to do or not do something about your health. But at the same time, there's lots of stakeholders who are trying to co-opt, hijack, get the best out of this, which means that they will manipulate the evidence, they will distort the evidence, they will subvert the evidence. They will interpret the evidence in ways that are to their benefit.

[00:04:08]

It reminds me a little bit of this expression known as Goodhart's law, where you have some useful metric, some measure, but once you make it explicitly known that that's the measure you're using to judge the quality of something, then people start treating the measure as a target and start adapting what they're doing so that they score well on the metric. And then it ceases to be a useful metric. Is that kind of what's happening?

[00:04:30]

It's a normative response. When something acquires value, everybody wants to have that in their portfolio. So 30 years ago, we would be lamenting that we have no evidence, for example, about how to treat depression, and we have just little bits and pieces, fragments of small trials and no meta-analysis to tell us what to do. And now, in an empirical evaluation that we did, we found that within six years there were 185 meta-analyses of randomized trials that had been published, and about 80 percent of them had involvement by the industry that manufactures these drugs.

[00:05:08]

And whenever you had an industry employee involved, practically all of these meta-analyses, with one exception, had absolutely no caveats about these drugs in their conclusions. When you had completely independent people, about 50 or 60 percent of them had serious caveats about these drugs. But those were just the minority of meta-analyses in that field.

[00:05:29]

So is industry, like the pharmaceutical industry, basically the main stakeholder in that set of stakeholders you were referring to who have co-opted EBM? It's one of the stakeholders, and I don't say this in a way of accusing the industry or any of these stakeholders. There's many others involved. So, for example, scientists who have absolutely no conflict, at least no financial conflict: we all have our own conflicts for our theories, for whatever we have proposed, for whatever has made us famous, and consciously or subconsciously, we will try to defend it.

[00:06:07]

So it's very difficult for a scientist really to just kill his or her main achievements with new evidence. Even other entities that may seem to be totally unconflicted and seemingly just wishing the best for their research and for outcomes in patients, like, for example, physicians. We give an oath to try to help human beings. But nevertheless, we live in an environment where we need to make a living. And making a living for a physician nowadays means that they need to get more patients that they see and more tests that they order and more drugs that they prescribe and more hospital care that they offer.

[00:06:57]

This may not necessarily be towards improving the outcomes, the really important, patient-relevant outcomes, in these people. So sometimes specialists have to decide: if I get some evidence that shows that what I do and what I make a living from is not useful, not effective, and maybe even harmful, this will mean I will practically need to lose my job and maybe retrain in my 50s to do something completely different. Would they do that? So evidence is great and information is great.

[00:07:31]

Rigorous information is great, evidence is fantastic, but evidence has implications, and even seemingly unconflicted people and stakeholders may have their own conflicts eventually.

[00:07:44]

So do you think that the problem is that we just didn't define the standards in evidence-based medicine strictly enough, such that there wasn't a way to score highly on that metric without actually being good, like a reliable piece of research? Or do you think there's just no way to define standards that strictly?

[00:08:04]

I think that it is a process in the making. So I'm not a pessimist. I think that as we identify all these problems, hopefully we can come up with solutions. And one major issue, one major question, one major caveat, one major challenge is who is doing what and why. So randomized trials are important. Systematic reviews and meta analysis are important. Guidelines that are well informed and evidence based are important. Doing research is wonderful. Science is the best thing that has happened to humans, no doubt about that.

[00:08:40]

But who is the person? Who is the team? Who is the coalition? Who is the stakeholder? Who will do each one of these pieces? That, I think, remains a largely unanswered question. And we know currently that some mixes of stakeholder and task to be done are not really the best possible.

[00:08:57]

So currently we are forcing, for example, the industry to run the main trials that are giving us the evidence about whether a new drug, a new biologic, a new intervention works or not. Is this a good idea? In a way, it's like asking a painter: you will have to judge the quality of your painting and give the award for the best painting yourself. It's probably not going to work, right?

[00:09:23]

When you say we're forcing the industry, you mean that it's sort of the law that they have to do their own trials? They are under regulatory pressure: they need to not only develop the basic elements of the science behind the new interventions and come up with the new interventions, but they also need to test these interventions with randomized trials, and they need to have convincing results in these trials so that a regulatory agency will eventually approve these new drugs or biologics or whatever intervention.

[00:09:56]

And there's very little funding for anyone else to do this type of research. So the industry has to do this work. They have to give the credit to their own products that they manufacture. So guess what? They will do their best to try to find ways to make sure that whatever studies they do, eventually the results will seem to be positive.

[00:10:19]

Got it. So this really isn't so much of a sneaky, surreptitious, like, hijacking of evidence-based medicine. It's basically just all the stakeholders doing the very obvious, straightforward thing that they should do given the system that they are embedded in.

[00:10:38]

Exactly. And I think that it's probably a misallocation of roles that is at the bottom of all these problems. For example, if the industry could spend all their investment towards developing a new drug or a new biologic, but then independent scientists would have to test it out to see whether it works or not, that would not be an issue. If scientists who come up with a new idea, exploring different data sets or coming up with some weird thinking at three o'clock after midnight, you know, build that idea and come forth and say, here's a new concept, a new theory, a new observation, a new association: congratulations, that gets published.

[00:11:18]

But then someone who's completely independent tries to validate this in a validation agenda; again, we would not have that problem. It's an issue of how you disentangle people who have a very strong conflict of getting a particular type of result, or a particular interpretation of the results, from the process of validating the observations and the claims.

[00:11:41]

One stakeholder that we haven't really touched on is alternative medicine, or, as some might want to say, you know, quacks or quackery. I've heard some complaints that they have sort of actively seized on these standards of evidence-based medicine as a way to game the system and make their alternative treatments, like homeopathy, seem more legit than they are, or at least blur the distinction between alternative medicine and conventional medicine by saying, like, well, look, you know, we have studies too, look at our studies.

[00:12:15]

Is that something that you're worried about?

[00:12:18]

The problem is that quacks will just do anything and will just say anything to try to establish some seeming validity for what they do and what they propose and what they sell. So clearly, this is a huge problem, and especially in matters of health, there's so many people who make claims about health that are entirely unfounded. They have no evidence at all, or actually we have very strong evidence that they don't work. So, you know, homeopathy, I think we have extremely strong evidence that it does not work beyond what is the placebo effect.

[00:12:53]

Now, the placebo effect is not negligible, but at the same time, this doesn't mean that this is a vindication for homeopathy. I mean, you can do anything that will let you get the placebo effect. And it's just kind of a societal question of what is an acceptable placebo at that point. But I think that the bottom line is that we have to separate these non-scientific approaches, that either have no evidence or have very strong evidence that they don't work, from scientific approaches, that sometimes, oftentimes, have some evidence that they may work.

[00:13:34]

Other times it may be inflated. Sometimes it may still be wrong because there's just too many errors and biases in the process. And we should try to clean up that segment of the scientific enterprise that is likely to lead to progress. At the same time, we need to fight against fundamentalists, people with strong religious beliefs, quacks, homeopathy, you just name it. And I think that the battle is really getting worse over time. But science is about testing for reproducible results, for validating, for checking out errors and biases.

[00:14:12]

We cannot assume a position that we are scientists, therefore we cannot be judged, we have no errors, no biases, and this is why you need to believe us. That would be adopting the recipe that all these non-scientific quacks are trying to adopt.

[00:14:28]

Indeed. I think that can just be a tough line to toe, navigating between, on the one hand, yes, we're scientists, our current scientific best guesses are not 100 percent definitively true, and on the other hand, you know, being so sort of blindly open-minded as to continue to test things like homeopathy that, you know, don't have any sort of logical causal model behind them and have no evidence as well. Which, I mean, I think that is a line you can navigate.

[00:15:02]

But I think in practice, people often lean too hard on one side or the other.

[00:15:06]

I agree it is a constant challenge. And I think that at least my line of operation is to try to respect the scientific method. The scientific method allows for the possibility of error and for correcting error. I think this is what distinguishes us from homeopathy and from any other sort of non scientific approach. They don't really accept the possibility of error. They just believe that this is so and it has to be so. And there is no room for correction, there's no room for improvement.

[00:15:40]

There's just a textbook or a sacred book or something that was written at some point and it was correct, period.

[00:15:49]

So, you know, I've heard that sort of delineation between science and pseudoscience before, but I'm actually a little worried about touting that standard, because I fear that just like alternative medicine and other kinds of pseudoscience have managed to sort of mimic the trappings of good science by doing RCTs and stuff like that, I worry they'll also learn how to mimic the trappings of revising and changing their mind, just never about the bottom line itself. You know, sort of like... so, I'm ethnically Jewish.

[00:16:22]

And so I'm sort of familiar with the way that Talmudic debates seem to encourage open-mindedness and questioning assumptions and so on. But still, you're never supposed to actually end up concluding God doesn't exist. Right? And so you can kind of put on this good show of, like, changing your mind and revising your opinions and so on, while never actually being willing to, you know, question the sacred cows. And I worry that, you know, something like homeopathy could do the same, or astrology even, like, oh, well, you know, we realized we were wrong.

[00:16:55]

Like, Cancers, the astrological sign, are actually more about extroversion and less about introversion. Like, look, we are scientific.

[00:17:04]

Well, I think that one could come up with lots of mental and pseudo-intellectual twists to add to that. But basically, science requires that falsification is an option. If I start testing a hypothesis and I don't have the option of falsifying it versus verifying it, that's not science. So if I have to reach a conclusion no matter what, this is not science. There's some things in science that we're, like, very, very close to one hundred percent certain about.

[00:17:38]

It's like 99.99999 percent, like climate change.

[00:17:42]

And the fact that humans are making a difference in that regard, or smoking is killing people. It will kill a billion people in the next century unless we do something. It's 99.9999 percent. And I think that it makes a huge difference compared to pseudoscience claims that are 100 percent correct.

[00:18:05]

And there's no way that you can reach a different conclusion. In science, we're always open to evidence and open to understanding what that evidence means. I don't think that we need more evidence about smoking and about climate change. I think that we have had enough. But if someone, let's say, were to bring evidence that is 10 times stronger than what we know currently, I think that we are open to revisiting what we know, even though we're so certain about those things.

[00:18:33]

I think that this is a major distinction between pseudoscience and science. In pseudoscience, you always get the conclusion that you want. In science, you're completely open to what the conclusion should be. And if you're a good scientist, somehow you're even happy if you destroy your initial conclusion and your initial theory, because this is how you make progress.

[00:18:55]

I mean, if you just verify once again what you already know, that's useful, but it's not really offering much information gain. It's not really adding much. So destroying your own theories and your own best bets is clearly something that science can and should do.

[00:19:16]

Yeah. And ideally, you know, science can reward people for that. You know, it is kind of, as you alluded to earlier in this episode, a difficult thing to do, to be a 50-year-old scientist and, you know, actively give the universe the opportunity to prove your entire body of work wrong. Right? It requires a kind of bravery. And I think, it's not that you are implying this necessarily, but I think it's a little unfair to sort of hold up this

[00:19:47]

beautiful principle that scientists should always be happy to disconfirm their results, while ignoring the kind of human reality of the situation. Obviously, we can't sacrifice that principle, because it's important to how science works. But I think it is still important to acknowledge the fact that it's so difficult for people, and to try to reward people with plaudits and social approval and respect and everything when they do actually take that step. Absolutely.

[00:20:11]

Science is done by humans; it is not done by perfect beings who are eager to follow perfection against all odds. And the reward system is particularly important in that regard, trying to promote the best behavior among these humans. Scientists can be very well trained and they can be very strong experts, but they still have their personal preferences. And obviously, as you said, it's not very easy to discredit your own hypothesis and your own theories. So there needs to be a reward system in place that really gives incentives to scientists to really follow the path to getting to the truth, rather than just defending what they have done in the past.

[00:20:54]

If we just get incentivized to make sure that we find the same things that we found in the past, we're not going to make much progress. And unfortunately, this is happening to a large extent. Most of the funding mechanisms are asking scientists to come up with significant results, successes; nothing should be wasted. And this is very unrealistic. Science is a very difficult process. We do a hundred experiments and one of them works.

[00:21:22]

And this is perfectly fine. It doesn't mean that the other 99 were wasted and somehow the credits should be split across all these hundred teams who did these one hundred experiments, even though one of them was the only one that was so successful and led to the Nobel Prize type of discovery. It's that cumulative team effort that builds the cumulative science.

[00:21:46]

It reminds me of how, you know, people judge probabilistic predictions by the outcome, as if the probabilistic prediction was, you know, one hundred percent or zero percent. Like, you know, a prediction that something is 70 percent likely, if it's actually well calibrated, will be correct 70 percent of the time. And when it is, people will say, great, the prediction was correct. And then the 30 percent of the time when it's not, they'll say, why?

[00:22:12]

Why are our polls so wrong? What's wrong with our models and so on? Absolutely.
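Julia's calibration point can be checked with a toy simulation: if a forecaster's 70 percent calls really come true 70 percent of the time, about 30 percent of them will still look "wrong" after the fact. A minimal sketch in Python, with an arbitrary sample size and seed:

```python
import random

random.seed(0)

# A perfectly calibrated forecaster: every event they call "70% likely"
# really does happen with probability 0.70.
n_forecasts = 10_000
came_true = [random.random() < 0.70 for _ in range(n_forecasts)]

hit_rate = sum(came_true) / n_forecasts
print(f"share of 70% forecasts that came true: {hit_rate:.3f}")
# Prints roughly 0.70, meaning about 30% of the forecasts were "wrong"
# even though the stated probabilities were exactly right.
```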

[00:22:16]

I think that there's a lot of misunderstanding about both predictions and probabilities and what they mean and what we should try to get out of our experiments. As I said earlier, there's a big push for trying to get statistically significant results, and that leads to a lot of misinformation in the literature. A few months ago, we published a paper in JAMA looking at the use of p-values in the biomedical literature, looking at the entire PubMed database, which is practically the entire biomedical literature.

[00:22:49]

Over the twenty-five years from 1990 to 2015, we could look at all the abstracts and also close to one million full-text articles. Ninety-six percent of those that use p-values claim statistically significant results. It's an amazing percentage. There's no way that this could reflect an unbiased universe. Most likely something like five percent, 10 percent, maybe 20 percent would have been realistic to expect.
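A rough back-of-the-envelope calculation shows why a 5 to 20 percent rate of significant results would be the realistic expectation in an unbiased literature. The share of true hypotheses and the average power below are illustrative assumptions, not figures from the JAMA paper:

```python
# Illustrative assumptions, not estimates from the study discussed above.
alpha = 0.05        # chance of a false positive when the hypothesis is false
power = 0.50        # average chance of detecting an effect that is real
share_true = 0.20   # fraction of tested hypotheses that are actually true

# Expected share of studies reaching p < 0.05 if every analysis were
# reported exactly once, with no selective analysis or reporting.
expected_significant = share_true * power + (1 - share_true) * alpha
print(f"expected share of significant results: {expected_significant:.0%}")
# About 14% under these assumptions, in the 5-20% ballpark John mentions
# and nowhere near the 96% observed in published abstracts.
```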

[00:23:20]

But scientists are under tremendous pressure to deliver significant results. So this means that we will continue running analyses, manipulating, exploring data, dredging creatively, whatever we do, until we get statistically significant results or can present statistically significant results. Right. Getting back to the evidence-based medicine issue for a moment, what do you think of science-based medicine as a kind of update to, a corrective to, the original formulation of evidence-based medicine? Where, for the sake of our listeners, science-based medicine is meant to be not necessarily a strict alternative to evidence-based medicine, but more of a broadening, where it basically takes the stance that evidence-based medicine is too focused on trials themselves, on randomized controlled trials, and kind of understates the importance of the prior probability that the phenomenon is real.

[00:24:21]

And so, the SBM advocates claim, this leads to problems like there being a bunch of studies on homeopathy, some of them showing results and some of them not showing results, and the strict EBM advocate having to say, look, well, the evidence is inconclusive, as opposed to, well, you know, our total understanding of physics suggests that we should not expect homeopathy to be real.

[00:24:48]

So there's many different interpretations of science-based medicine, and I think that the devil can be in the details. As a term, it sounds wonderful, science-based medicine, who would disagree with that? And the premises that you described, I'm fully in line with them. So obviously the prior odds of something having biological support or other sorts of scientific support are extremely important.

[00:25:15]

If you just start asking questions at random that have no prior indication that they would be useful or interesting, the yield is likely to be very low. Conversely, if you have other pieces of data, inferences, information, again, I will use the word evidence, you're at a better starting point. So whatever you do, if you get something that shows a signal, it's more likely to be true. I don't see that as black and white.

[00:25:46]

I don't see that as separate from evidence-based medicine. Evidence-based medicine does not say that we ignore prior knowledge; I think that would be misinformation. Conversely, I would argue that evidence could be any sort of scientific information. And it doesn't have to be clinical trials. It could be observational data. It could be preclinical data. It could be animal studies. It could be cell culture data. It could be basic biochemistry and biology and cell biology.

[00:26:14]

And all of that is evidence. What we don't know very well in many disciplines is how exactly to translate these different pieces of information and science and evidence, you can pick the word that you like the most, into proper Bayes factors, if we want to talk about what that means. So if I have a cell culture experiment, it gives me this type of result. How strong is that? I think that for many types of experimental data, basic science data, biological data, we don't know exactly how strong that inference is for taking it to the next step and deciding to do something more, like do a randomized trial, perhaps, to see if I can take that to patients.
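This is the same Bayesian bookkeeping behind "Why Most Published Research Findings Are False": the probability that a positive finding is true depends on the pre-study odds, the significance threshold, and the power. A minimal sketch; the specific prior odds, alpha, and power values are illustrative assumptions, not numbers from the conversation:

```python
def post_study_probability(prior_odds: float, alpha: float = 0.05,
                           power: float = 0.80) -> float:
    """Probability that a 'statistically significant' finding is true,
    given the pre-study odds R that the hypothesis is true.

    Roughly PPV = power * R / (power * R + alpha), ignoring bias and
    multiple competing teams.
    """
    return power * prior_odds / (power * prior_odds + alpha)

# Prior odds here are purely hypothetical, for illustration.
for label, odds in [("well-motivated hypothesis", 1.0),
                    ("long-shot hypothesis", 1 / 1000),
                    ("homeopathy-like prior", 1 / 100_000)]:
    print(f"{label:25s} P(true | significant) = "
          f"{post_study_probability(odds):.4f}")
```

With a strong prior against an effect, even a clean positive trial moves the needle very little, which is the point about priors for things like homeopathy.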

[00:26:59]

I don't see these as contradictory movements. And I think that science benefits the most when we have multiple approaches, multiple techniques, multiple methods trying to address questions of interest. There's some opportunity of really building on what we learn from each one of these techniques. And some of the problem is actually the dissociation of some of these domains or some of these disciplines. So we very often hear that basic scientists don't really talk with clinical scientists. Clinical scientists don't talk with statisticians.

[00:27:37]

Statisticians don't talk with anyone. So, you know, eventually this fragmentation doesn't help anyone. I think that we need to try to get all the information that we can possibly accumulate, try to accumulate it in the most rigorous possible way, and try to combine it in a way that makes more sense.

[00:27:59]

Also, we didn't make this explicit when we were talking about EBM being hijacked, but it seems obvious that the world that we're in is worse than the counterfactual world in which evidence-based medicine was not hijacked and just sort of continued to be the gold standard that it was intended to be, as a strong marker of truth or reliability. But do you think that the world we're in is worse than the counterfactual world in which evidence-based medicine had not become popularized, popular enough to be gamed?

[00:28:33]

This is a very difficult question, because I have a lot of trouble visualizing that counterfactual world. I think that if I had to guess, I would say that, no, that counterfactual world would be worse. So evidence-based medicine and the tools that it procured and promoted were helpful, and they can be even more helpful. I don't think that we should just go back to a time where just the experts were getting to the podium and coming up with their opinions, and this is how science was seemingly making progress.

[00:29:10]

I don't think that there's any way to get rid of evidence-based medicine. And this is good. I think that all the progress that we have made with methods, with tools, with statistics, with understanding even of the biases that really erode the credibility of many of these processes, I think this is wonderful news. And I don't think that we are worse off compared to where we used to be. I think that science has made tremendous progress on all fronts.

[00:29:39]

In some areas, it's more visible than in others. And in some fields probably we're still struggling because the questions are very difficult, and maybe we just need to wait for even more input before we can make major strides of progress. But clearly, progress is being made. The issue is not whether science is moving in the right direction. The issue is whether we can make it more efficient, whether we can get there in a more efficient way, faster, with less waste, with less effort and with fewer resources,

[00:30:17]

to get to some wonderful discoveries and some wonderful implementations of these discoveries without really losing our way again and again.

[00:30:27]

I guess I'm a little surprised to hear you sound so confident that great strides are being made, just because, well, basically because of the "most published research findings are false" thesis, along with some of your more recent work showing that the most common ways that people conduct meta-analyses are statistically flawed, and that that finding and other similar findings don't seem to have had a large impact on the way people are conducting their analyses. So I guess, yeah, I just want to ask why you're confident that we're making progress despite the very severe flaws that you've pointed out in the process.

[00:31:13]

You can think of this as a machine that has a particular efficiency, so science is a machine that has a relatively low efficiency at the moment. We have estimated that there is a waste of about 85 percent of effort and resources.

[00:31:27]

And so this is in a series of papers that we published in The Lancet about three years ago, where we tried to look at the different steps in the chain, from initiating some hypotheses all the way to implementing therapeutics. So from very early-stage science all the way to late-stage scientific implementation.

[00:31:50]

And if you add the numbers up, it's about an 85 percent waste.

[00:31:54]

We have about 20 million scientists who are publishing in the scientific literature, and they're putting in effort trying to understand our world and trying to make a difference. And I think that it's very difficult to say that all these 20 million people are just moving in the opposite direction of where we should move. So clearly, progress is being made, but we are losing 85 percent of the effort that we are putting into this.

[00:32:25]

I think if we can use all these resources and all this great talent, we have some of the greatest minds going into science, and I think this is something that hopefully should continue, I think we can get to major discoveries and their implementation much faster than we do now. But this doesn't mean that we're not making progress.

[00:32:44]

Great. OK, so let's talk about METRICS now. This is the Meta-Research Innovation Center at Stanford that you are co-director of. And could you just talk a little bit about the approach that you guys are taking to make science more efficient in discovering truth? So METRICS was started two and a half years ago, and our aim is to study science and research practices and to try to make them more efficient, more effective, more credible, more reproducible.

[00:33:17]

It's pretty much what we had just been talking about for this half hour or so. And this entails very different aspects of how research is being done. So there is lots of issues about how we conduct research, lots of issues of how we publish and disseminate research, issues about how we review and evaluate, how we reward and incentivize, and how we disseminate more broadly to the public what we do.

[00:33:49]

And obviously, there's lots of methods involved in science that can be optimized, improved, replaced by better ones, more efficient ones used properly or appropriately.

[00:34:03]

So it's a vast area. I think that the area that we try to cover is as broad as science itself. And this is why we try to team with scientists from very different fields. These are questions that practically every scientist has come across during their experiments, during their thinking about research.

[00:34:26]

Most of the time they have tried to find a solution locally for what they do, and maybe within their small discipline or subfield. But they don't recognize that some of these problems that we try to find a solution to are actually problems that many other scientists, even in very remote disciplines, are also coming across. So, for example, how do you make a scientist's research more transparent and more open? How do you improve data sharing so that other scientists could really have access to that information and see how things are done or have been done, and cross-check and combine information or validate or conduct new experiments?

[00:35:09]

A few weeks ago, I interviewed Brian Nosek from the Center for Open Science, and so many or most of my listeners will have heard Brian talk about open science. I'm curious about your approach at METRICS, whether you see it fitting squarely within the open science framework, or whether you would ultimately say that there's a lot of overlap, but you're also focused on things that you think are important for improving science that wouldn't count as open science.

[00:35:38]

I think that what Brian is doing fits very nicely within the METRICS framework.

[00:35:43]

So METRICS is trying to address all of these different parameters, all of these different issues that arise, and openness and transparency is one of them.

[00:35:53]

And as I said, there's scientists from very different disciplines who are trying to attack these questions. There's issues about data sharing in cosmology and astrophysics and genetics and psychology and clinical trials in neuroscience and fMRI studies. And each one of these disciplines has taken some steps or no steps towards improving data sharing and openness.

[00:36:20]

Could one field learn from another? Could they implement some recipes that have worked? Do we need some granularity? Do we need different standards for astrophysics versus genetics versus psychology? And all of these are very important questions.

[00:36:37]

Most of the time, until now, people have worked in silos. Astrophysicists have never really communicated with geneticists. Geneticists have not communicated with clinical trialists. But they may have to learn from each other, and this is what we're trying to do at METRICS.

[00:36:52]

We're trying to create a connector hub of scientists working in very different fields (obviously we have a strong interest in biomedicine in particular, because many of us have a background in biomedicine, but we are not restricted to biomedicine) to share information, to share practices, to test out practices of improving what we do and how we do it.

[00:37:18]

There is something, a comment you made, I think, in another interview recently that I thought was really interesting. You were talking about how we should reframe the way that we not only conduct meta-analyses, but the way that we think of what the purpose of meta-analysis even is. I don't know if this is something you're actively working on at METRICS, or if it's just something that you personally were thinking about. But you were talking about reframing meta-analysis as prospective instead of retrospective, like, well, I'm going to stop trying to paraphrase and just ask you what you meant by that.

[00:37:53]

So currently what happens is that we have lots of people who try to be principal investigators. They run their small team and they work behind closed doors. They try to protect the, quote unquote, privacy of their science and their competitiveness, in a way, by not sharing information with others. Or you may have stakeholders like companies who run their one or two trials, again, to try to promote their own needs, without being open to the whole world about what is being done and coordinating that effort with other stakeholders, like other companies or other clinical researchers who want to look at similar questions.

[00:38:36]

What we get eventually is a fragmented universe of tons of mostly small, underpowered, biased studies, and then you have a systematic reviewer or a meta-analyst who comes forth and says, now I'm trying to piece this together and try to understand what it means. If I get 50 or 100 or 500 small, underpowered, biased studies together, let me see how that looks. And obviously that doesn't look very nice most of the time. So instead of that paradigm, what I have argued is that we should think more about cumulative agendas of team science, where we are trying to attack interesting questions as a large scientific community.

[00:39:22]

Everybody who's working in that field, or who wants to work in that field and has the credentials and the training and the expertise, is welcome to join. Plans should be shared and exchanged, trying to understand what is the best way, what is the best message, what is the best transparency to try to address the questions at hand, and to design that research agenda with the prospect that whatever we do will be part of an ongoing, living, updated prospective analysis.

[00:39:51]

So it's not necessary that we run one study out of all these scattered and disparate teams who try to perform their own little pieces.

[00:40:03]

But at least there would be some central understanding that all of that agenda is being planned with the anticipation that it will be a prospective cumulative meta-analysis that will have some rules, will have the best methods, will have the best principles of how this research is being done, and the best possibility of integrating that information into a meta-analysis in a meaningful and unbiased way. There are many fields that have already done that and have even gone a step further. So if you look at high energy physics and particle physics, this is exactly what's happening.
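The mechanics of the prospective cumulative meta-analysis John describes are straightforward to sketch: each time a new trial reports, the pooled estimate is recomputed, rather than waiting years for a retrospective synthesis of whatever got published. A minimal fixed-effect, inverse-variance sketch; the trial effects and standard errors below are made up for illustration:

```python
import math

# Hypothetical trials reporting over time: (effect estimate, standard error).
trials = [(0.40, 0.30), (0.15, 0.25), (0.22, 0.20), (0.18, 0.15)]

sum_weights = 0.0
sum_weighted_effects = 0.0

for i, (effect, se) in enumerate(trials, start=1):
    weight = 1.0 / se ** 2                 # inverse-variance weight
    sum_weights += weight
    sum_weighted_effects += weight * effect

    # Re-pool after each new trial arrives.
    pooled = sum_weighted_effects / sum_weights
    pooled_se = math.sqrt(1.0 / sum_weights)
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"after trial {i}: pooled effect {pooled:+.2f} "
          f"(95% CI {lo:+.2f} to {hi:+.2f})")
```

In the prospective version, the pooling rules and methods are agreed before the trials run, which is the "central understanding" John refers to.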

[00:40:40]

You have thirty thousand scientists working at CERN and practically designing their experiments in common and having a common research agenda. And then they can come up with a discovery like the Higgs boson. If we had not done that and we had to follow the current paradigm, then what we would have done would be what is happening in current biomedical research, which means you have thirty thousand principal investigators. Each one of them has to send in a grant application, get reviewed, get funded.

[00:41:10]

They have to promise that they will find the Higgs boson within four years, actually, because if they don't find it within four years, how are they going to renew their grant? If you do that, what you end up getting is thirty thousand Higgs boson discoveries, and none of them will be the real Higgs boson, because people want to renew their grants. So they will come up with, I found this or that, but it's not really going to work. And an effort to piece these fragments of information together is, again, not going to work, because they have been done in a very haphazard manner.

[00:41:42]

So this is one field that has got its act together, and I don't see that in many other fields currently.

[00:41:48]

So this is my question. Why has physics, for example, managed to solve this coordination problem, but not other fields like in the social sciences?

[00:41:58]

I think that there was an opportunity cost in that people in physics recognized very quickly just running back of the envelope calculations that unless they were to join forces, they would just not go anywhere.

[00:42:10]

So the answer is that physicists are smarter, is that it? I don't think they are necessarily smarter. I think that in a way they're more lucky, because they just hit upon a wall.

[00:42:21]

The wall is there for social sciences and for biomedicine as well.

[00:42:26]

But somehow it's not so easily visible, and I think that each one of us is hitting on that wall, but we each hit on a different wall and we don't recognize that, well, this is part of the same construction.

[00:42:41]

Great. Well, I think that's a good place to close. So let's wrap up this section of the podcast and we'll move on now to the Rationally Speaking pick.

[00:43:05]

Welcome back. Every episode, we invite our guest on Rationally Speaking to introduce the pick of the episode: a book or article or something that has influenced their thinking in some way. So, John, what's your pick for today's episode?

[00:43:18]

My pick would be Homer's Odyssey. I see Ulysses as the prototype, the ancestor, of a scientist, having to fight against adversity, lots of difficulties, monsters, but eventually trying to get home. So it's a very difficult job, but I think we will get there. Have you found the Odyssey to be sort of formative for you in growing up and choosing to become a scientist?

[00:43:49]

Well, it's a very unique text. And I think that it may sound a bit weird that I see Ulysses as a scientist, because at that time science was in its early makings. But I think it's a very nice depiction of how difficult science is. It's a very noble enterprise, a very difficult one. But there is a lot at stake and we need to make it work.

[00:44:18]

And maybe scientists can lean on that kind of noble self-image when, for example, they disprove their own work. It's like Odysseus's ship capsizing, but he's still the hero.

[00:44:33]

So they can. Of course, you have to be ready to have your ship capsized all the time. All right.

[00:44:39]

Well, John, thank you so much for coming on the show. It's just been a pleasure having you. The pleasure has been mine. Thank you, Julia. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense. The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org.

[00:45:14]

This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, Truth by Todd Rundgren, is used by permission. Thank you for listening.