[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at NYCSkeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef. And with me is today's guest, Brian Nosek. Brian is a professor of psychology at the University of Virginia. He's also the co-founder and director of the Center for Open Science.

[00:00:53]

You might have heard Brian mentioned on this show before, or heard him in the news. He's famous for a number of things, but in part he's famous for setting up the Reproducibility Project, which we discussed on our episode with Uri Simonsohn a few months ago, and which made a splash in the world of social science by trying, and often failing, to reproduce the results of many psychological experiments in top journals. So Brian and I are going to talk today about open science and what that means for the field.

[00:01:26]

Brian, welcome to the show. Thanks for having me. So why don't you start by talking about what you mean by openness in this context?

[00:01:36]

So openness, for our purposes, is two things. One is transparency: the availability of not just the outcomes of the research, as in the reports that I write telling you what I found, but also the content of that research, the data, the materials, the methods, the code, the protocols, and the workflow that produced those outcomes. I had some process of data generation, of design, of analyzing that data, of coming to inferences at the end, and sharing that, making it openly available, makes it a lot easier for someone independent of me to evaluate the outcomes and decide whether they are credible or not.

[00:02:22]

So, yeah, that is a core part of openness. The other part of openness that we really care about at the Center is openness as inclusion, so that anybody who has the interest, motivation, and time can be involved in the research process, and certainly anyone, even someone who isn't a scientist.

[00:02:40]

Yeah. So even if they're not a scientist? So, yeah, anybody. Why shouldn't they have ways to access and be involved in the accumulation of knowledge for the public good?

[00:02:50]

Got it. OK, so I want to see if I can disambiguate between the different purposes of openness. One context in which people are used to talking about openness is with respect to media, or content in general, or access to programs in general. And in that context, people often talk about openness as being about justice, like people have a right to this information. And I hear that rationale applied to openness in the context of science also, especially because taxpayers are indirectly funding a lot of scientific research via NSF and NIH funding.

[00:03:32]

And so, you know, the argument goes, it's only fair for taxpayers to be able to access that research without having to pay exorbitant subscription fees to these academic journals. So that fairness is sort of one argument, but it sounds like another main argument you're making is about the quality of the scientific research, your goal being to increase the total amount of reliable, true knowledge that science produces via openness. Are you focused on both goals, or just the one?

[00:04:00]

Yeah, it's a very good point. And I'm perfectly happy with the moral argument: if we pay for it, we should be able to access it. That is a reasonable argument from my perspective. But the latter is really the focus of our attention, which is that if we actually want science to produce knowledge as efficiently as it can, then being open isn't just a good thing because it's fair. It's a necessity for science to be able to do that.

[00:04:32]

And the reason that openness is a necessity in science is that a scientific claim doesn't become credible because a scientist makes it, right? You don't believe me because I say, "I found this thing, so therefore it's true," and you say, "Oh, OK, well, he's a scientist, he must know." A scientific claim becomes credible because you can look at how it is I arrived at that claim. What is the approach I used? What's the evidence that I have?

[00:04:57]

How did I come to my own inference about that evidence? And in principle, you can come to agree or disagree, but it doesn't depend on me. The evaluation rests on the evidence itself. And so without access to the evidence, to the generation process, to the outcomes, you can't evaluate it. You can't decide if it's a legitimate claim. And so from that perspective, openness is essential for just having science be science, irrespective of a moral argument about access.

[00:05:24]

Right. How do you feel about the current way that incentives are structured in science? Are there incentives for openness? Are there incentives against openness? How much of a piece of the puzzle do you think incentives are?

[00:05:40]

I think the incentives are really at the core of the challenge for open science. And that is because right now the primary incentive in science is publication. My career is advanced by the frequency of my publications and the prestige of the outlets in which I publish. And openness is a value, but it is irrelevant in the incentive structures. That is, it makes no difference for publication right now whether I was open with my data or my content or not.

[00:06:14]

It makes no difference for publication whether I show you my workflow, how it is I arrived at those conclusions. All I have to do is give you conclusions that are novel, that are positive results finding evidence for a new claim, and that are beautiful and clean and tidy. If I can give you that, then I am rewarded in the current structure, right?

[00:06:39]

Yeah. When I've heard incentives discussed in this context before, people have made suggestions like, well, if we could just find some way to get the tenure review process to reward people for openness, for sharing their data, for publishing replications, that kind of thing, then maybe that could solve the problem. And that seems really hard; that's a whole system you would have to change. But one of your recent approaches suggests a way forward that doesn't go via the traditional route of changing those incentives, which was: you demonstrated that you can actually significantly boost people's adherence to the norms of openness by giving them an essentially meaningless badge or sticker.

[00:07:27]

This was both impressive and amusing to me. Can you talk about the success of the badge program?

[00:07:33]

Yeah, I'm happy to talk about that. And so I will agree with that point, and then anticipate that I'll end up disagreeing that our solution is sufficient on its own, and come back to the tenure and promotion point.

[00:07:48]

So the idea of badges is that if we can provide some way to signal behaviors, then if those behaviors are valued, people will adopt them. This is very basic psychology, right? There are things that people do that are hard to communicate, and signals are really useful. Signals can be used to communicate people's beliefs, their mindsets, their behaviors, lots of things that they want other people to understand very quickly. And so badges are a very simple instantiation of that. A large group of people worked together in creating specifications for badges for open data, open materials, and preregistration, which means defining the study you're going to do before you actually do it.

[00:08:33]

And the journal Psychological Science was the first journal to adopt those badges; they adopted them on January 1st, 2014. And about a year and a half after that, we started an evaluation of whether the badges had any effect on increasing rates of sharing. So the idea is the journal adopts badges, and then, when an article is accepted, the authors get an opportunity to earn a badge. So if you'd like the open data badge, then you just have to meet these criteria: put your data in a repository, make it so that other people can read it.

[00:09:05]

And then you'll get a badge on your article saying that you did that thing. Now, that is trivial, right? It's just a little sticker, as you say, on the article. And scientists are largely grown-ups; do they really need stickers to get credit? Well, no, in one sense. But in another sense, that sticker is simply a signal of a behavior that scientists already value. Openness and transparency are accepted by almost all scientists as values in science.

[00:09:37]

They're just not incentivized to do them. And the arguments against openness are pragmatic ones that researchers raise, right? I don't want to get scooped. I'm concerned that people will attack me. There are lots of things that people worry about with openness, but that's because the culture isn't open now. And so having an incentive, even a trivial one like that, is a way for researchers who already believe in it and want to do it, but aren't getting any credit, to get some credit.

[00:10:08]

So Psychological Science adopts badges January 1st, 2014. In the two years prior to that, the average rate of sharing data in Psychological Science was three percent of articles, and we had comparison journals that were about the same. Post-adoption, those rates started to increase, to the point that in the first half of 2015, a year to a year and a half later, thirty-nine percent of the articles had open data. Wow. So that's a thirteen-fold increase.

[00:10:40]

Right. And there was no change in the other comparison journals. So the point isn't that badges are these huge motivators that force people into doing something crazy. The point is that the value is already there, that researchers recognize this is useful. They just don't have any reason to act on it. But if I want to signal to the readers of my article that I have a lot of credibility, that I have a lot of trust in my evidence, that I value the core practices of science...

[00:11:11]

...then the opportunity to earn a little signal of that might be sufficient for me to go ahead and do those behaviors. And clearly, that had a big impact. I mean, it's a huge impact, even huger than I would have predicted from that logic. It's much bigger than I predicted, that's for sure.

[00:11:33]

It's so striking. Do you think that there's any reason to think that this particular context, this particular journal or time period or something was unrepresentative in any way? Yes.

[00:11:45]

There are good reasons to think that the impact when badges are adopted across journals and disciplines won't be quite as strong. And the particular reason is that concerns about reproducibility are at the forefront of researchers' minds, particularly psychologists' minds. And so there may be some degree of compensatory reaction facilitating that: no, no, no, we're doing well here, and so I feel extra motivation to do it. Whereas if it's in a community where no one is talking about those issues, they may say, you know, "I don't need no stinking badge."

[00:12:26]

Oh, I'm so pleased that you found a way to work in that line. Every time I talk about it, I have to find a way to mention it. So that is unknown. And we are getting more journals adopting badges, so we will have the opportunity to see the extent of the impact. But that does prompt the earlier point, which is: is this enough? Is it enough to just have badges and other simple signals, and people will change their behaviors themselves?

[00:13:01]

Yeah, and I don't think that's the case, despite my being really positive about this particular intervention. And the reason is that the incentives for science are embedded in a very complex ecosystem of multiple stakeholders. So I am driven both by that journal and by the other journals in which I try to publish. I am shaped by the funders who decide whether my grants get funded or not. I'm shaped by the scientific societies of which I'm a member, which establish the norms and styles of how the community operates, what's the right way to behave.

[00:13:37]

And I am very strongly shaped by the institutions of which I am a member and by which I am employed, which decide whether I get a job and whether I keep that job. And all of those are both creating and reinforcing the incentives that drive researchers' behavior. So if, for example, the tenure committees never change their decision practices, and it's all about impact factor and volume of publications... Can you just explain what impact factor is?

[00:14:07]

Sure. The impact factor is basically the average citation rate of a journal. It doesn't say how much any particular article in that journal has been cited; it's an overall indicator of how often people are citing articles from that journal. And it's a blunt-instrument approach to deciding how good someone's work is, because it can be used as a heuristic: oh, it's in that famous journal that lots of people cite, so it must be good research.
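[Editor's note: for readers who want the precise version, a minimal sketch of the standard two-year impact factor that Brian is paraphrasing; this is the citation index's usual definition, not a formula given in the episode:

$$ \mathrm{IF}_Y = \frac{C_Y(Y{-}1) + C_Y(Y{-}2)}{N(Y{-}1) + N(Y{-}2)} $$

where $C_Y(T)$ is the number of citations received in year $Y$ by items the journal published in year $T$, and $N(T)$ is the number of citable items the journal published in year $T$.]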

[00:14:35]

Now, no tenure committee or hiring committee would say that all we do is count the number of papers and tally how prestigious the journals are. But it's very hard to avoid the influence of that, one, because the community takes it very seriously, and two, because in a lot of these cases the people who have to deal with the information are very busy. We get a hundred and fifty to two hundred applications for a job in our department. Well, what are we going to use?

[00:15:06]

We can't read all of the articles of all the people who have submitted applications. And so there is an easy tendency to grab onto these heuristics. And if those don't shift to some degree, if we don't work on the incentives there, as well as with funders and with publishers, then each of them will push back on attempts to shift any one dimension.

[00:15:31]

Yeah, it also seems like a tough coordination problem, in that it takes time and effort, and you're putting yourself out there. You're taking on extra risk of, as you say, being scooped, or being disproven, or having someone try and fail to replicate your research. You're taking on all those risks. And that's worth it if you get rewarded, and it's also worth it if that's what the culture expects.

[00:15:58]

And you would get punished for not doing it. But it's hard to shift from the equilibrium we're currently at, where that's not considered obligatory or expected, to, you know, the other equilibrium, right?

[00:16:08]

That's exactly right. This is a classic coordination problem. And you can hear this talking to researchers; the conversation often goes: yes, I want to do all those things. I want to be open, I want to preregister, I want to do all that stuff, but I won't be able to keep my job, or I won't be able to get the postdoc that I really need in order to get to the faculty position that I really want.

[00:16:33]

And so that sense of risk, given the uncertainties and the lack of direct incentives for them, really makes it a harder thing to change. But at the same time, this is a different situation than coordination problems where people don't agree on the solutions. Here we have a huge opportunity, and that is that the values are already shared. Not by everybody, but there is a lot of shared sense of what a different reality could be and why it would be good.

[00:17:06]

It's just: how do we get there? Yeah. And so this presents all kinds of opportunities, both for small-scale interventions moving up to scale and for coordination solutions like precommitment. So we can imagine a coordinated effort to say: I am willing to, let's pick one thing, make all of my data openly available, if 40 percent of the researchers in my field are also willing to do that. Yeah, right. And so everybody logs in to a service that records their commitments.

[00:17:42]

And at some point that triggers my behavior. So we define the universe: there are two thousand researchers in your field. As soon as it gets to eight hundred, then you'll get your email: time to go open. And now you're an open researcher.
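[Editor's note: a minimal sketch, in Python, of the threshold-pledge mechanism Brian describes here. The class, names, and numbers are illustrative assumptions, not an existing service:]

```python
# Threshold-pledge ("go open once enough peers commit") device, as described above.

class OpenDataPledge:
    """Collects commitments and fires once a preset fraction of the field has pledged."""

    def __init__(self, field_size, threshold_fraction=0.40):
        self.field_size = field_size
        self.threshold = int(field_size * threshold_fraction)  # e.g. 2000 * 0.40 = 800
        self.pledgers = set()

    def pledge(self, researcher):
        """Record one researcher's commitment; return True once the threshold is crossed."""
        self.pledgers.add(researcher)
        return len(self.pledgers) >= self.threshold

# A field of 2,000 researchers: everyone's pledge activates at 800 (40 percent).
campaign = OpenDataPledge(field_size=2000)
activated = False
for i in range(800):
    activated = campaign.pledge(f"researcher_{i}")
print("Time to go open!" if activated else "Still collecting pledges.")
```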

[00:17:59]

Oh, I like that. It's sort of like leveraging a Kickstarter-style solution for this coordination problem, in some ways.

[00:18:05]

Right, right. And, you know, changing the Constitution uses this, right? You have to have so many states agree that a constitutional amendment should be adopted, and then it goes into effect. So it's reducing the risk in a way that allows people who actually already hold those values to express them: I hold those values, I'm willing to act on them. But it doesn't make them go it alone to get it done.

[00:18:30]

So overall, I basically agree with you that we have a sort of consensus on what would be best for the entire group, the entire endeavor of science. There's one piece of that question where it's not as obvious to me that there's a clear answer about the best approach, and that is the potential free-rider problem with data sharing. One of the arguments for not being open about data sets is that it takes a lot of effort and resources to collect data that you're going to use for your research.

[00:19:06]

And if you know that everyone else gets to use the data that you collect, then that reduces the incentive for people to collect data, much in the same way that intellectual property laws are designed to make sure people have an incentive to put in the time and effort to invent something or discover something. Are you concerned about the free-rider problem at all? Or, if it doesn't seem like a problem here, why not?

[00:19:29]

Yeah, it's a great question, and I think it is a problem to the extent that people don't get credit for data generation itself. And that's really, to me, at the core of this issue. Right now, all of the credit is for one thing: publication. But we can diversify what one gets credit for, what counts as a scholarly contribution. It is a scholarly contribution to write a beautiful set of code to analyze data. It is a scholarly contribution to design a brilliant study and to collect the data for that study, especially when it's a very hard data set to collect.

[00:20:06]

And so if we can shift the model, and there's already work on this, lots of people trying to think about ways to do it and progress being made, so that all of those become citable scholarly contributions, where I get credit for having generated a data set, then it becomes my interest to have other people analyze and use that data. Because if they ignore it, then my scholarly contribution is ignored, just like my publication.

[00:20:34]

Getting ignored is not good for me either. This is understood well enough that even NSF and NIH have both changed their instructions for when you submit your biosketch describing what you've contributed as a researcher. It's no longer "list your five most important publications"; it's now explicitly "list your most important research contributions." And they say that could include software, that could include patents, that could include data sets or something else. Just say what those contributions are.

[00:21:11]

So that is a very nice step, one step towards diversifying what the rewards of science are: not just the publication, but the other parts of the process. That is really interesting. Do you think that researchers currently, or that it's plausible researchers soon will, actually have as much respect for that kind of non-traditional contribution as for the traditional published-in-a-high-impact-journal contribution?

[00:21:41]

There may be an age effect in the speed of acceptance of such interventions. But funders are already recognizing that it's a good thing. Journals and publishers are trying to as well, in the sense of data publications and some journals that are really about sharing those other products. But of course, they're embedding that in a journal article, because we can't get our minds away from it: it has to be in an article, right, to get credit for it.

[00:22:14]

Google Scholar, it seems, is moving towards acknowledging data sets as scholarly contributions that you can search for. And the many, many repositories like the one that we operate, the Open Science Framework, make all scholarly objects citable units. And with those being citable units that can actually appear in a reference list, we almost don't need to persuade people that these are things that can be contributed. If they start getting cited, they will just become things that provide value for researchers.

[00:22:50]

So I think we can do it sort of naturalistically in some ways rather than trying to persuade the skeptics.

[00:22:58]

There's one aspect of openness we haven't really touched on yet, which is openness surrounding the process of peer review. Currently, for those listeners who don't know how it works, the scientist submits their article for review to a journal, and the journal sends that paper to several reviewers, other scientists in the same or similar fields who presumably have the expertise to evaluate, you know, is this a good study, should it be published. And the author of that article, their name is visible to the reviewers, but the reviewers themselves are anonymous.

[00:23:32]

And so the original author never finds out who reviewed their paper, who decided whether it was good enough or not. The public never finds out. And so there are some arguments that openness should also include making the names of reviewers public, not anonymizing them, partly because of the risk that reviewers can currently block new research just because it undermines their own theory, or undermines research that's relevant to their theory, and that's bad for the scientific process, but also because not having your name public means there's little incentive for quality control.

[00:24:10]

Right. Like, why go through this research with a fine-tooth comb to make sure that it has a sound methodology if, you know, no one's ever going to find out that you were the person who let it through, if it turned out to be bad? So is transparency of peer review something that you think is promising or not?

[00:24:28]

I do think it is promising, for the reasons that you describe. Right now, peer review is entirely a service, and by that I mean that the reviewer gets no credit at all for doing it. The most credit that I get for being a reviewer is that I add a little line on my vita that says I once reviewed for this journal. And that's basically nothing; it definitely doesn't give you an incentive to review more than once.

[00:25:00]

I give you a telemarketer.

[00:25:02]

Right. And peer review is super important, right? In the current model, it is the gatekeeper to what is published versus what is not. But even in alternative models, peer review is the means of evaluation, of deciding the worth of different scientific contributions. And so it plays a very important role in science, even though right now it's perhaps not done as efficiently as it could be. And the potential gain of transparency, before talking about the risks, is that if my reviews are known, then I get credit for them.

[00:25:40]

Right now, only the editor forms an opinion of me in terms of my quality as a reviewer. But if it's known that I wrote those reviews, then I have a whole new potential source of reputation. And I've seen enough reviews now, where I've been either the reviewer or the author, in which there is amazing scholarship. Like, wow, this person really unpacked this issue so brilliantly and identified these challenges and opportunities, et cetera.

[00:26:12]

That could be a scholarly contribution of its own. And you can imagine a world in which a researcher who's not at a research-intensive university, who doesn't have the resources to generate data and research, but who is a brilliant evaluator of research, could achieve tenure based on being a great critic, evaluating research so effectively that people say: you know what, you have to rely on that person in this field, because they really understand the issues and they can point things out.

[00:26:46]

That is a huge service to provide. Yeah.

[00:26:49]

And why shouldn't it be a way to gain reputation?

[00:26:53]

I like that. It fits the general theme that you've been hitting about broadening our conception of what counts as a contribution that people should be rewarded for. Exactly.

[00:27:03]

And it's an inclusive step, the other part of openness, right? Right now, so much of the resources go to so few scientists, who happen to be at institutions that have tons of resources devoted to research. Almost all the research output is from the top hundred universities. And there are so many really smart, really capable people at places where it's not possible to run a generative research program with any sort of speed, because of the resources it takes to do it.

[00:27:33]

But they have so much to contribute in terms of knowledge, skill sets, and everything else. Review is one very obvious way to start to be more inclusive, to recognize that as a real contribution to science.

[00:27:46]

So one thing that I like about your center is that you aren't just, you know, talking abstractly about the importance of openness and trying to sort of promote this idea. You're coming up with these pretty clever and innovative approaches to causing that to happen, some of which we've touched on already. But one recent example in this category is you've recruited people to participate in a prediction market to predict which studies are going to replicate. Can you talk a little bit about the motivation behind that project?

[00:28:22]

Yeah, yeah.

[00:28:23]

This is really a fun addition to the replication work that we've been doing. The idea was started by a couple of economists in Sweden, Anna Dreber and Magnus Johannesson, who approached us to say, can we try this? And I thought, that's a great idea. And so we took a set of studies that we were doing for a large replication project and ran prediction markets on them. We invited psychologists and other behavioral researchers to be involved in the markets, gave them one hundred dollars each, and said: bet on the studies. The market price could go between zero and one hundred.

[00:29:08]

And, you know, if you were buying at a higher price, you were betting that it's more likely to replicate. One hundred would indicate one hundred percent confidence that this is going to replicate; zero would indicate zero percent likelihood of a replication. And so we got a full range of predictions, based on the ending market prices, for these different replications that were ongoing. And the incentive for the individual participants in the market is that once we got the results, it would pay out.

[00:29:42]

So if the study was successfully replicated, anybody holding a share would get a dollar, 100 cents, for each share they had. If it failed to replicate, the shares you held were worthless; you don't get any money for those. And what we found was that the market was quite well calibrated in anticipating the results that were observed in the replications, indicating, on the substantive level, that researchers have some knowledge about what's likely to replicate or not.
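[Editor's note: a minimal sketch, in Python, of the market mechanics Brian just described: prices in cents read as replication probabilities, and each share pays out $1.00 on a successful replication and nothing otherwise. Trader names and holdings are invented for illustration:]

```python
# Settling a binary replication market, per the payout rule described above.

def settle(holdings, replicated):
    """Pay $1.00 (100 cents) per share if the study replicated, $0.00 otherwise."""
    payout_per_share = 1.00 if replicated else 0.00
    return {trader: shares * payout_per_share for trader, shares in holdings.items()}

# A market price of 70 cents reads as roughly 70% collective confidence in replication.
price_in_cents = 70
print(f"Implied replication probability: {price_in_cents / 100:.0%}")

holdings = {"trader_a": 50, "trader_b": 20}
print(settle(holdings, replicated=True))   # each share pays $1: {'trader_a': 50.0, ...}
print(settle(holdings, replicated=False))  # shares are worthless: {'trader_a': 0.0, ...}
```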

[00:30:15]

So it's useful to know that when people have priors, when they say, oh, I'm not so sure about that result, those priors are worth at least taking seriously. Whether they end up being true or not, we don't know, but at least pay attention to that skepticism, or lack of skepticism if people really believe it. But then other opportunities emerge if prediction markets become quite effective at anticipating replication success: for example, prioritizing which things to replicate. We can't replicate everything.

[00:30:48]

Resources are limited, and the more resources we put into replication, the less we put into innovation. So we need to be as efficient as possible between the two. And so the opportunity with these markets is to identify those findings that are very important but that the community feels very uncertain about, and to prioritize funding for replicating those, where it would be devastating for a field, or actually very useful, to learn that this isn't a viable direction, so that the resources for innovation can be placed in other directions to advance them more quickly.

[00:31:31]

So that's been the real success of that. And now we have a number of subsequent prediction markets ongoing for other replication projects, to see how viable this is as an approach.

[00:31:44]

Did you get any pushback about the prediction market idea? I find that people often feel like betting is a little mean-spirited. I mean, it's a signal of low confidence, which is as it should be. But when you're talking about events that are high-stakes, high emotional investment for people, I find that it can feel callous, especially if you're financially benefiting from someone else's failure.

[00:32:19]

Right? Yeah. And I can certainly resonate with that reaction. There are prediction markets for the likelihood of people dying, or for how much damage will occur if there's a war in this or that region. Prediction markets predict some really important and dramatic stuff. And you're like, wait a second, you're betting money on whether people are going to live or die, or, in this case, on whether their work is going to succeed or fail to replicate.

[00:32:46]

And I totally understand that feeling, that reaction. But the whole point is to get the person making the prediction to be invested in getting it right. And that's really the core. To every prediction we make about the world, we bring a lot of our own motivations: what we want to be right, what our ideologies are, how we understand the world. When money is on the line, the motivation is more focused on the money.

[00:33:18]

I have a particular reward here. The money actually pulls the prediction away from my beliefs, from what I would like to be true. Of course I want your study to replicate. I like you, I think you're so smart, I love the feeling. But if I have to put money down on it, oh, now I feel a little bit less certain about whether I'm going to bet on it.

[00:33:41]

So it actually is a way to pull people back from all of those feelings and emotions and good things that connect people and make the world nice and shiny. But I think it serves a very important purpose, because it creates that investment in accuracy. Actually, before we continue on this thread, I'm wondering: were you going to talk more about the potential risks of transparency in peer review, or did I cut you off in the middle of a thread there?

[00:34:14]

I forgot about that part.

[00:34:16]

That's really, you know, it's easy to say, oh, we should be transparent and reward people for their evaluations. The pushback that happens on transparency of peer review is the potential for retribution. If I am a junior researcher critiquing a famous researcher in my field, then that person might get angry with me and make it harder for me to get a job or keep a job, or otherwise. Yeah, and part of that is, we have to acknowledge, there are unknowns there.

[00:34:45]

It's possible that a transparent peer review process would be riskier for junior researchers than a closed peer review process. But I actually think it's the opposite. And that's because, and this is actually related to the point you made introducing transparent peer review, a senior researcher is able to do a whole lot of things without accountability in a private system. When you submit your papers and they have a different point of view than yours, they can kill the paper, and they can kill it in really egregious ways, because they're a senior person and no one knows that they did it except for the editor.

[00:35:25]

And of course, the editor is the one who does know, and the editor is often more junior than that very senior person and can't stand up to that senior researcher or risk offending them. So there's all kinds of bad behavior that's easier to do when there isn't transparency. Transparency at least allows you to detect it more easily. So if I am a junior researcher critiquing some senior researcher, other people can see how that person responds, and both of our reputations will be affected.

[00:35:53]

So it is a prediction, but my prediction is that transparency in the peer review process actually decreases the rate of misbehavior rather than increasing it.

[00:36:02]

Good. Yeah, I'm glad we touched on that. That seems like an important point. And I don't know if this is a feasible experiment to do, but it would be interesting to cash out that prediction concretely and put some money on it, to see if we are correct about the effects.

[00:36:20]

Yeah.

[00:36:21]

So there is one experiment that I know about on transparency of peer review. It didn't check on retribution, but it did check on whether, when people have to sign their names, they are less critical of the research. That would be the other concern, that everybody just says, oh, this is all great. And they didn't find that. They found no difference in the extent of the critique, with transparency or without.

[00:36:50]

So, you know, that is one study, but that's the one that's been done so far that I know of. If only they could also check the rate of researchers avoiding each other at parties or something.

[00:37:02]

So, yeah, we all have our cell phones now, so we should be able to do that automatically, have it attached to the phones, and then we'll know. I love it.

[00:37:14]

That's good.

[00:37:15]

In our last few minutes, I wanted to continue down this thread of exploring the critique about tone: the critique that says, yes, openness is valuable and virtuous and so on, but the fact that the openness crowd has been pushing it so hard, and using openness to critique other researchers' work, is, well, kind of mean-spirited. And this is not the dominant response to the openness rallying cry, but it's not uncommon. The responses have ranged all the way from,

[00:38:05]

"Yes, thank you, thank God someone is finally talking about this problem," to, "you guys are methodological terrorists," for example. So actually, maybe we should give one pretty striking example, which was the, um, well, actually, I guess this wasn't about tone. I was thinking of the case of the power posing research, where you had these two researchers who both authored this famous research showing that standing in a sort of powerful pose can make you feel more confident and powerful and has all these good effects. And after that failed to replicate, the two researchers went in totally different directions. One of them said, yeah, you know, I accept this.

[00:38:52]

I no longer think that power posing is a thing. And the other researcher stood her ground and was like, no, it's still a thing, and the open science side is wrong. So it's just been really striking to see the vast difference in how people responded to it. But I'm wondering if you have any thoughts about whether there's any validity to the critiques about tone, about mean-spiritedness, or anything else?

[00:39:22]

Yeah, this is an important issue in one way, and, unimportant is not the right word, but a sort of side issue in another way. The way that it's a side issue is that people behave badly everywhere, and especially on the Internet. And so the fact that there is bad behavior among scientists on the Internet is not news, any more than any comments section on any news website is.

[00:39:46]

Yeah, I feel like you guys have been pretty polite and restrained, in the grand scheme of things.

[00:39:51]

Honestly, some have been and some haven't; there have been some real nasties. But the example you gave of power posing, the disagreement between Dana Carney, who has said, "I no longer believe this research," and Amy Cuddy, who is still advancing some of the theories about power posing, that is a normal scientific disagreement. They have two different views of what the state of the literature is, and they both have been very responsible, responsive, and careful in how it is

[00:40:21]

they talk about those issues, even though there is clearly a disagreement. But at the same time, in the same domain, Amy has been called lots of really nasty things, personal things, not just critiques of the research, but critiques of her as an individual in the field.

[00:40:40]

And that's just gross. Like, why are we doing that? So there is a reality there: there are people behind the science, and one has to recognize that, of course, critique hurts. I've been critiqued my whole life on everything that I study, and it doesn't feel great to get critiqued. But that doesn't mean critique is inappropriate. We do have to recognize that we are human and we are going to respond to things in different ways.

[00:41:13]

So my feeling on the overall issue of tone, having been on the receiving end of real harsh critique, and having given critique, and I hope I don't do it harshly, I hope I do it constructively, is that I can't control my reputation, but I can control my integrity. And so the way I focus on how I give and respond to critique is to think about how I want to behave as a person.

[00:41:48]

And if other people are going to misbehave and talk nasty and do things that are inappropriate, well, that's ultimately on them. If I spend my time worrying, oh my God, they were nasty to me in the public eye, will I lose my reputation? If I'm worrying all about that, then I'm not likely to maintain my own integrity for how I think I should behave. And I think in the long run it's a much bigger benefit to me to be productively engaged in what is supposed to be a contentious, skeptical environment.

[00:42:20]

Right. Science is all about skepticism and critique and the clashing of ideas. So instead I try to value each person as genuinely trying to figure things out, while also, of course, having their own personal ego investment in all of this. And I just try to tread lightly.

[00:42:42]

Yeah, yeah. That does sound like a pretty valuable mindset to have. Oh, but you know what else you could do? I just thought of this, Brian. You could add a fourth badge, like a niceness badge. And you get that if all of your critiques have been polite and respectful.

[00:42:59]

Yeah. So who gets to be the judge of the niceness? I like it.

[00:43:07]

All right. Well, we're just about out of time for this section of the podcast, so I'll wrap up this conversation now and we'll move on to the Rationally Speaking pick. Welcome back. Every episode, we invite our guest to introduce the Rationally Speaking pick of the episode. That's a book or a website or something that has influenced their thinking in some way. So, Brian, what's your pick for today's show? My pick is from grad school, when I was learning about these issues, the challenges of reproducibility and open science, and realizing that my ideal of science from second grade is not actually how science is done.

[00:44:04]

What was stunning to me in learning about these issues is that they have been understood for a long time. There were papers in the nineteen fifties, sixties, and seventies all detailing the challenges of low power, of lack of planning in research, of flexibility in analysis. And they also outlined all the solutions that we're pursuing now. So what was amazing in grad school was to realize that the problems and solutions have been known for a long time. It's just that culture change is so hard that they hadn't been implemented.

[00:44:40]

And so a paper that really inspired me at that time is one by Tony Greenwald, who actually is my academic grandfather and my most frequent collaborator. He wrote a paper in nineteen seventy-five called "Prejudice Against the Null Hypothesis." And the point of the paper was to show that people think that finding no relationship, a null result, in a study means that it's less meaningful and we should ignore it. And he talked about what the consequence of that prejudice is in terms of decreasing the credibility of the published literature.

[00:45:20]

And you can find that paper just by Googling it, and we'll link to it on the site.

[00:45:25]

And if you read it today, you could be reading it as if it was written yesterday rather than in nineteen seventy-five. And so to me, that paper, along with a variety of others from the same time period, was just a revelation. And it really has inspired me to do the work that I've been doing.

[00:45:46]

That's really cool in some ways, that people generations ago were saying this, and also a little depressing that, you know, nothing, well, not nothing, but little came of it, and we're still having to tackle those issues today. Yeah.

[00:46:03]

Jacob Cohen, who is famous for introducing the concept of statistical power, wrote in the nineteen nineties, having written his initial book in the nineteen sixties, something to the effect of: we've been talking about this for thirty years and nothing seems to have changed; I just feel like a grouse at this point. And I think it really is that the methodologists did figure out what needed to be changed. What they didn't do was apply psychology to the practice of science in order to actually get the change to happen.

[00:46:34]

Yeah, interesting. That is a cool piece of added value. It seems like you're hitting an important mechanism there that we were missing before. A lot is happening.

[00:46:45]

So it's a very optimistic time, I think, for science. Cool. Well, Brian, we'll link to that paper on the website, as well as to the excellent Center for Open Science. And I just want to thank you so much for coming on the show. My pleasure. Thanks for having me. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense. The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org.

[00:47:32]

This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, "Truth," by Todd Rundgren, is used by permission. Thank you for listening.