Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I am your host, Massimo Pigliucci, and with me, as always, is my co-host, Julia Galef. Julia, what are we going to talk about today?
Massimo, our topic today is the peer review process, the process by which papers get approved for academic publication. It's pretty much a black box to most people, so we're going to try to demystify it a bit, talk about some of the potential problems with it and the ways that impacts the development of science, and also some of the ways the peer review process is changing, or being undermined, in the new era of online science.
Yeah, so should we start with a basic outline of the way it usually works? Yeah.
Why don't you give that outline? Because I'm sure you're more directly familiar with it than I am.

Yeah, I'd say a little too familiar, you know, as an author, as a reviewer, and as an editor. Actually, I've played all three of the major roles, and the process is really pretty straightforward, at least in the basic idea.
So you are the author of a scholarly paper, and no matter what the discipline is, be it science, philosophy, English literature, or whatever, you write up your paper in a certain style, which is usually pretty well established within a particular discipline. Most science papers, for instance, have a canonical structure. They start out with an introduction. They continue with the materials and methods, so that people know what you actually did in practice and how to replicate it if they want to. There's a results section, where you actually present the data, and then finally the discussion, where you tell people why this whole thing is relevant and what it means.
Now, the paper gets submitted to a journal, and only one journal. That's one of the things we might want to talk about today, because there have been some suggestions recently of doing things differently. But anyway, it's supposed to be submitted to just one journal, and then you wait for the editor to let you know whether the paper is accepted, rejected, or accepted with modifications.
So those are the three typical outcomes. And how long does it usually take to hear back?
It depends largely both on the journal and on the field; different fields have different standards. But typically, for instance, in evolutionary biology and ecology, where I publish most of my papers, it can take anywhere between three to four or five, sometimes six months.
That's just to hear back. So if you need to make revisions or if you need to resubmit to a different journal. Right. This process could drag on quite a while.
Definitely. It can go into a year, sometimes a year and a half. Right. And then, of course, if it gets rejected, you have to start over. The best-case scenario is a few months.
So from the point of view of the author, you submit the paper and then you just wait. And, as I said, the three possible outcomes are, first, acceptance with no revisions, which almost never happens; it has happened twice to me in my entire career that the editor actually writes back and says, yep, we'll publish it.
And you sort of blinked and rubbed your eyes when you read the letter, right?

In some sense I was actually spoiled, because this happened with my second paper, and I thought, wow, if it's always like that, great. It essentially never happened again in 30 or 40 years.
So the most likely outcome is rejection. In fact, the large majority of papers, especially those submitted to top-level journals, get rejected. Top-level journals are actually proud of, and publish, what they call the acceptance rate, and the lower the acceptance rate, the better they feel, because they feel more and more like an exclusive club.
Right. Like colleges. Yes, pretty much.
And acceptance rates in top journals can be as low as less than 10 percent.

OK, so to get into Nature or Science, or the top-level journals within a field like evolutionary ecology, those are the acceptance rates we're talking about, anywhere between 10 and 20 or 30 percent. Now, the second most common outcome, after rejection, I think, is acceptance with revisions. That means that you really have to go back and make some substantive modifications based on what the reviewers suggested.
And then you submit the paper again, which is typically sent out for review again, and not necessarily to the same people who reviewed it the first time, though the editor usually does make an effort to send it to the same people.
But, you know, sometimes the same people are not available or they're tired of the paper or whatever.
So there's a chance that at least one of the reviewers is different from the original ones, which of course opens up the possibility that the author made certain modifications in a certain direction, following the suggestions of one reviewer, and then the new reviewer comes in and says, actually, I don't like this at all, I liked the other version, or something like that.
Usually this goes on for one or two rounds, no more. If the editor then decides that the paper is in good enough shape, it goes into production. Otherwise, occasionally the editor, even after one or two rounds, can decide that the paper is still not acceptable, and therefore it gets permanently rejected. So it's one outcome or the other.
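Since this is a workflow with a small set of states, the outcomes Massimo just described can be sketched as a toy state machine. The names and the two-round cutoff here are illustrative only, not any journal's actual policy:

```python
from enum import Enum

class Decision(Enum):
    """The three possible editorial outcomes described above."""
    ACCEPT = "accept"                  # almost never happens outright
    ACCEPT_WITH_REVISIONS = "revise"   # the second most common outcome
    REJECT = "reject"                  # the most common outcome

def next_step(decision: Decision, rounds_so_far: int, max_rounds: int = 2) -> str:
    """What happens to the manuscript after a round of review."""
    if decision is Decision.ACCEPT:
        return "production"            # the paper goes into production
    if decision is Decision.REJECT:
        return "start over elsewhere"  # resubmit to a different journal
    # Accept with revisions: usually only one or two rounds are allowed,
    # after which the editor makes a final call either way
    if rounds_so_far >= max_rounds:
        return "final editorial decision"
    return "revise and resubmit"

print(next_step(Decision.ACCEPT_WITH_REVISIONS, rounds_so_far=1))  # prints "revise and resubmit"
```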
Now, the inside view, you know, the way the sausage is made, is this. The editor receives the paper, and in theory at least... I don't know how many editors actually do this, but in theory, the editor should first read the paper, the full paper, not just read the abstract and skim through. So it's a time-consuming thing to be an editor, especially of a major journal. And on the basis of that, you pick a minimum of two, sometimes three, four, or five reviewers, people who are experts in the field and who don't have conflicts of interest with the author.
What would be considered a conflict of interest? Having been a collaborator in the recent past? It's tricky, though, because someone who's an expert in that narrowly defined subject is likely to have collaborated, right?
That can be tricky. And, you know, editors do ask about conflicts of interest, and presumably most reviewers do disclose theirs. Sometimes it's easy to check. If the conflict is, say, being a co-author on a grant proposal that has been funded, or on a previously published paper, that's easy enough.
Typically, you don't ask people at the same institution either, regardless of whether they've been collaborators or not. So you're right that that can be tricky. In fact, journals typically keep a database of reviewers, with notes about how prompt the reviewers are. I don't know that all reviewers necessarily know this, but you keep it. I have a spreadsheet, for instance, for the journal that I'm editor in chief of, Philosophy and Theory in Biology, where we have the names of reviewers, their email addresses, the last time we asked them to review something, so that we don't use the same reviewers too often, and the subfield of expertise where they feel comfortable providing feedback.
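The bookkeeping Massimo describes, a spreadsheet of reviewers with contact info, last-asked dates, and subfields, could be modeled roughly like this. The field names and the six-month cooldown are made-up illustrations, not his journal's actual rules:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Reviewer:
    name: str
    email: str
    subfields: list[str]           # topics they feel comfortable reviewing
    last_asked: Optional[date]     # when we last asked them to review

def eligible(r: Reviewer, topic: str, today: date, cooldown_days: int = 180) -> bool:
    """Match by subfield and avoid over-using the same reviewer."""
    if topic not in r.subfields:
        return False
    if r.last_asked is None:
        return True
    return (today - r.last_asked).days >= cooldown_days

# Hypothetical entry in the database
rev = Reviewer("A. Smith", "asmith@example.edu",
               ["philosophy of biology"], date(2011, 1, 10))
print(eligible(rev, "philosophy of biology", today=date(2012, 2, 1)))  # prints True
```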
So the editor then sends the manuscript out for review. Usually, although that depends on the field, and this is another thing we might talk about, the reviewers are anonymous, but the author is not.

You mean that the reviewers know who the author is?

That's not true in all fields. For instance, in most of the humanities, the name of the author is removed by the editor before sending the paper out for review; not only the name of the author, but also any obvious internal references to the author himself.
So I can see some arguments, although I can also see counterarguments, for leaving the reviewers anonymous. But what is the argument for leaving the reviewers anonymous while publishing the name of the author?
Why that asymmetry?
Studies show that if a reviewer knows who the author is, there is bias that comes into the decision.
Right, that's an argument for not doing it that way. So the standard practice is that the author is not anonymous, and the piece of evidence you gave suggests that that is going to create a bias in the reviewers. Right. So what I was asking is, why would anybody set it up that way? If you're going to be anonymizing, why not anonymize both sides of the transaction?
Well, that's an interesting question. I never heard a particularly good answer to that, other than that it's the established practice. I actually tried, a number of years ago, to change that practice when I was on the board of a major professional society, the name of which I will not disclose: the Society for the Study of Evolution.

And you tried to change things and got whiplash there. Yeah.

And the response that I got was, well, you know, we've always done it that way. I pointed out that that was not an argument.
Well, yeah, but, you know, people are going to figure out anyway who the author is. They're going to guess.
See, this is why I was asking why the asymmetry? If you think people are going to figure it out anyway... I mean, I know you can frequently figure out who your reviewers are.

Depends on the field. Yeah, I guess not always. That's right, you can have reasonable guesses about it. But look, I don't actually think you can know for certain who either the reviewers or the author, if anonymized, are. And I think that the point is, in fact, that systematic studies of blind peer review, as I said, do show that that's enough to reduce or almost eliminate the bias. If you're not certain that it is that particular person, then there are several possibilities: it could be that person, it could be a colleague, it could be a student or a postdoc. Right. So I really don't see any reason not to.
So the argument that I've heard for keeping the reviewers, not the author, anonymous, is that they could feel pressure if they reject the paper. Maybe it's a paper by a more famous or politically connected colleague of theirs, and they reject it, and now they're blackballed, or there's some political tension, or something like that. Which may be true, but that doesn't actually preclude the option of anonymizing reviewers who reject the paper, while publishing the names of reviewers who approved a paper for publication.
And that would actually get us a lot of the benefits, I think, that we want, because a lot of the problem in the peer review process, in my eyes, comes from the fact that reviewers don't really have a strong incentive to do a thorough job vetting the paper, really checking the analysis and vouching for it. They're not really vouching for the paper, at least not publicly, when they approve it, because they're anonymous. So if they approve the paper and their name gets put on it, you know, "this paper was approved by such-and-such a person," then they have a much stronger incentive to do their homework.
And we don't have the problem of them being, you know, blackballed by someone for rejecting a paper.
That is a good point. However, there is a problem with that one, too, and there is actually a partial solution that many journals have adopted. The partial solution that many journals do implement is that they don't publish the names of reviewers associated with the individual papers they actually reviewed, but they publish, at the end of the year, a list of all of the people who have reviewed papers.

And that list doesn't distinguish positive or negative, right? Accepted or rejected?

That list is published, first of all, as a public acknowledgement that these people have actually done the work, and second of all, so that readers can know, oh, these are the people who've been deciding what's publishable or not.

That's pretty fuzzy information. You don't know which paper.

But here's the problem with your suggestion: there is actually a positive bias as well. I actually know people who sign their reviews when they're positive and not when they're negative. Some editors let that go through; I don't. Because the problem is that if you sign something only when it's positive, it is, for all effective purposes, asking a favor, or signaling your name to the author and saying, you know what, I accepted your paper this time; next time you'll be nice to me.
But that's the thing. If the person whose back you scratched doesn't accept your paper, you won't know about it, because they'll be anonymous. That's the beauty of my suggestion. Right.

But that person then also does not appear as a name on a positive acceptance of your paper.

Yeah, but you don't know whether the paper was even offered to them. Maybe they were never offered the paper.

But as you pointed out earlier, these fields are often actually pretty small, so you can make reasonable guesses.

Yeah, well, that's going to be a problem all the time, I guess. So, operating against that background, what's the best we can do, you know?

That's true.
So, just to finish the usual modus operandi of these things: the reviewers are typically instructed by the editor to do this within a few weeks; then they usually need to be reminded several times. The thing is, a few months later you get the comments back. Now, here's part of the problem. Some of these comments are actually very useful. I've seen reviewers do a very thorough job, writing two or three pages of report on a particular paper, indicating both the general problems with the paper as well as, indeed, almost line-by-line suggestions. The other reviewer writes, you know, five lines, and basically they either liked the thing or they didn't. That's not very useful for the editor. So if that happens, and the editor is serious about this, what happens often is that the editor says, well, this review is useless, so I need another review to replace it.
That means another three or four months of delay for the paper before the author actually hears back from the editor. So in the end, the process wraps up, usually, as I said, with two or three reviewers. Although some very high-profile journals like Science and Nature, whose turnaround time is much shorter, by the way, operate in a very different way. They actually send out a paper as a pre-review to members of their editorial board and say, look quickly at this and tell me if it is the kind of paper that should make it into Nature or Science. And if the editors agree...

Yeah, you know, this is sexy enough for you, it's cool enough stuff.

...then they send it out for review, with pretty strict guidelines in that case to get the reviews back within two or three weeks. So Science and Nature are kind of exceptional from that perspective. Hmm.
And finally, there are some journals that do things in a somewhat arcane way, like the Proceedings of the National Academy of Sciences USA.
In that case, the author can actually write directly to a member of the Academy and ask that person to submit the paper on their behalf, the way it used to be done in Victorian England.

Is the Proceedings of the National Academy of Sciences really old? Is that one of the original journals?

It is well known. I don't know exactly how old it is, but it is very old and very high-profile. It's considered, you know, the third-ranked general science journal after Science and Nature.
So how does the system work?
Well, you basically ask a buddy of yours to propose the paper, and your buddy essentially controls, to a large extent, the editorial review process. Now, PNAS, which is the abbreviation for the Proceedings, has actually been changing this. They recently went to a mixed system where you can do that, or you can simply submit a paper in the standard way to a generic editor, and then they make the editorial decisions. But still, that shows you that things actually change very slowly in this area.
And one more thing: you mentioned earlier the anonymous reviewers. It used to be that book reviews were anonymous, too. They no longer are; these days, you actually sign your book review. But if you read, for instance, the first review of the Origin of Species, which appeared in December of 1859 in the Times of London, it's anonymous. We know it was written by Thomas Huxley, because it eventually became sort of standard knowledge that, oh, that's the guy who actually wrote that particular review. But officially, these reviews were anonymous. And the point of it is interesting, because it's exactly what you said a minute ago: you don't want the author to retaliate against the reviewer. If the reviewer signs his review, and the author happens to be powerful, then the author has a chance to retaliate in the future.

Right, right. And with book reviews especially, there isn't this whole accept/reject dichotomy where you could easily divide that up if you wanted to.

Interesting. So that's the state of the art, pretty much, with, you know, variations between different fields.
Can I ask a question about the review process? I've never understood what exactly the incentive is for professors to participate. They're not being paid by the journal, right?

Right.

And I also don't understand: journals are for-profit enterprises, right? They're not operated by universities, which is another thing I don't really understand. But yeah, journals are not run by universities; they're for-profit. Why aren't they paying? Why are they getting all this unpaid labor? Especially given that it's anonymous, you don't even get prestige for being a reviewer. What is in it for the reviewers?
That's an excellent question. And frankly, even for editors, because even though editors are known and it is somewhat prestigious to be an editor, especially of a high level journal, it's a lot of work, I guarantee you.
Yeah, especially for a high-level journal, you literally can handle hundreds of manuscripts a year, which takes up pretty much all your time. Science and Nature, for instance, have full-time paid editors, but that's not true for most journals. For most professional journals, it's all a volunteer system.
So the answer to your first question, why people do it, is that it's a matter of the work ethic of the field. You're expected to donate, essentially, part of your time as a reviewer, editor, grant reviewer, and so on and so forth. And so, within limits, pretty much everybody pitches in. That is one of the reasons why editors are very careful not to overuse the same reviewer. And of course, it's always understood that a reviewer can decline to review a particular paper if he feels, you know, "I have done too much of this this year." For instance, I have a personal quota for grant reviews: when the National Science Foundation asks me to review things, the first two I accept.
But beyond that, it depends on what else I'm doing.
So it's really a matter of a sort of work ethic that has been entrenched in the field.

That's an interesting thing for economists to pay attention to. Yeah, it's interesting when there are exceptions to the whole "people only respond to incentives" idea.

Yeah, especially, as you pointed out, for the reviewers. They certainly don't. There's really pretty much no incentive.
Now, the other part of your question had to do with the journals. What you pointed out is actually one of the major motivations behind the open access movement of the last few years. And in fact, libraries have come out very strongly in these last few years, sort of taking over, or at least attempting, an alternative model for publishing journals.
Let's get a little bit of history here. Initially, scientific journals were in fact published by scientific societies. The very first journal ever published in biology was the Journal of the Linnean Society. That's where Darwin and Wallace published their initial joint paper, in 1858. And then after that, you had the Royal Society, and then the National Academy of Sciences in the United States.
So those were owned and controlled by the societies.
Then what happened, especially after World War Two, with the dramatic increase in the number of papers published, partly in response to the fact that there was now a lot of funding for science (the National Science Foundation was established after World War Two, that sort of thing), is that societies often retained the formal ownership of the journal. For instance, the Society for the Study of Evolution owns Evolution, and the editor is an elected officer of the society. But the journal is actually produced by a commercial publisher, in this case, I think, currently Blackwell. And you're right, this generates a bizarre situation where, for all effective purposes, the publisher gets the free work of the reviewers and the free work of the editor. And with very few exceptions, the research itself is obviously paid for, usually by federal money or by public money, or at the very least by the university.
Most scientific research in particular rests on the salaries of the people involved.
So at any rate, in either case, the publisher certainly doesn't pay the salaries of researchers nor for the research.
And then they turn around and charge a significant amount of money to the libraries. I don't understand; they're not even printing... I guess they are publishing print journals, but not as much as they used to. Their profit margins must be huge. Right.
The only thing that they really provide, as sort of added value, is the formatting. OK, the typesetting.

Yeah, that's about it, I guess. OK, here's another thing that they provide, but they don't have to be the ones who provide it: the stamp, the sort of certification of the quality of the paper. And that is a really useful thing to have, you know, when you're trying to evaluate the work of a professor, or the work on a certain topic, that sort of thing.

But, sorry to interrupt. First of all, you're right that the certification doesn't have to come from the commercial publisher, because, you know, the Society for the Study of Evolution puts its own stamp of approval on the papers, or whatever the relevant biological society is puts its own approval.
And not only that, but it can actually backfire. The commercial publisher's stamp of approval can backfire. Just this past week, Springer, for instance, which is arguably the largest, certainly one of the largest academic publishers, got into trouble because they were thinking of publishing... they didn't publish it, but they were seriously considering publishing a book that was basically written by intelligent design proponents, and they were going to publish it under a science series of books.

Why would they do that?

It's unclear whether they simply didn't realize what was going on, whether the editors were not familiar with the situation; I don't know exactly why. But of course, that provoked a big backlash against Springer. A few years ago, another publisher, which I actually think was Springer again, but I'm not absolutely positive about this, so don't quote me...

I think you're being quoted right now, by virtue of being on a podcast.

It was Springer, I think, but I'm not absolutely positive. A few years ago, they got into trouble because it turned out that a number of the journals they were publishing were fake: journals that were actually paid for by the pharmaceutical industry to publish its own papers as if they were peer reviewed.

Scandalous.

Exactly. Now, as soon as they got caught, they shut down the operation. But that's the problem with a commercial publisher: there are incentives there that go beyond, shall we say, the academic credentials of the authors.

Right. Well, yes. That's a sort of separate problem from what I was thinking of. But what I was going to say is that, even conceding it's a valuable service to provide this measure of the quality of a paper, the journal doesn't have to be the one publishing the papers. Now that storage is so cheap and publishing costs are so low, you can just put papers online. We could have people self-publish, and then we could have what used to be the journals rate the papers, or certify them. It's this wonderful merit badge to have been published in Nature or whatever. So Nature could just pick, you know, 20 papers every month, or however many they typically published in their journal, and say: these are the papers that we would have published if we were still publishing papers. And then you can still get that merit badge on your CV. "I would have been published in Nature, if Nature published papers," things like that.
So, for instance, there are rankings internal to the scientific community, systems that highlight, you know, the best thousand papers of the year, or the top 100 papers of the year in a particular subfield. And if you get that distinction, it actually is a serious thing. It means that your peers have looked around, and the selections are usually referred to as something like the most influential papers of the year. So that is in part done, although very little; it's certainly not systematic. What is being done more systematically, and has been increasing in impact over the last few years, is this sort of revenge of the librarians, where the libraries have always been at the forefront, complaining about the cost of academic journals, especially science journals. By the way, science journals are much, much more expensive than humanities journals, even though presumably the production costs are the same.

I guess that's just because science departments have bigger budgets.

That's right, exactly; that's the market. Correct. But right there, that tells you that the publisher is setting the price at whatever the market will bear. It has nothing to do with the value. Exactly. You know: OK, we can get more money out of these suckers, so let's charge them more. Now, the libraries have always been complaining about this, and occasionally, in fact, they have boycotted certain publishers, that sort of stuff.
The situation became particularly awful once more electronic databases started coming out. You would think that would be an improvement, but the thing is that a lot of the publishers, not just Springer but several others, bundled together a bunch of their journals and then sell the bundle to the library, removing the option of paying for each individual title. Per title, the library pays less, but the library can no longer choose individual titles. So you end up buying a bunch of stuff that you really wouldn't have subscribed to before. Now they come in a bundle, you sort of have to have them, and the price is set accordingly.

Like how albums used to be sold, how music used to be sold.

Correct. Correct, yeah.
Now, the revenge of the librarians started a number of years ago, when several libraries, together with several societies and even the occasional independent editor of an individual journal, realized that the new technology was beginning to make it possible for these things to be published in an open access format, and eventually online only. The first transition was open access, but still printed; and now a lot of open access journals, like mine, are online only. Initially, the resistance, frankly, came just from the culture of the community. For some reason, people thought that online journals are not serious, that maybe there is no peer review, or the peer review is not as serious as it should be. It's exactly the same process; it's just the medium of publication that's different. But initially, and still, some of my colleagues have this idea that, well, if it's on the Internet, then anybody can put it out there. Well, wait a minute...

That sounds like a legacy of the early days of the web, when anybody really could post anything. Yeah.

Or the idea that somehow it's easier to modify or to fake it, or whatever, because it's not printed. I'm not quite sure how to diagnose that mindset, actually.

Yes, we should call a psychologist about that kind of reaction. But that attitude is slowly going away.
I mean, I'm on tenure and promotion committees, for instance, and now I see that in almost all cases, these online publications and open access publications are accepted as being just as good. And who is doing this is the interesting question. A lot of the time, it's a library. So the journal Philosophy and Theory in Biology is published by the University of Michigan Libraries, who got a grant a few years ago to do exactly this kind of thing, and then started proposing to a number of editors to publish their journals. And frankly, they do an excellent job. The University of Michigan Library now publishes something like 50 different journals. Basically, what I do as an editor is take care only of the actual scholarly part of the editorial process. Once we approve a paper, it goes to the University of Michigan Libraries, where there is a full-time librarian who formats the paper, puts it into a nice-looking PDF, and publishes it on their website. And it's available for free.
Hmm. So I have a couple of questions you might be able to help me with.
The first is: I understand the importance of the peer review process, but it seems like there's a lot of information lost with the binary accept-or-reject decision. Especially, as I was saying, in the online era, where storage costs are cheap and you can basically post as many papers as you want online. Right. Why not just post everything that was submitted, post the comments, and post the overall decision about each paper? And you could have some sort of threshold marking, if you want: these papers above this threshold were publication-worthy. You could even have: OK, these are the twenty papers that we published; these are the other, say, 50 or 60 papers that we thought were above the threshold but that we decided not to publish for whatever reason; and then here are the other 200 papers or so that we didn't think were worth publishing, and here's the reasoning. That would actually be really educational. Why is that information not made public?
There are some attempts in that direction. They're not widespread yet, but let me give you two examples that actually work differently, and maybe that can show how we are really in the middle of a sort of innovation period.
Probably, if we're having this conversation again in, you know, five or ten years, the publishing landscape might look very different. So one of them is the PLoS series, the Public Library of Science. It's actually one of the oldest, if not the oldest, of the open access journals.
It's not online only. It's also available as a printed version.
And they have different editions: there's PLoS Biology, PLoS ONE, and so on and so forth.
Anyway, the PLoS method is this: they do have peer review and editorial decisions, but these are limited to basically deciding whether the paper is of sufficient basic quality, meaning it's well written, it's understandable, the data and the tables look like they're put together correctly, and there is no obvious flaw. But there is no judgment on the value of the research itself.
So the value being how useful or important it is?
Yes: how important, how relevant, that sort of stuff. That is left to the readership.
So the readers go there, download the papers, and in fact vote on and rank them. You can then see the rankings and you can see the comments of the readers themselves.
Yes. Oh, interesting. Well, that was actually going to be another question of mine. I've seen websites that seem to work really well with this kind of self-monitoring community system, where posts or comments get upvoted or downvoted, and the ones with the most karma points float up to the top.
And so you see those first. And you can see, for each member of the community, what they ranked highly and what they haven't, so you get a sense of their tastes and of how much you trust them.
And it works really well. So that's the point.
The problem there, of course, if we're talking about academic publications, is how do you restrict, or do you even restrict, the community? Right.
So if I use Yelp to find, you know, information or suggestions about restaurants, well, the community is the community at large.
I can easily imagine a paper, let's say one that shows new evidence for climate change, being inundated by negative comments from people who don't like that idea, or a paper on human evolution being inundated by creationism or intelligent design people who want to vote it down. So that's one issue. Of course, you can have people register on the website and all that.
But that gets more complicated. Here's another way of doing it. I discovered this recently through reading a small piece; we're going to put the link on the website for today's podcast so that people can look at the whole thing. There is a journal called Kairos.
It's an online journal on rhetoric and technology.
Don't ask me what they publish, because I don't read it, but they have an interesting system. When a paper gets submitted, the editor involves the entire editorial board. Everybody who is an editor for the journal gets involved, reads the paper, and they have an open internal discussion back and forth about the merits of the paper, with detailed notes kept by the editor about the entire process. They let this go on for a while.
I don't know exactly for how long.
I guess it probably depends, but it could be a matter of weeks, until they reach an agreement that, OK, there is enough merit in this paper that it can make it to publication, though not necessarily in its current version.
At that point, the editor-in-chief appoints one of the editors who participated in the discussion, and that person actually coaches the authors on how to modify the paper, taking into account the results of the discussion and all that sort of stuff. So it's not just accept or reject, or "accept with modifications, here are our general ideas about the modifications." There's somebody who actually coaches the author and says, OK, here's where we want this to go.
I'm going to give you a hand. And in fact, apparently, in some cases the involvement of the editor is so in-depth that the editor ends up being a co-author of the paper.
If there's enough invested, yes. Most unorthodox. Most unorthodox indeed.
I don't know. I mean, I actually like the idea, but I don't expect it to become particularly popular, for the simple reason that it's a huge investment of time on the part of the entire editorial board.
Yeah, I cannot imagine even asking my editors to do anything like that.
There will be a revolution on my hands.
One thing that we haven't talked about that's really relevant is the astrophysics archive, the arXiv. I don't know if it's only astrophysics; I don't think it is anymore, though it definitely started as just astrophysics. It's a venue where people can post what are essentially working papers and get feedback while they're still in the writing process. I think most of them do end up being submitted for publication in standard journals eventually, but in the meantime they're up on the archive for anyone to read or download or discuss.
And I've been pretty pro open source throughout this whole discussion.
But I do recognize the potential pitfalls of having things like the arXiv be completely unscreened and unfiltered, because stuff gets picked up by the blogs, and then by the news.
And, you know, the distinction is not always made very clearly, when things are being reported, that this article is just a working paper that someone posted to the arXiv and that has not been peer reviewed.
And so there have been a lot of kind of sketchy results that got discussed a lot online and in the news because they were posted to the arXiv or to a similar kind of open compendium.
So I was wondering if you have any thoughts about that trade-off between openness and screening. I tend to agree.
I mean, I do favor open source as much as possible, but I do recognize the value added by a community of experts that is closed, or at least limited, initially. Because otherwise, you're right: first of all, there is the effect of this stuff leaking to the media, and the media making all sorts of a mess out of it.
Oh, and by the way, sometimes that leakage has actually been caused by the authors themselves, who go to the media directly.
Sure, sure. And so on, in this era of more and more pressure on people to come up with high-profile results. I mean, I actually know some colleagues, not a lot, but some colleagues who have their own press agent.
Oh, a personal press agent, not through the university department? That's right. Yeah. There are also the university departments and the universities themselves, and of course press agencies push these kinds of things. And we've seen this kind of stuff occasionally turn into horrible things, like the whole debacle about cold fusion, for instance. Right.
Yeah. In our last episode, with Howie Schneider, we talked about the problem of the press jumping on an issue before it's been properly sorted out.
And I think we also talked about how whatever follow-up ends up occurring, whatever retraction or more sober iteration of the issue, just tends not to get nearly as much coverage. Right. So I guess the issue with open source papers that haven't been vetted plays into that whole structural problem.
Now, that said, particularly from the point of view of the public, one really should realize that peer review is essentially a never-ending process.
I mean, we really should talk about prepublication screening and post-publication screening, or review. Because, as it turns out, there was a study that came out a number of years ago looking at the number of citations of papers published in top-level scientific journals, and it turns out that two thirds of the papers published in the top journals are never cited,
not even once, during the five years following publication. In other words, they're dead. And these are the top journals.
So what that means, of course, is that the paper was judged good enough, for some reason, for publication.
You know, the editor and the reviewers decided it was good enough, but the rest of the community couldn't care less. They didn't pick up on it, and it never went anywhere.
The famous phrase is that the edifice of knowledge is built of individual bricks, but most of those bricks lie unused in the backyard, with nobody doing anything with them. So there's always that.
And not only that. The skeptic community, for instance, gets upset when a paper in favor of, say, paranormal research gets published in a peer-reviewed journal. This happened last year with the paper by Daryl Bem on precognition. Or when, occasionally, a paper on intelligent design gets published; this happened a few years ago in a very minor biology journal, but it still got published in a peer-reviewed journal.
Well, you know, those are things that happen, and they have absolutely no consequences from the point of view of the academic community, because those papers are simply going to be ignored by the rest of the practitioners. The reason they're so upsetting when they happen is that the paranormal community or the intelligent design community then makes a big brouhaha about the fact that, oh, we published a paper.
And the reason that works is that the public does not understand, does not have an appreciation of the fact, that getting published is only step one, and that there's a lot else that goes on continuously in terms of peer review within the academy. So just because you got published, that doesn't mean you're set. It's like getting a book published as an author.
It means you convinced the publishing house to publish your book. Now, whether the book is going to do well or not depends on the public. Just because you convinced an editor doesn't mean you're going to sell more than five copies, to your friends and your mother.
So truly understanding that, sort of calming down the whole debate and realizing that, look, the so-called peer review is only the first stage and it never really ends, might go some way toward alleviating these controversies. There's no sharp discontinuity between pre- and post-publication peer review.
Right. Excellent points. I think we've now pretty thoroughly demystified the black box, or as thoroughly as we can be expected to in a 45-minute podcast episode. So we will move on to the rationally speaking picks. But first, I want to remind all of our listeners that the Northeast Conference on Science and Skepticism, also known as NECSS, is coming up very soon.
Massimo and I will be there recording a live episode of the Rationally Speaking podcast, and there's a great lineup of other speakers and panelists and performers. It's going to be held April 21st and 22nd in New York City, and hopefully there are still tickets left. I encourage you to rush over to our website, necss.org, that's N-E-C-S-S dot org, to secure your tickets. Hope to see you there. And now we will wrap up and move on to the rationally speaking picks.
Welcome back. Every episode, Julia and I pick a couple of our favorite books, movies, websites, or whatever tickles our rational fancy. Let's start, as usual, with Julia's pick. Thanks, Massimo.
Well, this time I'm going to actually take advantage of that phrase that you always say:
"or whatever tickles our rational fancy." Oh, yeah.
That's why I picked this. It is not a book, it's not a website, it's not a movie, it's not an article.
It's a game. It's a game called Zendo, and it is possibly the best game that I've found for training rationality, or at least a particular subset of rationality.
So here's how it goes. There will be a link on our website so that you can see the rules and the pictures and the pieces more clearly, and maybe buy it if you want.
I'm not being paid by them to pitch it, by the way. Anyway, it's called Zendo. There's one player who's the master, and the others are the students. The playing pieces are these colored pyramids of different sizes and colors. The master selects a few of the pyramids, of his or her own choice of color and size, and arranges them in front of him or her.
And the master has some rule in mind. The rule might be something as specific as "a large green pyramid next to a small yellow pyramid," or it might be something general like "two pyramids."
But the students can't tell what the rule is; they can only see the arrangement of pyramids the master has put in front of them. So the game is the students trying to guess the master's rule. And the way they're allowed to guess it is by creating their own arrangements of pyramids and being told by the master, yes or no, that fits my rule or it doesn't.
And I think at certain times you can also venture a guess as to what the rule is, and you're either told you're right or you're shot down.
And so what this forces you to do is think about your hypotheses. You might have some sort of instinctive, knee-jerk guesses as to what the rule might be, and then you're forced to ask: should I broaden the set of hypotheses I'm considering? Maybe it is something general, like "two pyramids." That's not the first thing that comes to mind when I see a yellow and a green pyramid; I assume it's something more specific, but it might not be.
And then it forces you to ask yourself: what arrangements could I set up to test my theory most efficiently? Like, what if I'm told such-and-such doesn't fit the rule?
I want the arrangement I've just created to be the most efficient test, ruling out as many hypotheses about what the rule might be at once as possible, so that I can proceed as fast as possible toward narrowing in on the actual rule. So it's actually a model for doing science.
Like, what experiments can I set up to figure out what rule the universe is actually following?
And it's fun, and it just sort of intuitively hones your empiricist inclinations.
So, yes: Zendo. I encourage you to check it out.
Sounds good. Well, my pick is a website called Download the Universe, which may sound mightily ambitious.
Yeah. That's because the website is in the universe. It probably also sounds logically incoherent.
It may very well be, but I'll go on. The website is subtitled "The Science Ebook Review." It's actually a joint effort by a bunch of people, started by a journalist,
Carl Zimmer, who has actually been a guest of ours at NECSS before. A number of scientists, including, for instance, the physicist Sean Carroll, collaborate on it. Basically, the idea is to take seriously the expansion of ebook publishing. They publish reviews of books about science that are available only in electronic format, not books that also get published in print. There is an increasing number of new books that skip print altogether and go straight, and only, to electronic form.
So this website offers picks of the best of the currently or recently released electronic books.
They also occasionally feature some more general articles about the whole phenomenon of electronic publishing and how it affects science communication and science education. So check it out. They've had some really excellent reviews over the last few days. And despite the perhaps overly ambitious title, Download the Universe is actually a very, very good resource for anybody who's seriously interested in scientific publication.
Hey, I've been in journalism; I understand the importance of a catchy, somewhat overambitious title or headline, so I approve. And that sounds fascinating, and very apropos to this episode.
We are all out of time, unfortunately. So this concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.
The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, "Truth," by Todd Rundgren, is used by permission. Thank you for listening.