In peer review we (don't) trust: How peer review's filtering poses a systemic risk to science

    Conversation

    4 Comments

  1. Harry Crane, September 4th, 2020 at 07:14 pm

    Thanks for your input and my apologies for the delayed response. I'll address a few of your points here, while we think further about how to incorporate these ideas in a revised version.

    1. Your first concern seems to be that we do not provide data to back up our claims. While I don't believe data analysis has a role in our argument, it is worth noting that such an analysis would not be possible because submission information is kept confidential for most journals in most fields. When a paper is published in one journal, it is not generally known whether it has been submitted previously to any other journals. It is also not known what the reviewers' comments may have been or how the authors addressed those comments. So I'm afraid that even if one had data to perform such an analysis, that data would be rife with bias of its own, and the analysis would be prone to error.

    As for the second half of this point, about whether peer review is successful in improving papers and filtering out *some* bad science: I understand your argument, but it completely misses the point of our article and the mission of Researchers.One. Of course, we agree that peer review is worthwhile in improving research quality. That is why we have created Researchers.One as a platform whose central focus is to encourage and facilitate peer review. However, it is a fallacy to believe that, in order for peer review to be effective, it must also be tied to the accept/reject decision of a journal. Also, while the idea of peer review is to improve quality, it should not be to filter out "bad science". As we have written in this paper, it is exactly the perception that peer review filters out bad science that puts the current system on shaky ground.

    2. Your comments on prestige versus quality reflect a very different experience than I have had, both as author and reviewer. Perhaps referees don't focus on prestige as much as editors do, but referees are biased by other factors. Often, the referee is a competing researcher in the same field. They have an incentive to poke holes in a competing idea. And, due to anonymity, they face no consequences for being uncharitable and unfair in their criticism.

    To your point about "low-mid impact" journals: you might be right, but now I'd ask, if these are admittedly "low-mid impact" papers, why must they be put through such a drawn-out editorial process before they can be published? By all means, it is beneficial that the work is reviewed, but why does this have to happen *before* publication? It doesn't. If a paper is especially low impact and wrong, and nobody ever reads it, then there is no loss in publishing it. There is, however, a loss in publishing it as an official "peer-reviewed article of the XXX society". Once it attains that stamp of approval, it is given credibility, but its claims are no less wrong than they were beforehand.

    3. Once again, our suggestion is not to do away with peer review. It is to do away with the filter. As a layperson, you should still have access to the peer review and comments of others in the field. That is the mechanism R.1 provides. In future versions of R.1 (coming soon), there will be a means for researchers to curate their own lists of articles which they deem important and credible in their field. In this way, as a layperson, you could seek out the recommendations of renowned physicists (real people whose credentials can easily be checked) in the same way that you now rely on the recommendations of journals (bureaucratic and secretive institutions).

    Finally, I believe your argument about scientific literacy is confusing and ill-founded. Just because a paper has been put through peer review does not make that paper any more understandable to someone with negligible knowledge of the field. If experts in a field of study, who understand the process by which work is published and research is performed, are taught to be skeptical of published results until they can verify them themselves, then why shouldn't the public, which has even less knowledge of the field, share the same skepticism? The idea that we can provide a literature that is flawless, and whose results can be taken at face value, illustrates the point of our paper. It's a noble aim, and ultimately an unachievable one. Yes, many papers that are published in the literature are sound (to some degree). But it's impossible to ensure that the literature is perfect. The perception that the literature is reliable and can be taken on blind faith is what leads to the perpetuation of falsehoods in the general public, by journalists who can't assess the results for themselves but are led to believe that "peer reviewed" means "true".

  2. James Van Dyke, September 4th, 2020 at 06:42 pm

    Full disclosure: I came across this article via a Twitter conversation about the benefits of the pre-print publishing model over the traditional peer-review model. I think this essay on peer review is a good start, but in my view it is unconvincing for several reasons:

    1. The focus on peer review letting poor science be published (section 2) is seemingly valid, but it is only half of the story and a clear example of survivorship bias. I think that for this argument to be supported, you need to present data to test 1) how frequently papers are rejected by journals, 2) what proportion of those rejected papers are eventually published elsewhere, and 3) what proportion of those papers were not improved to the point that they were reasonable to publish (especially if methodological or analytical problems were identified in the first reviews). In other words, how many “bad” papers does peer review actually prevent vs allow? If peer review is failing as a selective filter on quality, then we need data on the bad papers that go unpublished in addition to the bad papers that are published. This is critical to judging the true effectiveness of peer review. By seeing only the papers that survive peer review, we are necessarily biasing our observation of the population of papers. A valid hypothesis could be that peer review IS filtering out the majority of bad science, but we just don’t see those papers (or their reviews) because they are (reasonably) confidential. (See the simulation sketch after this comment.)

    2. The statement that peer review focuses on prestige rather than quality (or scholarship) is a very narrow assertion focusing on the highest of high-impact journals, like Nature and Science. I think this is flawed for two reasons. First, these kinds of journals perform an editorial filter even before articles are offered to peer review, and that is the stage where the “prestige” of the result is most important. Once the paper is sent to peer review, the prestige matters less than the reviewers’ interpretation of the quality of the science. In other words, it is an editorial problem, rather than a problem with peer review per se. Second, this statement certainly does not apply to the majority of traditional society-based journals, at least in my field of vertebrate biology. The majority of “low-middle impact” journals (which is a problematic concept on its own) rely far more heavily on peer reviewers’ critical assessments of quality and scholarship than they do on the prestige of the result. I think there IS a difference between these field-specific journals and the Science/Nature journals, in that editors at the field-specific journals actually know the field an article is in, and are thus more likely to be able to know or find reviewers who are experts in that field. An editor at Science or Nature has to cover much broader ground, and so can be expected to be less accurate and precise in finding experts in the tiny focus area of a given article. Thus, I would expect the quality of peer review to be higher in field-specific journals than it is in broader, high-impact journals. However, this again is not a criticism of peer review itself, but a criticism of how journal impact/status drives problems in how papers are judged at the editorial level, rather than the peer-review level.

    3. I’d like to see comment on the importance of peer review not as a filter for other scientists within a field, but for non-specialists who don’t know the details of that field. I feel this as a biologist. I know next to nothing about physics, so if I had to judge for myself whether a paper in physics was valid, I’d be pretty useless. The risk of the current pre-print model is that it isn’t always clear to the reader that an article is “just” a pre-print. When a news agency publicizes a new publication, it gives the article de facto credence with the broader community, including laypeople. Science recently did this with a fascinating pre-print article about gene editing in a lizard (https://www.sciencemag.org/news/2019/04/game-changing-gene-edit-turned-anole-lizard-albino). By publicizing a pre-print before peer review, Science is basically telling the public that this is a totally valid result. Maybe it is, maybe it isn’t (my knowledge of CRISPR is also so poor that I would make a bad reviewer), but the risk I see is that the media now promote stories that haven’t gone through the bare minimum of quality filtering that peer review might represent. If this work turned out to be invalid for a reason that peer review would have detected, then the preprint model has allowed it to become available for public consumption erroneously. I see this as a major problem in an era when science literacy is low in the general population. Without some mechanism for peer review before an article becomes widely available, how do the media, the general public, or even scientists in other fields tell good science apart from the bombardment of pseudoscience they receive from the news, websites, blogs, politicians, documentaries, etc. every day? Democratization of science is a wonderful ideal, but it’s not likely to be successful unless the person reading the science has enough expertise to judge it, and that is so hard that even scientists aren’t really in a position to do it until they’ve had 5+ years of specialized training. In my view, confidential peer review exists to provide that filter in the modern age. It’s something we can point to, to separate (mostly) good science from everything else, and give the public something that they can usually rely on as a metric. We should only think about abandoning it if we can show that it really isn’t working, and to do that, we need to test for survivorship bias.
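
    The survivorship-bias argument in point 1 above can be made concrete with a short simulation. The sketch below is illustrative only, with made-up submission and rejection rates rather than data from any journal: a filter that stops 80% of bad submissions can coexist with a published literature in which roughly one paper in eight is still bad, which is all an observer who sees only published papers has to reason from.

    ```python
    # A minimal simulation of the survivorship-bias point. All numbers are
    # hypothetical. If reviewers reject most bad submissions, the published
    # literature still contains visibly bad papers, even though the filter
    # catches the majority of them out of sight.
    import random

    random.seed(1)

    N = 10_000            # submitted papers
    P_BAD = 0.40          # fraction of submissions that are "bad"
    P_REJECT_BAD = 0.80   # chance a bad paper is rejected (filter sensitivity)
    P_REJECT_GOOD = 0.10  # chance a good paper is wrongly rejected

    total_bad = rejected_bad = published_bad = published = 0
    for _ in range(N):
        bad = random.random() < P_BAD
        rejected = random.random() < (P_REJECT_BAD if bad else P_REJECT_GOOD)
        total_bad += bad
        if rejected:
            rejected_bad += bad
        else:
            published += 1
            published_bad += bad

    # What an outside observer sees vs what the filter actually did.
    print(f"bad papers visible in the published literature: {published_bad / published:.1%}")
    print(f"bad papers actually stopped by the filter:      {rejected_bad / total_bad:.1%}")
    ```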

  3. Robert Ryley, May 5th, 2020 at 06:43 pm

    I've had to learn this truth about "peer reviewed" literature the hard way.

    Anyone who has attempted to learn from the medical and health-care scientific journals, to the point of attempting a formal research synthesis, will have to grapple with the effects of 1. poor reporting by primary authors, 2. poor understanding of statistics by authors and reviewers, and 3. even misunderstandings of logic by the very peer reviewers who are supposed to prevent mistakes.

    Granting the argument that peer review creates a biased literature, what does that do for so-called evidence-based medicine, especially the naive "evidence pyramid" that places retrospective systematic reviews at the top?

    For starters, most meta-analyses of patient-reported outcomes (PROs) are almost certainly unreliable as published, in that they apply parametric assumptions to inherently ordinal data.

    The Missing Medians: The Exclusion of Ordinal Data from Meta-Analysis

    https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0145580

    Quote:

    In many cases, however, outcome measures are ordinal rather than continuous; a scale’s categories have a natural order, but it cannot be assumed that differences between the categories are equivalent ... Interpreting means and standard deviations in these conditions is problematic; medians and inter-quartile ranges are statistically more valid.

    These reporting considerations have important implications for meta-analysis. Where ordinal data are reported appropriately in individual studies, they are often excluded from meta-analysis due to the difficulty in pooling them. Alternatively, where study authors report means and standard deviations, often inappropriately, these data can be included in meta-analysis but the validity of the pooled results is questionable.

    These data aren't inherently difficult to pool if the log odds is used as the measure of effect.
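
    To make that pooling claim concrete, here is a minimal sketch. The study counts are hypothetical, and dichotomizing at a single cutpoint is my own simplification rather than anything from the PLOS paper; a proportional-odds analysis would use every cutpoint at once. The point is only that, once each study is summarized as a log odds ratio, ordinal outcomes pool just like any other effect measure.

    ```python
    # A sketch of pooling ordinal outcomes on the log odds scale.
    # Study counts are hypothetical; dichotomizing at one cutpoint is a
    # simplification of a full proportional-odds (cumulative log odds) model.
    import math

    def log_odds_ratio(treat_counts, ctrl_counts, cutpoint):
        """Dichotomize ordinal counts (worst -> best) at `cutpoint` and return
        the log odds ratio and its variance; 0.5 is added to every cell as a
        continuity correction against empty cells."""
        a = sum(treat_counts[cutpoint:]) + 0.5   # treatment, above cutpoint
        b = sum(treat_counts[:cutpoint]) + 0.5   # treatment, at or below
        c = sum(ctrl_counts[cutpoint:]) + 0.5    # control, above cutpoint
        d = sum(ctrl_counts[:cutpoint]) + 0.5    # control, at or below
        return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

    def pool_fixed_effect(effects):
        """Fixed-effect inverse-variance pooling of (log OR, variance) pairs."""
        weights = [1 / v for _, v in effects]
        pooled = sum(w * e for (e, _), w in zip(effects, weights)) / sum(weights)
        return pooled, math.sqrt(1 / sum(weights))

    # Three hypothetical studies reporting a 5-category ordinal outcome.
    studies = [
        ([4, 10, 20, 12, 4], [8, 14, 18, 8, 2]),
        ([2, 6, 15, 10, 7],  [5, 9, 14, 9, 3]),
        ([6, 12, 25, 15, 7], [9, 16, 22, 11, 2]),
    ]

    effects = [log_odds_ratio(t, c, cutpoint=3) for t, c in studies]
    pooled, se = pool_fixed_effect(effects)
    print(f"pooled log OR = {pooled:.3f} "
          f"(95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")
    ```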

    I don't consider myself a nihilist, in the sense of believing that nothing can be learned from the peer-reviewed literature. But the work needs to be taken with large heaps of salt before it should influence a clinician's decision making.

  4. Ignacio Oliveras, March 26th, 2019 at 07:57 am

    Good article, congratulations. Good luck with the project.

    Nevertheless, why use precisely the flu vaccine as an example, when it is not even clear that it creates herd immunity?

    https://academic.oup.com/cid/article/56/10/1363/404283

    I would suggest using measles as the example instead.


Abstract

This article describes how the filtering role played by peer review may actually be harmful rather than helpful to the quality of the scientific literature. We argue that, instead of trying to filter out low-quality research, as is done by traditional journals, a better strategy is to let everything through but with an acknowledgment of the uncertain quality of what is published, as is done on the RESEARCHERS.ONE platform. We refer to this as "scholarly mithridatism." When researchers approach what they read with doubt rather than blind trust, they are more likely to identify errors, which protects the scientific community from the dangerous effects of error propagation, making the literature stronger rather than more fragile.

Versions

Version 1 (2018-09-14)

Citation

Harry Crane and Ryan Martin (2018). In peer review we (don't) trust: How peer review's filtering poses a systemic risk to science. Researchers.One, https://researchers.one/articles/in-peer-review-we-don-t-trust-how-peer-review-s-filtering-poses-a-systemic-risk-to-science/5f52699b36a3e45f17ae7d74/v1.

© 2018-2020 Researchers.One