Overcoming bias in peer review

Zia Mehrabi
5 min read · Aug 12, 2022


We need to make peer review more transparent.

Like any academic, I get asked to review papers as a service to the academic community. I often have to turn these invitations down: I simply get asked to do too many reviews. If I accepted every invitation, I would spend my whole academic career judging the work of others and doing none of my own.

There are a number of reasons besides time why you might not review a paper. One might be that the journal in question is predatory. Another could be that the journal's policies are not aligned with your values: many academics purposefully boycott Elsevier, Cell Press, or Nature-branded journals. Within many of the research communities I engage with, there is a growing preference for supporting non-profit journals wherever possible.

Another reason you might decline to review could be that you have already done your fair share for the year. Many authors employ a 1:1 policy, reviewing one paper for every one they submit. Sometimes that fair-share quota is filled on a first-come, first-served basis, and later editors must simply get in line. At other times it is more selective: you review for journals you publish in, for editors you know and have worked with in the past. If an editor sends your paper out to review, you are more likely to review for them in the future.

If you worry about whether, and how, your reviews actually make a positive contribution to the scientific enterprise, you are probably not alone. In which scenarios will your opinion as a reviewer be useful, and in which will it distort and delay the publication of results that are useful to society?

Statistics help to show how this distortion could happen. Suppose you receive, for review, a paper with 100 authors. If a two-thirds majority of scientists in the field would agree that the paper should be published, and acceptance requires a majority of reviewers drawn at random from that field, then three reviewers would disagree with and reject that field consensus over 25% of the time, and the paper would not get published. Four reviewers would fail to accept the paper more than 40% of the time. If 100 experts in their field wrote a paper, is it really fair that three reviewers, chosen at random, stand with such strong odds in the way of publication?

Table 1. Probability of a binary accept-or-reject outcome on a paper, drawn from a population of scientists in which a two-thirds majority think the paper should be published.
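For readers who want to check these numbers, here is a minimal sketch (in Python, not part of the original article) under two assumptions: each reviewer is drawn at random from a field in which two-thirds would accept the paper and decides independently, and the paper is accepted only if a strict majority of the reviewers are in favour.

from math import comb

def p_reject(n_reviewers, p_agree=2/3):
    # Probability that fewer than a strict majority of the n randomly
    # chosen reviewers agree with the field consensus, so the paper is rejected.
    majority = n_reviewers // 2 + 1
    return sum(comb(n_reviewers, k) * p_agree**k * (1 - p_agree)**(n_reviewers - k)
               for k in range(majority))

print(round(p_reject(3), 2))  # 0.26: three reviewers block the paper over 25% of the time
print(round(p_reject(4), 2))  # 0.41: four reviewers block it more than 40% of the time

Under those assumptions the rejection rates match the figures quoted above; if unanimity were required instead of a simple majority, the odds against the paper would be worse still.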

In a world where data generation and science communication uphold ideals of objectivity, basic statistics say it makes no sense to waste time reviewing large multi-authored papers that already have so many experts on them. Probability says that neither the paper nor science would benefit from the lopsided power the editors grant you over any one of the co-authors.

Is it so? The logic outlined above places demands on research groups. For the co-authors' approval to stand in for field consensus, author teams would need to be randomly sampled from the field. They would need to evaluate their own blind spots and biases and, more broadly, their equity, diversity and inclusion processes. In papers like the large multi-authored one above, we can all see how these demands are not met. But why not ask author teams to submit statements on the representativeness and diversity of the collaborators, and on how the research process was undertaken in a way that overcomes the biases that exist?

Some journals are making efforts in this direction, but arguably these are not being communicated to scientists widely or early enough in the research process for them to embed the practices in their work.

As a reviewer, it would be useful if papers were submitted with evidence of how the writing and internal review processes were made inclusive, documented as thoroughly as any peer review would be. I have rarely, if ever, seen anything like this in practice. But something like this might make it far easier for editors and reviewers to direct the review process to counter existing biases rather than exacerbate them.

Everyone knows there are biases in science and peer review. Co-authors can compromise objectivity, simply because they do not want to hurt the feelings of the lead authors or cause them more work, or because pressure to publish means they care less about the content of the paper than about whether it gets published with their name on it. Reviewers often hold undeclared competing interests, particularly over knowledge territory wars or grant acquisition battles.

And let's not forget the biases on the editorial side too, as anyone who has worked long enough in this game will tell you. The unwritten contract between authors, editors and reviewers is built on non-transparent relationships: who you know and who you trust, with decisions made along the way in which individual career advancement for any of the parties involved can readily be traded off against scientific integrity.

Signs do suggest things are beginning to change for the better. Movements to better register biases in both the scientific and peer review processes, related to racial, ethnic and gender diversity, are already in motion. This is a good start, but if we really want to address the problem we may need to go further: to request, generate and institutionalize the collection of data, from everyone involved, that helps us assess and study how the multitude of biases in peer review affects the advancement of science.

Doing something like this places higher demands on research groups, journals and reviewers, but it is something universities and funders might support more readily. After all, any effort to improve peer review will inherently improve the way science is conducted and disseminated, for research and, more broadly, for the society we serve.

Written by Zia Mehrabi
Assistant Professor, University of Colorado Boulder
