Three common sources of error in peer review, and how to minimize them

Researchers have an odd love-hate relationship with peer review. Most regard it as agonizing, but at the same time necessary. Peer review is of course a good thing when it provides the value expected of it: weeding out junk papers and improving the rest. Unfortunately, the former often doesn't work particularly well, and when the latter works, it usually does so only after a lot of wasted time, hoop-jumping and wading through absurdity. Perhaps we put up with this simply because the toil and pain of it all has been sustained for so long that it has come to define the culture of academia, one that believes no contribution can be taken seriously unless it has suffered and endured the pain, and thus earned the coveted badge of 'peer-reviewed publication'. Here, I argue that this painful route to endorsement, and the common failure of peer review to provide the value expected of it, are routinely exacerbated by three sources of error in the peer-review process, all of which can be minimized with some changes in practice.

Some interesting data for context are provided by a recent analysis of peer-review results from the journal Functional Ecology. Like many journals now, Functional Ecology invites submitting authors to include a list of suggested reviewers for their manuscripts, and editors commonly invite some of their reviewers from this list. Fox et al. (2016) found that author-preferred reviewers rated papers much more positively than did editor-selected reviewers, and papers reviewed by author-preferred reviewers were much more likely to be invited for revision than were papers reviewed by editor-selected reviewers. Few will be surprised by these findings, and there is good reason to be concerned that the expected value of peer review has missed the mark here. This failure is undoubtedly not unique to Functional Ecology.
It is, I suspect, likely to be a systemic feature of the traditional single-blind peer-review model, where reviewers know who the authors are, but not vice versa. The critical question is: what is the signal of failure here? Is it the fact that author-preferred reviewers rated papers more positively, or the fact that editor-selected reviewers rated papers more negatively? Either one could be a product of peer-review error, and at least three explanations could be involved:

(1) In some cases, there will be 'author-imposed positive bias': author-preferred reviewers are more likely to recommend acceptance because authors have an incentive to suggest reviewers they have reason to expect would review their paper positively.
(2) Other cases, however, will suffer from 'editor-imposed negative bias': editor-selected reviewers are more likely to recommend rejection because editors have an incentive to impose high rejection rates to elevate and maintain the impact factors of their journals, and thus compete with other journals for impact-factor status. Hence, to look like they are trying to meet the rejection-rate quota imposed by their publisher or Editor-in-Chief, associate and subject editors are sometimes inclined to favour reviewers whom they suspect are competitors, or even bitter rivals, of the author, since such reviewers are more likely to recommend rejection and less likely to offer suggestions for improving the manuscript. To achieve this, some editors even select non-preferred reviewers identified by authors. I conducted an experiment to test for this a few years ago in a submission to a high-end ecology journal, where I named a non-preferred reviewer who was in fact a good friend (and who knew I was conducting the experiment). Sure enough, shortly after my submission, my friend contacted me to report that he had been invited to review my paper (an invitation he declined).
(3) Finally, in some cases there will be 'unintended reviewer mismatch': editor-selected reviewers are more likely to recommend rejection because they are likely to be less equipped to understand the contribution of the manuscript, or to appreciate how or why it is interesting or important. In some cases, this occurs because of editor ignorance; after all, in spite of editors' best intentions, authors will generally know better who is most qualified to review their papers, and best equipped to recommend effective revisions that can bolster the quality and impact of the paper. In other cases (where authors choose not to name non-preferred reviewers), editors may inadvertently invite reviewers who are competitors, or likely to provide a 'retaliatory' review, without being aware of this conflict of interest (an error that is not risked with author-preferred reviewers). In still other cases, editors simply have little opportunity for quality control because they are forced to settle for whoever is willing to volunteer a gratuitous review (and no one except the editor has knowledge of the reviewer's identity, credibility or track record of reviewing quality). "With many traditional journals-because of low reviewer incentive-editors commonly end up sending a dozen or more requests before willing reviewers for a manuscript can be arranged, and so they are not the most 'preferred'-and hence not the best possible-reviewers for judging the quality of the manuscript" (Aarssen and Lortie 2012).

Minimizing errors
All three of these peer-review errors can be minimized by open, author-directed peer review that combines identification of reviewer names in accepted papers with published declarations of 'no conflict of interest' (from both authors and reviewers), and with incentives for reviewers to work together with authors to improve their papers.

Open review. Some researchers prefer to be anonymous reviewers because this enables them to voice criticism and recommend rejection of a paper without fear of later retaliation by the author. These concerns may be reasonable. But those who have them should abstain from peer review, because these concerns are vastly outweighed by the cost of single-blind review to the progress of science: by allowing reviewers to hide behind anonymity, there is no deterrent against biased, poor-quality reviews with draconian recommendations for rejection. For many people, the reason they volunteer their time to review is precisely because they can remain anonymous, not because they are nice people wanting to help advance science. Anonymous reviewing provides power over colleagues: power to approve manuscripts that support the reviewer's own research and reject those that conflict with it. Individual decisions to offer reviewing service are routinely laced with bias. Even when already busy and overworked, who would not accept (with eager anticipation) an invitation to review a paper that cites one's own work favourably, or disfavourably? Or a paper that is supportive or critical of one's favourite, or least favourite, theory? Or a paper authored by a research rival?
In contrast, with author-directed open peer review, "… authors can seek and arrange review of their papers from the best reviewers and most reputable researchers in their fields-and can also avoid reviewers that the author suspects might be a 'competitor' or likely to provide a 'retaliatory' review. [Editors, in contrast, are usually not sufficiently informed-nor as inclined-to avoid such biased reviewers]. Having the endorsement of a top-quality, unbiased reviewer/researcher in hand when submitting to a journal (and acknowledged in the published paper) represents strong evidence in support of the paper's merit. The quality/impact of an article therefore can be judged by who the acknowledged reviewers are (combined with the article's citation metrics), rather than relying on the usual inferior metric (the impact factor of the publishing journal)" (Aarssen and Lortie 2012).
No-Conflict-Of-Interest (NCOI) declarations. A conflict of interest occurs in peer review when the quality of a review is potentially compromised because circumstances exist that could limit the ability of the reviewer to be objective and unbiased. With NCOI declarations, signed by both authors (Figure 1) and reviewers (Figure 2) and published together with accepted papers, readers can be confident that the paper was peer reviewed and endorsed legitimately. Authors, in this case, will be disinclined to request reviews from close colleagues, to avoid the perception of cronyism (many editors and readers of published papers know, or can easily discover, the identities of an author's previous collaborators and close associates). In addition, with reviewers' names so identified, their reputations will be 'on the line'. Most, therefore, are likely to be honest, fair and rigorous in their reviews. Reviewers will not want their names used as public endorsements for inferior papers, or for papers whose publication will benefit the reviewer's own research reputation (at least not reviewers who want to be regarded as having integrity by journals, authors, and readers). With this model, then, reviewers have a strong incentive to maintain their reputations for reviewing with integrity.

Collaboration of authors and reviewers. Most human efforts are better when people collaborate with a spirit of honesty and good will. This doesn't always come easily, but it is virtually guaranteed to be absent under the traditional single-blind peer-review model.
When authors and reviewers collaborate to improve the quality of a paper, the collaboration can sometimes produce a reviewer response commentary that is published alongside the author's paper, if accepted. This can provide insight for readers beyond what the reviewed paper offers on its own, and it also gives credit to the reviewer for this contribution, thus serving, importantly, as an incentive to participate productively in the dissemination of the discovery that the author's paper represents.

Figure 1. Example of a No-Conflict-Of-Interest (NCOI) Declaration for authors.

Figure 2. Example of a No-Conflict-Of-Interest (NCOI) Declaration for reviewers.