Opinion: Reviewing Reviewers

Science needs a standard way to evaluate and reward journal reviewers.

By David Cameron Duffy | July 19, 2013

Refereeing or reviewing manuscripts for scientific journals is at the heart of science, despite its occasional imperfections. Reviewing is a check of quality, originality, impact, and even honesty for papers submitted to scientific journals. Unfortunately, referees are sort of like sperm donors: they are anonymous and their pleasure, if any, is in the process, not the result. No one acknowledges their contributions, except perhaps in small print at the back of a journal at the end of the year. Who in their right mind would want to referee? It takes lots of time to do well and gets no credit. The result is a refereeing crisis.

Various methods have been suggested to improve the situation. Some of these suffer from an approach that punishes reviewers for poor performance, rather than rewarding them for their hard work. Others involve complicated systems of payment or reciprocal altruism where reviewers are rewarded with access to journals that they may or may not be interested in submitting papers to.

I believe being asked to referee reflects one’s true standing in a field. Journal editors will always try to get the most knowledgeable and competent referees possible. I would suggest we build on existing impact measurements to encourage enlightened self-interest. For authors, we have a measure of impact, such as the commonly used h-index, a reflection of publications and citations. For journals, we have impact factors. While such measures for both journals and individual scientists can be misused, and in any event should be taken with a grain of salt, I suggest that something similar would provide a more benevolent basis for referee metrics.

Journals could produce an annual list of reviewers and the number of times each reviewed. The sum of the number of reviews by individual referees, multiplied by the impact factor of the journals for which they reviewed, should reflect their standing in the field. Reviewing in a top journal like Science or Nature would earn the highest scores, but such opportunities are necessarily less common. Reviewing manuscripts submitted to mid-rank but more focused journals would therefore be more likely to drive individual scores. Finally, reviewing for low-ranking journals would not boost scores much but, as at present, could be considered a moral obligation to the scientific community. Additional indices could correct for performance through the years or the proportion of reviews for top journals.
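The arithmetic of the proposed metric can be sketched in a few lines. This is only an illustration of the idea described above: the journal names, impact factors, and review counts below are hypothetical, and the scoring function is an assumption about how the sum would be computed, not a published formula.

```python
# Sketch of the proposed referee metric: for each reviewer, sum
# (number of reviews completed for a journal) x (that journal's
# impact factor). All names and numbers here are hypothetical.

def referee_score(reviews, impact_factors):
    """reviews: dict mapping journal name -> reviews completed.
    impact_factors: dict mapping journal name -> impact factor."""
    return sum(n * impact_factors[journal] for journal, n in reviews.items())

# Hypothetical journals spanning the prestige range discussed above.
impact_factors = {"Journal A": 30.0, "Journal B": 4.5, "Journal C": 1.2}

# One review for a top journal vs. steady reviewing for mid- and
# low-rank journals: the latter can accumulate a comparable score.
print(referee_score({"Journal A": 1}, impact_factors))                  # 30.0
print(referee_score({"Journal B": 8, "Journal C": 3}, impact_factors))  # 39.6
```

The second call illustrates the article's point that frequent reviewing for mid-rank journals, rather than rare invitations from the top journals, would most likely drive an individual's score.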

Assigning a specific score to evaluate a scientist’s contribution as a manuscript reviewer should encourage scientists to improve their standings, which can be done by more reviewing or by being asked to review by higher impact journals. Editors can then exploit this to improve their stable of referees. Academic deans and other administrators, obsessed with the quantitative, will latch on like flies onto road kill. The result would be competition for opportunities to review rather than competition among editors for a limited number of able and willing reviewers.

Of course, such a system could be gamed, but editors could choose not to count reviews unless they reach a certain standard of excellence, while at the same time taking care not to discourage researchers from reviewing at all for their journal.

We can continue to bemoan the state of reviewing, and dream up sticks with which to beat reviewers into helping, or we can come up with carrots. The system I suggest here is cheap and appeals to both our better and worse angels, motivating researchers with the carrot that matters most: recognition for their standing and for their contributions.

David Cameron Duffy is a frequent referee for a variety of journals and is the former editor of Waterbirds. He is an ecologist at University of Hawaii at Manoa who works on seabirds and on perturbations in natural ecosystems. An earlier version of this commentary appears on Ecolog-L.

David Beebe | July 22, 2013

This is an interesting concept. I agree that rewarding reviewers is the way to go, but, as editor of a mid-level journal and a reviewer for journals at all levels, I don't think the "R-score" suggested in this article is the best solution. Quality reviewing is needed at every level. Excellent reviews can increase the quality of manuscripts and elevate the standing of a lower-ranked journal. In fact, better reviews may be more valuable at lower- or mid-level journals than at the highest level. Speaking from my own experience, I am more likely to say "yes" to a request to review at a top-ranked journal, so there may be less reason to provide greater reward for this service.

At the journal for which I am the editor, I've instituted a rating system for "Exceptionally Good" reviews. We've always scored reviews (anonymously) as "adequate," "below average," or "review not returned." These ratings were available to editors when they selected reviewers, but they had no impact on the reviewers themselves. Now, when a reviewer goes "above and beyond" with their review, providing special insight or making especially thoughtful suggestions to improve the manuscript, they receive a special email from me thanking them for their service to the journal and to science in general. It is too early to know whether this policy encourages reviewers to agree to review the next time they are asked, as the policy is less than a year old. However, I received one of these special emails for a paper I reviewed for another editor at the journal and, even though it was signed by me, it was the high point of my week. 

Reviewers should be rewarded, but we don't need a new scoring system to do so. A "thank you" is always appropriate, but a sincere acknowledgment of exceptional service might influence a very good reviewer for years to come. 
