Refereeing or reviewing manuscripts for scientific journals is at the heart of science, despite its occasional imperfections. Reviewing is a check of quality, originality, impact, and even honesty for papers submitted to scientific journals. Unfortunately, referees are sort of like sperm donors: they are anonymous and their pleasure, if any, is in the process, not the result. No one acknowledges their contributions, except perhaps in small print at the back of a journal at the end of the year. Who in their right mind would want to referee? It takes lots of time to do well and gets no credit. The result is a refereeing crisis.
Various methods have been suggested to improve the situation. Some of these suffer from an approach that punishes reviewers for poor performance, rather than rewarding them for their hard work. Others involve complicated systems of payment or reciprocal altruism where reviewers...
I believe being asked to referee reflects one’s true standing in a field. Journal editors will always try to get the most knowledgeable and competent referees possible. I suggest we build on existing impact measurements to encourage enlightened self-interest. For authors, we already have measures of impact, such as the widely used h-index, which reflects publications and citations. For journals, we have impact factors. Such measures, for both journals and individual scientists, can be misused and should in any event be taken with a grain of salt, but I suggest that something similar would make for a more benevolent system of referee metrics.
Journals could produce an annual list of reviewers and the number of times each reviewed. The sum of the number of reviews by each referee, multiplied by the impact factor of the journals for which they reviewed, should reflect their standing in the field. Reviewing for a top journal like Science or Nature would earn the highest scores, but such opportunities are necessarily less common. Reviewing manuscripts submitted to mid-rank but more focused journals would therefore be more likely to drive individual scores. Finally, reviewing for low-ranking journals would not boost scores much but, as at present, could be considered a moral obligation to the scientific community. Additional indices could correct for performance over the years or for the proportion of reviews done for top journals.
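For concreteness, the proposed score could be written as follows (a minimal sketch; the notation is mine and not part of the original proposal): a referee’s annual score R would be

R = \sum_{j} n_j \times \mathrm{IF}_j

where n_j is the number of manuscripts the referee reviewed for journal j that year and IF_j is the impact factor of journal j. For example, five reviews for a journal with an impact factor of 4 and two reviews for a journal with an impact factor of 30 would give R = 5 × 4 + 2 × 30 = 80.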
Assigning a specific score to a scientist’s contribution as a manuscript reviewer should encourage scientists to improve their standing, either by reviewing more or by being asked to review for higher-impact journals. Editors could then exploit this to improve their stable of referees. Academic deans and other administrators, obsessed with the quantitative, will latch on like flies onto road kill. The result would be competition for opportunities to review rather than competition among editors for a limited number of able and willing reviewers.
Of course, such a system could be gamed, but editors could decline to count reviews that fall below a certain standard of excellence, while taking care not to discourage researchers from reviewing for their journal at all.
We can continue to bemoan the state of reviewing, and dream up sticks with which to beat reviewers into helping, or we can come up with carrots. The system I suggest here is cheap and appeals to both our better and worse angels, motivating researchers with the carrot that matters most: recognition for their standing and for their contributions.
David Cameron Duffy is a frequent referee for a variety of journals and is the former editor of Waterbirds. He is an ecologist at the University of Hawaii at Manoa who works on seabirds and on perturbations in natural ecosystems. An earlier version of this commentary appears on Ecolog-L.