California State University, Fresno, biologist Ulrike Müller received her worst peer review when she was a graduate student at the University of Groningen in the Netherlands. In the late 1990s, after submitting a paper about the dynamics of swimming fish to the Journal of Experimental Biology, she received an extremely short response—just a few lines long. “The person wrote that this paper was a missed opportunity because we didn’t invite him as a coauthor,” she says. “No suggestions. Just, ‘Sorry, this could’ve been a wonderful paper if only you’d asked me.’”
Müller’s PhD supervisor, John Videler, followed up, and the reviewer, who had hand-signed the review, asked Videler why he, the reviewer, hadn’t been invited to sit on Müller’s thesis committee. “For me it was just so shocking, making the peer review about professional rivalry when the main author is a junior scientist and [was] caught in this cockfight,” Müller says. “‘Could we please leave your egos out of this?’” she recalls thinking.
Most researchers remember a bad peer-review experience or two; issues range from reviewers who clearly did not read the manuscript to overly effusive, yet completely unhelpful praise. But there’s a growing desire in the scientific community for better, faster peer review. After all, receiving feedback from other researchers in an author’s field is one of the defining elements of scientific publication and key to ensuring quality in the scientific literature. “There is nothing like having your scientific argument tested by people who really know what’s going on to improve the way that you think about your science,” says Sarah Tegen, vice president of global journals development at the American Chemical Society (ACS).
Although the peer-review process involves multiple players—from authors to journal editors—there is now a range of efforts directed at improving the habits and skills of reviewers themselves, from changing the culture around reviewer anonymity and recognition, to training reviewers to provide better feedback.
Traditionally, peer reviewers are anonymous, meaning they are largely shielded from the consequences of writing negative or careless reviews. But in recent years, some journals have introduced alternative procedures that aim to make the whole process more transparent.
In June 2012, the biomedical and life sciences journal eLife opened for submissions, with cell biologist Randy Schekman of the University of California, Berkeley, as editor-in-chief. “We wanted to do something different,” he explains. “We wanted to take away the sometimes toxic atmosphere that surrounds the submission of anonymous peer reviews, where the reviewers are known to the editor who’s handling the paper, but are not known to each other.”
Unlike most reviewers, who see their fellow reviewers’ comments only after a paper’s publication, eLife’s reviewers join a private online forum, in which they learn the identities of their counterparts and can read and comment on one another’s reviews. At the end of the review process, published papers are accompanied by the initial decision letter, complete with excerpts of these reviews—individual reviewers are encouraged, but not obliged, to make their names public at this stage—plus responses from the author.
The advantage of this openness is twofold from a reviewing perspective. For a start, the public nature of the reviews throughout the process may help rein in bad behavior. “Because they know their name is going to be associated with it, I think it exerts a little more restraint in the sometimes very negative comments that people make,” Schekman says. “You can’t hide behind your anonymity here.”
What’s more, the option of collaboration among reviewers may improve the quality of the review itself. When a reviewer sees what other reviewers have said only after the decision has been rendered, “sometimes you think, ‘Well, that’s interesting. I hadn’t thought of that,’ or ‘No, he doesn’t know what he’s talking about, and I wish I’d had a chance to weigh in on this,’” says Schekman.
He acknowledges that breaches in confidentiality or power imbalances when junior and senior scientists are co-reviewers are possible. Nevertheless, the feedback from eLife peer reviewers has been positive overall. A 2016 survey of more than 1,000 scientists who served as reviewers for the journal found that 90 percent of respondents felt that reviewer openness in the consultation session is beneficial, and 95 percent said they believed that the process is valuable to authors.
Another issue influencing reviewer behavior is the lack of recognition of the huge amount of work that goes into peer reviewing, says Müller, who serves as an associate editor at Proceedings of the Royal Society B. “Peer reviewer fatigue is a real problem,” she explains. “I usually need nine names to get two to three reviews.” Without recognition for the work, positive incentives to take time out of busy schedules to serve as a peer reviewer may be minimal.
Publons, a New Zealand–based company focused on reviewer recognition, aims to address this issue. “We really see peer review as at the heart of the research ecosystem,” explains Jo Wilkinson, head of communications at Publons. “[We] work with researchers, publishers, and research institutions to turn peer review into a measurable research output.”
The company, acquired last summer by Philadelphia-based Clarivate Analytics, allows scientists to create a free online profile where they can maintain a record of their reviewing and editorial activities. Publons automatically verifies that researchers have completed reviews through partnerships with more than 1,400 journals or by contact with editorial staff and review receipts forwarded by users. From their profile, reviewers can download a customized record of their contributions for inclusion in job and funding applications, as well as promotion evaluations.
Publons also attempts to increase the motivation for, and the quality of, peer review through feedback. “Reviewers have actually told us that they want to improve, and that they crave feedback from editors about the quality of their work,” says Wilkinson. So the company created a feature where editors can rate the reviews they receive based on timeliness, thoroughness, clarity, and helpfulness. Top-scoring reviews receive an “Excellent Review” designation, represented by a gold star on a user’s profile.
Many reviewers seem eager for the recognition that Publons offers. More than 240,000 users from all over the world have created profiles and added records for more than 1.3 million reviews. As to whether the company’s strategies have actually improved peer review, initial investigations are promising. In a pilot study where Publons collaborated with 15 journals, offering reviewers recognition on Publons led to speedier turnaround on reviews, from 18 days pre-pilot to 15 days during the pilot. And after a collaboration between Publons and the American Society for Microbiology (ASM), reviewers for ASM journals reported that they both appreciated receiving Publons recognition and were subsequently more willing to review for ASM.
“Publons [is] developing pathways that acknowledge the work of peer reviewers, and I think that’s very important,” says Müller. “We need to make the service that we’re doing for our professional community as peer reviewers part of professional recognition.”
Even with these incentives, some reviewers may simply lack skills needed to produce a constructive review. “Few researchers have received peer-review training, despite being called upon to review hundreds, if not thousands, of papers throughout their career,” says Wilkinson.
To address this problem, Publons launched a course in May 2017 called Publons Academy. Composed of 10 online modules, the course covers everything from peer-review ethics to evaluating a manuscript’s methodology. Participants also work with a supervisor, such as their graduate or postdoctoral advisor, to write postpublication peer reviews to include on their Publons profile. Upon completion of the course, Publons connects new reviewers with an editor in their field from one of the company’s partner journals.
Researchers also have other online options for peer-review training. Since September 2017, Nature Research, part of Springer Nature, has offered a free online master class called Focus on Peer Review. The course covers everything from the role of the peer reviewer to innovations in the peer-review process in lessons that take about three hours to complete. “It’s a course designed for anybody,” says Victoria Pavry, head of publishing for researcher training at Nature Research. “No matter what type of journal they want to peer review for, we think it would be for them.”
ACS is also throwing its hat in the ring. Last August, the organization launched a free four-hour course called ACS Reviewer Lab that is also open to all researchers. The program covers the ethics of peer review, how to assess the significance and quality of the research, and how to write a coherent review. “We don’t get into a lot of specifics for chemistry, so just about anyone who is engaged in the peer-review ecosystem would benefit from this course,” explains ACS’s Tegen, who oversaw the course’s development. Once participants start, they have a month to complete it, and more than 300 researchers have done so already, Tegen says.
Meanwhile, the Genetics Society of America (GSA) just launched a members-only program providing real-world peer-reviewing experience for early-career researchers. Scientists starting out “get very uneven experience and training in peer review,” says Genetics Editor-in-Chief Mark Johnston of the University of Colorado Denver. “We wanted to provide a training that was more uniform and give them something more concrete.”
Last September, course leaders selected 36 participants—most of whom were postdocs—from hundreds of applications. The researchers received seven hours of peer-review training via phone conferences in November and December and, throughout 2018, editors will invite them as reviewers for manuscripts submitted to Genetics. Participants will write one review per quarter, receive feedback from the assistant editor overseeing the submission, and read the other referees’ responses, as well as the editor’s decision letter.
“[Participants] directly interact with the editors at Genetics, and they get individualized feedback from the editor on what it is that they did well and where they still have room for growth,” says GSA director of engagement and development Sonia Hall, who helped develop the course. “It sends a loud and clear message that the leadership of the journal and the Genetics Society of America respect [these early career scientists] as professionals, and that we’re confident in their abilities, and they should be too.”
These programs are so new that their effectiveness remains to be assessed. And despite optimism among organizers, it’s worth noting that related efforts have had little success in the past, according to University of California, San Francisco, emergency physician Michael Callaham, editor-in-chief of Annals of Emergency Medicine. Over the past two decades, he has tried a variety of strategies—from in-person training to direct mentorship from more-senior reviewers—to make new Annals reviewers better. After these interventions, he says, there was no difference in the actual review quality as evaluated by the journal’s editors.
Moreover, with the lack of data on the effects of current practices, it is still not clear exactly how peer review should be changed, Callaham adds. “We’re in such an early, primitive stage of understanding the whole peer-review thought process, which is pretty ironic when you think about the fact that it is the foundation of everything that’s done in science,” he says. “I totally believe this will be addressed someday, and we will look back on our current practices [and say], ‘Wow, how historically quaint.’ I think it will happen; I just don’t know when.”
Abby Olena is a freelance science journalist based in Carrboro, North Carolina.
TIPS FOR REVIEWERS
Be a mentor: Good feedback from reviewers can help authors become better scientists, even if their paper doesn’t end up being published, says Michael Callaham of the University of California, San Francisco. “Our job is . . . to help improve the literature that we get that’s going to be published and to educate and help the people that we don’t publish.”