Reviewing peer review
Scientists are asked to evaluate one another’s work in a variety of contexts, all of which can have lasting effects on both the reviewer’s and reviewee’s careers. From choosing which grants to fund to deciding which should be published (and where), peer review is integral to doing and communicating science. During a panel discussion on the future of the practice at the annual American Society for Cell Biology (ASCB) meeting in New Orleans this week, journal editors, representatives from federal funding agencies, and working scientists gathered to deconstruct some of the perceived problems with peer review and propose ways these issues could be overcome.
The panel agreed that, despite its problems, peer review is critical to protecting the independence of the scientific research enterprise. “We should not forget how important it is that the government lets us review each other,” said Anthony Hyman of...
Anonymity was a key talking point. The panelists argued for and against protecting author and reviewer identities at different points throughout the peer review process. Melina Schuh from the MRC Laboratory of Molecular Biology suggested that author anonymity could be useful for younger investigators who are not yet well established in their fields, giving them a fair shot at landing grants or publishing in high-profile journals. At the same time, the University of California, San Francisco's Bruce Alberts proposed that asking reviewers to identify themselves might put an end to the trend of "writing reviews that they would complain about when they get them on their own papers." And the session's organizer, EMBO Director Maria Leptin, noted that accountability and professionalism are in direct conflict with author and reviewer anonymity.
“Peer review is a luxury that we are given to control ourselves,” said Hyman. “We have to hang on to peer review. We have to do a better job [rather] than abandoning it.”
Assessing research assessment
Scholarly journals are most often judged by a single metric determined by one company. While some publications consistently hold the coveted top spots, other titles live and die by Thomson Reuters's annual Impact Factor rankings.
At last year's ASCB meeting, a group of scientists and journal editors came together to form the San Francisco Declaration on Research Assessment (DORA), an initiative that aims "to improve the ways in which the outputs of scientific research are evaluated," according to its website. Among DORA's goals is evaluating whether journal impact factors are an appropriate measure of research value.
During an ASCB panel dedicated to DORA, David Drubin from the University of California, Berkeley, said one of the group's major concerns was the "influence of impact factor on where people want to submit their work." Overall, he said, "we didn't like the impact [that] impact factors were having on scientists," particularly because this single metric does not alone "reflect the quality of product" set forth by journals. (Even Thomson Reuters agrees. "No one metric can fully capture the complex contributions scholars make to their disciplines, and many forms of scholarly achievement should be considered," the company said in a statement.)
The panel stressed that where scientists publish their work now seems as important as, if not more important than, what they're reporting. "Impact factor is a lousy tool for research assessment and should be abandoned," said eLife Executive Director Mark Patterson.
Of course, not everyone agreed. Some argued that because impact factors are currently the only widely accepted metric, they still play an important role. "It's hard to argue that a journal with an impact factor of 30 isn't doing something right," said Tom Misteli from the National Cancer Institute's Center for Cancer Research.