Many say the peer review system is broken. Here’s how some journals are trying to fix it.
Twenty years ago, David Kaplan of Case Western Reserve University had a manuscript rejected, and with it came what he calls a “ridiculous” comment. “The comment was essentially that I should do an x-ray crystallography of the molecule before my study could be published,” he recalls, but the study was not about structure. The x-ray crystallography results, therefore, “had nothing to do with that,” he says. To him, the reviewer was making a completely unreasonable request to find an excuse to reject the paper.
Kaplan says these sorts of manuscript criticisms are a major problem with the current peer review system, particularly as it’s employed by higher-impact journals. Theoretically, peer review should “help [authors] make their manuscript better,” he says, but in reality, the cutthroat attitude that pervades the system results in ludicrous rejections for personal reasons—if the reviewer feels that the paper threatens his or her own research or contradicts his or her beliefs, for example—or simply for convenience, since top journals get too many submissions and it’s easier to just reject a paper than spend the time to improve it. Regardless of the motivation, the result is the same, and it’s a “problem,” Kaplan says, “that can very quickly become censorship.”
“It’s become adversarial,” agrees molecular biologist Keith Yamamoto of the University of California, San Francisco, who co-chaired the National Institutes of Health’s 2008 working group to revamp peer review at the agency. With the competition for shrinking funds and the ever-pervasive “publish or perish” mindset of science, “peer review has slipped into a situation in which reviewers seem to take the attitude that they are police, and if they find [a flaw in the paper], they can reject it from publication.”
“When it comes to journals and publications, I’m highly skeptical that [the peer review] process adds much value at all,” adds Richard Smith, former editor of the British Medical Journal, who has written extensively about peer review. “In fact, it detracts value because it wastes a lot of time of a lot of people,” he says. “There’s lots of evidence of the downside of peer review, and very limited evidence of the upside.”
Now, scientists and editors are taking alternative approaches to tackle some of the pervasive problems with traditional peer review and put the “scientific” back into scientific publishing. They include enabling authors to carry reviews from one journal to another, posting reviewer comments alongside the published paper, or running the traditional peer review process simultaneously with a public review.
“We thought that it’s time to change the atmosphere of how we communicate scientific knowledge,” says Idan Segev, a co-founder of Frontiers, one of a handful of journals cropping up that aim to better this system that most consider essential to the scientific community.
Reviewers are biased by personal motives
Solution: Eliminate anonymous peer review (Biology Direct, BMJ, BMC); run open peer review alongside traditional review (Atmospheric Chemistry and Physics); judge a paper based only on scientific soundness, not impact or scope (PLoS ONE)
One of the most hotly debated aspects of peer review is the anonymity of the reviewers. On the one hand, concealing the identity of the reviewers gives them the freedom to voice dissenting opinions about the work they are reviewing, but anonymity also “gives the reviewer latitude to say all sorts of nasty things,” says Kaplan. It also allows for the infiltration of inevitable personal biases—against the scientific ideas presented or even the authors themselves—into a judgment that should be based entirely on scientific merit.
“I believe strongly [that] in the end, all life is on the record,” Smith says—“you should stand by what you say, and you should put your own name on it. It makes me uncomfortable that science has moved away from that.”
But several journals, including Biology Direct, where Kaplan is an editor, have decided to eliminate anonymity from the peer review process altogether. “Under the Biology Direct model, everything is transparent, and everything is in the open,” says Eugene Koonin, one of the journal’s editors-in-chief. Authors are responsible for choosing their own reviewers from the journal’s extensive editorial board of more than 200 scientists, and must find three willing reviewers for their manuscript to be considered. This process eliminates “the potential for irresponsibility in the anonymous approach to peer review,” Koonin says, adding that upon acceptance of a paper “the reviews themselves are published alongside the paper for everyone to read.”
BMJ journals and many of BioMed Central’s publications also eliminate anonymity of their reviewers from the get-go. “We try to have a fully transparent, open peer review system,” says Melissa Norton, editor-in-chief of the BMC series journals, about 40 of which, like Biology Direct, reveal the names of the reviewers up front, as well as the reviews themselves upon publication. The lack of anonymity hasn’t appeared to hurt the journals—BMJ’s latest impact factor approaches 14, BMC Biology’s rose from 4.7 in 2008 to 5.6 in 2009, and Biology Direct scored a respectable 3.3 after just 3 years of operation.
But many argue that eliminating anonymity could have pitfalls of its own. “I think at some level there should be this transparency to the production process,” says Peter Binfield, publisher of PLoS ONE, where reviewers have the option of revealing their identities. “The question is whether it puts off reviewers from being as frank with their comments, or even [from] review[ing a manuscript] in the first place.”
Indeed, a 1999 study published in BMJ showed that asking reviewers to consent to being identified had no effect on the quality of the review, the recommendation regarding publication, or the time taken to review, but it did increase the likelihood that reviewers would decline to review. As such, even one of the journals “experimenting” with peer review has maintained optional anonymity. Atmospheric Chemistry and Physics runs an open public review alongside its traditional peer-review process, but reviewers can opt out of revealing their identity.
“We find this very important,” says Ulrich Pöschl of the Max Planck Institute for Chemistry in Germany, the journal’s initiator and chief executive editor. In addition to the “danger of losing out on critical opinion,” he says, there is the added concern that reviewers will be afraid to reveal their ignorance, making them hesitant to question the manuscript at all, even about valid issues. “As a referee, you cannot be expected to really work yourself through all the details and all the background literature” for each new manuscript you review, Pöschl says. “So when you pose critical questions, some of them may be stupid questions.” The option of anonymity therefore allows reviewers to ask questions without fear of embarrassment.
Frontiers journals are trying to find a balance by maintaining reviewer anonymity throughout the review process, allowing reviewers to freely voice dissenting opinions, but once the paper is accepted for publication, their names are revealed and published with the article. “[It] adds another layer of quality control,” says cardiovascular physiologist George Billman of The Ohio State University, who serves on the editorial board of Frontiers in Physiology. “Personally, I’d be reluctant to sign off on anything that I did not feel was scientifically sound.”
The scientific community appears to be accepting Frontiers’ approach: Frontiers in Neuroscience, which publishes reviews based on the manuscripts published in the specialty journals, published nearly 2,000 papers in the last year—second in the field only to the Journal of Neuroscience.
An alternative way to limit the influence of personal biases in peer review is to limit the power of the reviewers to reject a manuscript. “There are certain questions that are best asked before publication, and [then there are] questions that are best asked after publication,” says Binfield. At PLoS ONE, for example, the review process is devoid of any “subjective questions about impact or scope,” he says. “We’re literally using the peer review process to determine if the work is scientifically sound.” So, as long as the paper is judged to be “rigorous and properly reported,” Binfield says, the journal will accept it, regardless of its potential impact on the field, giving the journal a striking acceptance rate of about 70 percent.
“The peer review that matters is the peer review that happens after publication when the world decides [if] this is something that’s important,” says Smith. “It’s letting the market decide—the market of ideas.”
This approach has also proven successful, with PLoS ONE receiving its first ISI impact factor this June—an impressive 4.4, putting it in the top 25 percent of the Biology category. And with a 6-fold growth in publication volume since 2007, Binfield estimates that “in 2010, we will be the largest journal in the world.” Since its inception in December 2006, the online journal has received more than 12 million clicks and nearly 21,000 citations, according to ISI.
Peer review is too slow, affecting public health, grants, and credit for ideas
Solution: Shorten publication time to a few days (PLoS Currents Influenza); publish a preliminary version during review (Atmospheric Chemistry and Physics)
Another common frustration among authors is the lengthy time delay between submission of a manuscript and its publication. It can take upwards of a year after submission to see one’s paper in print—and that’s if it’s accepted the first time around. “Now, let’s say you have to submit it 5 times,” Kaplan muses—“the delay is significant.” That can be a major problem, for example, when “you’re waiting to resubmit a grant, and you need that publication,” he adds.
It can also be a serious issue in fields that are advancing extremely rapidly, particularly when the results of such research hold sway over public health decisions. “When you’re dealing with a situation of public health, you really need research results to be communicated as rapidly as possible in order to accelerate the research process,” says Mark Patterson, director of publishing at PLoS. Inspired by last year’s H1N1 pandemic, the publisher launched PLoS Currents Influenza in August 2009. Thanks to the journal’s unique review process, it has reduced the lengthy time to publication from several months to just a few days, allowing it to publish 35 papers within its first 3 months (it has now published more than 60 overall).
The review process essentially amounts to moderation by expert researchers—the “gatekeepers of content,” Patterson says. “If they feel it is appropriate, it’s immediately published at Google Knol and archived in PubMed Central.” The journal is managed by a small group of just 20 to 25 people, he says, because “you’re dealing with a subject that’s reasonably well focused. [For] every contribution that comes in, there’s going to be at least one or two people in your group of moderators who are going to be able to assess the content properly, [and] make a decision about whether or not the work is publishable.”
This speedy process is also facilitated by the technology platform that PLoS Currents uses, Patterson adds—an authoring tool known as Google Knol. Not too dissimilar from blog-writing programs, such as WordPress or Blogspot, Knol allows authors to write, edit, and submit their articles directly to the journal online. This system also allows authors to go back after the paper is published to clarify a point or submit additional information. “It has to go through the moderators in order to be accepted,” Patterson says, but “it is a very easy process, [and readers] can see all the version history,” with each revision accessible and citable separately.
The Journal of Biology, now part of BMC Biology, has taken a less extreme approach to expediting the publishing process. While the journal still employs the peer review process as usual for the first round, after that, authors can bypass a second review, opting to publish their revised paper without the reviewers’ okay.
A handful of other journals have taken a different tactic altogether to tackle the problem of publication time lags—keep the traditional peer review process but first publish a preliminary version of a submitted paper. Atmospheric Chemistry and Physics, launched by the European Geosciences Union (EGU) in 2001, along with the 10 or so sister journals that have subsequently been launched by the EGU, employs a “two-stage” process of publication and peer review, concurrent with an interactive public discussion. After a quick prescreening by one of the journal’s expert editors, a submitted manuscript is immediately published on the journal’s website as a “discussion paper,” and is available for anyone to see and comment on for 8 weeks. At the same time, the manuscript is passed on to referees who are familiar with the subject, and their comments (for which they can claim authorship or remain anonymous) are also posted alongside the discussion paper, public comments, and authors’ replies. The manuscript can then be accepted for publication, at which point a revised paper is published in the main, open-access journal.
The goal is to find that balance between “rapid publication on the one hand [and] thorough review on the other hand,” says Martin Rasmussen, managing director at Copernicus, the publisher of ACP and its sister journals. In addition, he adds, “discussion in the traditional way—as commentary after the publication—comes too late, so the results will not influence the pending peer review.”
In less than 10 years, the journal has achieved the highest impact factor in the field of atmospheric sciences (4.9), and one of the highest in the fields of geosciences and environmental sciences, while having one of the lowest rejection rates—about 10 to 15 percent. “I really sincerely hope that this public peer review and interactive open-access publishing will become a new standard of quality assurance because it will boost efficiency everywhere across the board,” says Chief Executive Editor Pöschl.
But not everyone is convinced. “Publishing first drafts eliminates the potential for censoring,” Kaplan says. At the same time, however, “it does not give a reader the confidence that the manuscript is worthy of a reader’s time.”
“Investigators are already overloaded with information, including published reports,” agrees Yamamoto. As a result, “discussion papers may be ignored by all except members of the community who are direct competitors of the authors, who then might submit unfavorable comments that influence disproportionally the final editor’s decision,” he says. “In my view, the pace and competitiveness of biological research make it unlikely that this system would be effective.”
Plus, with the paper, the reviews, and the public’s comments already available, “what exactly is the value of the journal?” wonders Smith, who advocates for a simpler process, such as posting an unpublished article to a Web site, and letting the world decide for itself—not unlike the arXiv electronic archive run by the Cornell University Library and used since 1991 to distribute new research in physics, mathematics, and other non–life science fields.
Too many papers to review
Solution: Recycle reviews from journals that have rejected the manuscript (Neuroscience Peer Review Consortium); wait for volunteers (Chemical Physics Letters, Journal of Medical Internet Research)
“The culture of having to publish means the burden of papers is just enormous,” Yamamoto says. And the burden of reviewing this glut of papers goes almost entirely unrewarded.
As a result, many reviewers may not put as much effort into the job as perhaps they should, especially with their own research and grant proposal deadlines looming. And once again, the high number of rejections that most papers go through prior to publication only makes the problem worse. “It’s pretty obvious to those on the editorial side that reviewers are getting overworked just because manuscripts are going to [multiple journals] before finally being accepted for publication,” says John Maunsell of Harvard Medical School, editor-in-chief of The Journal of Neuroscience.
In an effort to reduce the reviewer burden, Maunsell and several of his neuroscience colleagues launched the Neuroscience Peer Review Consortium in January 2008, which enables authors to submit reviews from one journal to another.
The consortium is based on the logic that if a paper is rejected simply because the work was not of high enough profile to justify its publication in a particular journal, then the evaluations generated during that review process are valid when the paper is resubmitted elsewhere. So, with more than 35 participating journals that range from Nature Neuroscience all the way down to highly specific, lower-impact publications, the Neuroscience Peer Review Consortium enables authors and editors to reuse reviews from journals that have previously rejected a manuscript. Authors simply have to include a note in their cover letter that it was previously reviewed at another member journal, then contact that journal to send over the reviews.
Unfortunately, the consortium has only had “modest success” so far, says its chair Clifford Saper of Harvard Medical School, affecting just 5 percent or less of the papers published by its member journals. One reason may simply be that authors are not aware that the consortium exists. Alternatively, they may not be happy with the initial reviews they received and want a fresh start.
It’s not just the authors who hesitate to take advantage of the consortium—many of the consortium’s journals are also hesitant to use the reviews, especially those that are unsigned, Saper says. The Journal of Comparative Neurology, for example, where Saper is editor-in-chief, uses only about half of the reviews they receive through the consortium.
Still, 5 percent of the 4,000 papers submitted each year to The Journal of Neuroscience alone is 200 papers whose shared reviews could help shave valuable time off the publication delay and reduce the number of reviewers the journal must recruit each year.
With the number of member journals continuing to rise, the consortium’s founders are hopeful that its impact will increase over time. Eventually, Saper adds, if things really pick up, members could establish a unified submissions system, which would make it easier to transmit papers and reviews between journals. But that is still years away, he admits, noting that they “can’t even get the [journal] editors to agree on a list of check boxes at the top [of the review forms].”
Another solution to the reviewing burden is to wait for reviewers to volunteer to vet a paper. Elsevier recently launched such a program at Chemical Physics Letters in an attempt to increase the efficiency and effectiveness of the review process. For 3 months starting this June, reviewers for the journal are choosing which articles they want to review, instead of having the editors choose for them.
The Journal of Medical Internet Research is also experimenting with a similar approach, allowing researchers to browse for manuscripts by authors who have agreed to open peer review, and add themselves as a reviewer for any paper that strikes their interest. The journal will then consider these reviews in addition to those from the author- and editor-selected reviewers when making its final decision on whether or not to publish the paper.
“The reviewer choice idea is interesting and could expand the pool of good reviewers, and potentially [shorten] the turnaround time of review,” Yamamoto says. In addition, this type of system could have the benefit of “having somebody very close to the subject reviewing the article, [which] might lead to a better review,” Smith adds. On the other hand, he warns, “it might be that competitors seize the opportunity to delay publication or rubbish the study.”
Alternatively, rather than reduce the burden on reviewers, some journals are simply looking to give them credit for all their hard work. As the system works currently, reviewers have no motivation for putting any significant amount of time or effort into manuscripts they’ve been sent to review. But by publishing the reviews alongside the papers, as Biology Direct, some BMC journals, Frontiers journals, and ACP and its sister journals do, reviewers can claim authorship for their work, incentivizing a thorough and thoughtful review.
Of course, eliminating the anonymity raises the perennial concerns that reviewers will hesitate to be honest or review the paper altogether. But “by having open peer review and the prepublication history”—including the original manuscript, reviews, and revised versions of the article—“reviewers have visible credit for the work that they’ve done,” BMC’s Norton says.
As a result, reviewers may be more motivated to uphold the integrity of the peer review process. “When you do peer review, you’re doing it as a service to the community first and foremost, not as a service to your own research interests,” Kaplan says. “We should have a community where serving the common good—being an honest and reliable and prompt reviewer—[is] very valuable.”
The original version of this article incorrectly stated that Frontiers in Neuroendocrinology was among the approximately 30 journals in the Frontiers in Neuroscience series. The journal is in fact an Elsevier product, which does not reveal the names of its reviewers. The Scientist regrets the error.