Is Peer Review Broken?


By Alison McCook | February 1, 2006


Submissions are up, reviewers are overtaxed, and authors are lodging complaint after complaint about the process at top-tier journals. What's wrong with peer review?

Peter Lawrence, a developmental biologist who is also an editor at the journal Development and former editorial board member at Cell, has been publishing papers in academic journals for 40 years. His first 70 or so papers were "never rejected," he says, but that's all changed. Now, he has significantly more trouble getting articles into the first journal he submits them to.

"The rising [rejections] means an increase in angry authors."
-Drummond Rennie

Lawrence, based at the MRC Laboratory of Molecular Biology at Cambridge, UK, says his earlier papers were always published because he and his colleagues first submitted them to the journals they believed were most appropriate for the work. Now, because of the intense pressure to get into a handful of top journals, instead of sending less-than-groundbreaking work to second- or third-tier journals, more scientists are first sending their work to elite publications, where they often clearly don't belong.

Consequently, across the board, editors at top-tier journals say they are receiving more submissions every year, leading in many cases to more rejections, appeals, and complaints about the system overall. "We reject approximately 6,000 papers per year" before peer review, and submissions are steadily increasing, says Donald Kennedy, editor-in-chief of Science. "There's a lot of potential for complaints."

Everyone, it seems, has a problem with peer review at top-tier journals. The recent discrediting of stem cell work by Woo-Suk Hwang at Seoul National University sparked media debates about the system's failure to detect fraud. Authors, meanwhile, are lodging a range of complaints: Reviewers sabotage papers that compete with their own, strong papers are sent to sister journals to boost those journals' profiles, and editors at commercial journals are too young and invariably make mistakes about which papers to reject or accept (see Truth or Myth?). Still, even senior scientists are reluctant to give specific examples of being shortchanged by peer review, worrying that doing so could jeopardize their future publications.

So, do those complaints stem from valid concerns, or from the minds of disgruntled scientists who know they need to publish in Science or Nature to advance in their careers? "The rising [rejections] means an increase in angry authors," says Drummond Rennie, deputy editor at Journal of the American Medical Association (JAMA). The timing is right to take a good hard look at peer review, which, says Rennie, is "expensive, difficult, and blamed for everything."

What's wrong with the current system? What could make it better? Does it even work at all?

TOO MANY SUBMISSIONS

Editors at high-impact journals are reporting that the number of submissions is increasing every year (see "Facts and Figures", the table below). Researchers, it seems, want to get their data into a limited number of pages, sometimes taking extra measures to boost their success. Lately, academia seems to place a higher value on the quality of the journals that accept researchers' data than on the quality of the data itself. In many countries, scientists are judged by how many papers they have published in top-tier journals; the more publications they rack up, the more funding they receive.

Consequently, Lawrence says he believes more authors are going to desperate measures to get their results accepted by top journals. An increasing number of scientists are spending more time networking with editors, given that "it's quite hard to reject a paper by a friend of yours," says Lawrence. Overworked editors need something flashy to get their attention, so many authors exaggerate their results, stuff reports with findings, or stretch implications to human diseases, since such papers often rack up extra citations. "I think that's happening more and more," Lawrence says. Indeed, in a paper presented at the 2005 International Congress on Peer Review and Biomedical Publication, a prospective review of 1,107 manuscripts submitted to the Annals of Internal Medicine, British Medical Journal (BMJ), and The Lancet in 2003 showed that the major changes demanded by peer review frequently included toning down a manuscript's conclusions and highlighting its limitations. In other words, inflated claims create extra work, overburdening reviewers even further.

Indeed, sorting through hype can make a reviewer's job at a top journal even more difficult than it already is. At high-impact journals, reviewers must judge whether a paper belongs in the top one percent of submissions from a particular field - an impossible task, says Hemai Parthasarathy, managing editor at Public Library of Science (PLoS) Biology. Consequently, editors and reviewers sometimes make mistakes, she notes, perhaps publishing something that is really in the top 10%, or passing on a really strong paper. To an outsider, this pattern can look like "noise," with some relatively weak papers accepted while stronger ones are rejected, inspiring rejected authors to complain. But it's an inevitable result of the system, she notes.

THE RELIGION OF PEER REVIEW

Despite a lack of evidence that peer review works, most scientists (by nature a skeptical lot) appear to believe in peer review. It's something that's held "absolutely sacred" in a field where people rarely accept anything with "blind faith," says Richard Smith, former editor of the BMJ and now CEO of UnitedHealth Europe and board member of PLoS. "It's very unscientific, really."


Indeed, an abundance of data from a range of journals suggests peer review does little to improve papers. In one 1998 experiment designed to test what peer review uncovers, researchers intentionally introduced eight errors into a research paper. More than 200 reviewers identified an average of only two errors. That same year, a paper in the Annals of Emergency Medicine showed that reviewers couldn't spot two-thirds of the major errors in a fake manuscript. In July 2005, an article in JAMA showed that among recent clinical research articles published in major journals, 16% of the reports showing an intervention was effective were contradicted by later findings, suggesting reviewers may have missed major flaws.

Some critics argue that peer review is inherently biased, because reviewers favor studies with statistically significant results. Research also suggests that statistical results published in many top journals aren't even correct, again highlighting what reviewers often miss. "There's a lot of evidence to (peer review's) downside," says Smith. "Even the very best journals have published rubbish they wish they'd never published at all. Peer review doesn't stop that." Moreover, peer review can also err in the other direction, passing on promising work: Some of the most highly cited papers were rejected by the first journals to see them.

The literature is also full of reports highlighting reviewers' potential limitations and biases. An abstract presented at the 2005 Peer Review Congress, held in Chicago in September, suggested that reviewers were less likely to reject a paper if it cited their work, although the trend was not statistically significant. Another paper at the same meeting showed that many journals lack policies on reviewer conflicts of interest: fewer than half of 91 biomedical journals surveyed said they have a policy at all, and only three percent said they publish conflict disclosures from peer reviewers. Still another study found that reviewers agreed on which manuscripts should be published only 37% of the time. Peer review is a "lottery to some extent," says Smith.

Facts and Figures
Statistics are from editors at Journal of the American Medical Association (JAMA), Public Library of Science (PLoS) Biology, Science, Nature, and the New England Journal of Medicine (NEJM). The Scientist also contacted editors at Cell, The Lancet, and the Proceedings of the National Academy of Sciences; all declined to comment.
JAMA
  Submissions: 6,000 major manuscripts in 2005, a doubling since 2000.
  Acceptance rate: Approximately 6%; close to two-thirds are rejected before peer review.
  Workload: All papers that are eventually accepted are first presented and discussed at a twice-weekly manuscript meeting, attended by the editor-in-chief, other decision-making editors, and statistical editors.
  Review criteria: In addition to scientific rigor, the journal triages submissions according to importance and to ensure the subject has general medical interest.
  Editor demographics: There are 25 decision-making editors; the age range is 40-70.

PLoS Biology
  Submissions: Doubled in the last six months.
  Acceptance rate: ~15%; this fluctuates wildly because the publication is so new.
  Workload: Each paper has a hybrid team of one academic and one professional editor. Most reviewers are asked to complete reviews within seven working days.
  Editor demographics: Editorial board contains ~120 members.

Science
  Submissions: 12,000/yr, increasing "at a rate of growth rivaling the rate of Chinese economic growth," says editor Don Kennedy.
  Acceptance rate: <8%; about half are rejected before peer review.
  Workload: Papers are reviewed by an editor and two members of the board of reviewing editors before peer review. Most reviewers are asked to return comments within one to two weeks.
  Editor demographics: Editorial board contains ~120 members (26 PhD editors). Median age: mid-40s.

Nature Cell Biology (NCB)
  Submissions: Increasing by 10% each year.
  Acceptance rate: All Nature journals have an acceptance rate of less than 10%.
  Workload: Each editor sees an average of 470 papers per year.
  Review criteria: Besides scientific rigor, the journals look for general interest (especially at Nature), conceptual advance, and breadth/scope of study.
  Editor demographics: NCB has four editors; Nature journals have no editorial boards. Average age: mid-30s.

New England Journal of Medicine
  Submissions: Received 5,000 submissions in 2005, as of press time; submissions increase 10% to 15% each year.
  Acceptance rate: 6% of submissions are eventually published; approximately 50% of papers are rejected before peer review.
  Workload: A deputy editor must approve the assigned editor's decision to reject before review.
  Review criteria: Other than scientific rigor, editors judge submissions according to "suitability and editorial consistency," says editor Jeffrey Drazen. For instance, the journal does not publish animal studies.
  Editor demographics: The average age of editors is in the mid-50s; the age range is 40-78. There are 10 deputy editors and 10 associate editors.


TRYING TO CHANGE

A number of editors are working to improve the system. In recent years, BMJ has required all reviewers to sign their reviews. All comments go to the authors, excluding only "very confidential information," says Sara Schroter, research coordinator at BMJ, who has studied peer review.

Studies of whether signed reviews improve the quality of what's sent back have shown conflicting results, and they detect only minor effects, Schroter notes. One report presented at this year's Peer Review Congress showed that, in a non-English-language journal, signed reviews were judged superior by two blinded editors on a number of factors, including tone and constructiveness. However, another study, published in BMJ in 1999, found that signed reviews were no better than anonymous comments, and that asking reviewers to identify themselves only increased the chance they would decline to participate.

Still, Schroter says the journal decided to introduce its policy of signed reviews based on the logic that signed reviews might be more constructive and helpful, and anecdotally, the editors at BMJ say that is the case. JAMA's Rennie says he doesn't need research data to tell him that signing reviews makes them better. "I've always signed every review I've ever done," he says, "because I know if I sign something, I'm more accountable." Juries are not anonymous, he argues, and neither are people who write letters to the editor, so why are peer reviewers? "I think it'll be as quaint in 20 years' time to have anonymous reviewers as it would be to send anonymous letters to the editor," he predicts.

But not all editors agree. Lawrence, for one, says he believes anonymity helps reviewers stay objective. Others argue that junior reviewers might hesitate to conduct honest reviews, fearing that negative comments could spark repercussions from more senior-level authors. At Science, reviewers submit one set of comments to editors, and a separate, unsigned set of comments to authors - a system that's not going to change anytime soon, says Kennedy. "I think candor flourishes when referees know" that not all their comments will reach the authors, he notes. Indeed, in another study presented at this year's Peer Review Congress, researchers found that reviewers hesitated to identify themselves to authors when recommending that a study be rejected. Nature journals let reviewers sign reviews, says Bernd Pulverer, editor of Nature Cell Biology, but fewer than one percent do. "In principle" signed reviews should work, he says, but the competitive nature of biology interferes. "I would find it unlikely that a junior person would write a terse, critical review for a Nobel prize-winning author," he says.

However, since BMJ switched to a system of signed reviews, Smith says there have been no "serious problems." Only a handful of reviewers decided not to continue with the journal as a result, and the only "adverse effect" reported by authors and reviewers involved authors exposing reviewers' conflicts of interest, which is actually a "good thing," Smith notes.

Another option editors are exploring is open publishing, in which editors post papers on the Internet, allowing multiple experts to weigh in on the results and incrementally improve the study. Having more sets of eyes means more chances for improvement, and in some cases, the debate over the paper may be more interesting than the paper itself, says Smith. He argues that if everyone can read the exchange between authors and reviewers, this would return science to its original form, when experiments were presented at meetings and met with open debate. The transition could transform peer review from a slow, tedious process to a scientific discourse, Smith suggests. "The whole process could happen in front of your eyes."

However, there are concerns about the feasibility of open reviews. For instance, if each journal posted every submission it received, the Internet would be flooded with data, some of which the media would report. If a journal ultimately passed on a paper, who else would accept it, given that the information's been made public? How could the journals make any money? There's an argument for both closed and open reviews, says Patrick Bateson, who led a Royal Society investigation into science and the public interest, "and it's not clear what should be done about it."

Many authors are now recommending that editors use (or avoid) particular reviewers for their manuscripts; and some research suggests this step may help authors get their papers published. An abstract at the last Peer Review Congress reported that papers were more likely to be accepted if authors recommended reviewers, or asked that certain reviewers not participate. Kennedy, for one, says he believes it's "perfectly respectable" for authors to bar reviewers, although he says he does not always adhere to authors' requests, such as occasions when authors in particularly narrow specialties submit an overly long list of reviewers to bar.

Lawrence suggests that, to ease the current publishing crunch, senior scientists should occasionally submit their studies to lesser journals. However, he says he's tried this tactic, and it "hasn't helped [his] career any." Consequently, there should be major changes in how work is evaluated, he says, so researchers are not penalized for publishing in second- or third-tier journals.

Anecdotally, Parthasarathy says this is already happening. In some cases, scientists under evaluation simply submit their top three papers, instead of a count of their high-impact publications. She adds that one of the purposes of open access (the founding principle of PLoS) is to change the all-importance of where people publish. If every scientist has access to papers, she says, they can judge a paper by its contents, not just its citation. "We have to get away from [the idea that] where the paper is published [is] the be-all and end-all," Parthasarathy says.

Despite the number of complaints lodged at peer review, and the lack of research to show that it works, it remains a valued system, says Rennie. Scientists sigh when they're asked to review a paper, but they get upset if they're not asked, he notes. Reviewing articles is a good exercise, Rennie says, and it enables reviewers to stay abreast of what's going on. Peer review "has many imperfections, but I think it's probably the best system we've got," says Bateson.

Experts also acknowledge that peer review is hardly ever to blame when fraud is published, since thoroughly checking data could take as much time as creating it in the first place. Still, Pulverer says he has seen reviewers work on papers to the point where they deserve to be listed as coauthors. "I think everyone in biology would agree that peer review is a good thing," he says. "I would challenge anyone to say it hasn't improved their papers."

Correction (posted February 9): When originally posted, this package of stories contained two errors. Due to a production error, the JAMA acceptance rate in "Facts and Figures" read approximately 55% rather than 5.5%. According to JAMA, the figure is "about 6%."

In addition, the related article "What about fast-track?" reported that the International Congress on Peer Review and Biomedical Publication happens every year. The Congress takes place every four years.

The Scientist regrets these errors.

Comments

anonymous poster | June 10, 2010

It is broken because, as the old saying goes: too many chiefs and not enough Indians, or too many cooks and not enough waitresses. The administration in almost all higher institutions is top-heavy. Someone's relative or friend needs a job, and there is always someone in admin (top level) who can create a position and, in turn, fire or reduce the staff that actually does the work.

I predict that within a few years, UNL will lose its land-grant status if they don't stop pushing research more than education.
anonymous poster | June 10, 2010

This article makes some very good points. What scares me most of all - and I use that word in an intellectual sense - is how much hubris is really out there. Most reviewers are honest and try very hard to do the best job they can, but I have been outraged and saddened over the years by some of the truly ignorant comments they have made on my - and my colleagues' - manuscripts and grant proposals.

I know what I know and what I don't know, and am as intellectually honest as I can possibly be; when I'm reviewing a paper that uses a technique I'm not an expert in, I make inquiries with colleagues who do know - without compromising the findings of the paper or the integrity and confidentiality of the review process. That way, I can judge the validity and limitations of the data.

What kills me most of all is the stupid push to publish in the top-notch journals when the material really does not belong there. I, for one, am sending my material to a really good journal that has the audience that I need to see our research, and not to the one with the highest citation index in my field - because that journal is not read by the clinicians that I need to educate about our findings. I'm sticking to my guns about that, and I hope that many of you out there will, as well.
anonymous poster | June 10, 2010

There are problems with peer review, and problems without it. First, peer review adds credibility to papers that are used by the lay public. Without peer review, pure political crap can seep into the literature and represent science and scientists. This can cheapen the value of science, as papers could be published showing, for example, that cigarettes are good for your health! This argues that some form of review is necessary so that papers representing science are indeed science. However, peer review clearly does not work well: Hwang created a famous career based on purely junk science, and peer review did not save science from that embarrassment. Interestingly, junk science usually gets caught by the science community when research is shown not to be reproducible, as was the case with Hwang. Thus, science is self-correcting, and that argues that peer review is not necessary except as an ongoing process of evaluating that which is published. In other words, science would likely thrive if there were no peer review, as long as all rebuttals to published papers were accepted. However, two bad things come from this: the literature would explode to huge proportions that only an electronic literature could conceivably withstand, and discreditable papers could be taken as solid science before other scientists have the opportunity to discredit the published work. Thus, if there is a loosening of peer review, there would need to be something protecting science from being cheapened by junk science, or by papers pretending to be science for political purposes.

Signed reviews? When I served as an editor for a highly respected ecological journal, I found that reviews were signed largely to win political favor among peers, and objectivity was sacrificed. Signed reviews were often a form of pandering among colleagues rather than helpful or honestly objective evaluations of a paper. I prefer unsigned reviews, as that is the only way to maintain objectivity. On the other hand, I'm aware that some reviewers will use the veil of anonymity to direct science in ways that are not objective; sometimes reviewers purposely try to harm authors by criticizing a manuscript simply because it does not agree with the reviewer's own work. Editors need to be careful to seek enough review information to recognize when such repugnant behavior occurs and to prevent its negative effects.
anonymous poster | June 10, 2010

Unlike the fun, but watered down, science articles intended for the lay public, most scientific journals cater to informed readers. Could it be that, though we need peer review, we don't trust the ultimate reader to be the final arbiter of the quality of the information? Shouldn't peer review be more of a "quality control" measure to assure the work isn't obviously ridiculous, or that it meets the credo of the publication, etc.? Isn't the current situation too supportive of what's "in vogue"?

"Publish or perish" has been given too much importance to be left solely to a reviewer. Isn't successful utilization, commercialization, and implementation of one's work perhaps more important?
anonymous poster | June 10, 2010

Peer review and journals are not to blame. It is scientometrics - namely, its use in funding evaluation - that distorts the system. It lends importance to easy-to-read papers and suppresses experimentally and theoretically complicated papers; see, e.g.:

http://www.chimie.ens.fr/Resonance/bibliometrics_1.pdf
http://www.chimie.ens.fr/Resonance/bibliometrics_2.pdf

But perhaps the comment "too many chiefs - too few Indians" identifies the essence of the problem. In present research, there is a lot of incremental development and too little breakthrough scholarship - a result of the mediocrity promoted by the prevalence of regulation and bureaucracy.
David Hill | June 10, 2010

When publishing was relatively expensive and could only handle a limited volume of work, it made sense to be more selective. At the same time, scientists never needed a 'contest' to determine which of their works was more 'important.' The 'scarcity of publishing resources' is no longer relevant in this era of electronic documents.

At the same time, input from credible reviewers with expertise in the area covered by a paper is valued by authors, and generally useful for improving the quality of published work. Screening by uninformed editorial staff, looking for the 'hype' value of published articles, is not at all useful, or needed.

Furthermore, modern technology supports the issuance of corrected or updated versions that greatly facilitate the quality of published work.

The appropriate solution is to first drop the 'contest' mentality that is encouraged so fervently in our society. Then the appropriate solution, as supported by technology, will be implemented. What I envision is immediate author posting of original and subsequent versions of a document, and the ability of reviewers to append their comments to these documents in an appropriate forum, at any time after they are posted. This would allow an author to incorporate suggestions into subsequent versions of a document, to the extent that this is useful.

For those who want to run a 'contest' to identify what they think are the 'best' papers in an area they feel competent to judge, I recommend operating a web site that links to original works for this purpose. The screening (or search) function does not have to be tied to content associated with a specific 'journal' name. So you can start, for example, a web page called 'Entomology Research' that links to the works that you think are most important. I can guarantee that no two review services would have the same list!
Ellen Hunt | June 11, 2010

I have had a recent good-to-excellent experience with peer review at a BMC journal. One of the reviewers in particular was simply the best I have ever had. He didn't work on the paper for me, but his suggestions were excellent, his comments were clear, and it was obvious he had done a painstaking review. I think it helped that the manuscript was one in which I stumbled upon something really interesting. I felt like I should have put this nameless angel in my acknowledgments, because the paper was so much better because of him. I don't always write as clearly as I think I have.

I have only had one really bad experience with peer review of publications. And in that case, I successfully appealed to the executive editor, who looked things over and decided I was right: the negative review was not warranted and was probably motivated by conflict of interest.

I also had a couple of lousy experiences in recent years with two mainline journals. The reviewers hadn't read well, which was obvious from comments and questions that did not make sense. In one of those cases, the editor (whom I knew a bit) wrote me expressing embarrassment at the poor quality.

I will also confess to doing something that I feel a little guilty about. Sometimes I submit something to a journal I don't intend to publish with, just to gather reviewer comments, so I can better gauge the waters when I do actually submit to my intended journal. I feel guilty about using them this way; it seems like a bit of an abuse of the system. I wonder how many others do this sometimes. In my defense, I have only done it when I have trouble getting colleagues to give me pre-submission informal reviews.
Steven Brenner | June 11, 2010

Peer review can be constructive, but it tends to slow things down in fields of rapid research, and for truly innovative ideas there aren't really any peers. Original ideas may be so original they are not accepted by peers or peer reviewers - and I don't mean stupid ideas, but original thinking. This is one place where peer review breaks down.

Another is the concept of "evidence-based" research, one of the newest ideas to come out of academia. Evidence-based works for maintaining the status quo, but research is supposed to be about original and innovative investigation. Evidence-based evaluation of much of the research and treatment for neurodegenerative diseases such as Alzheimer dementia simply doesn't work, since the real evidence is that there probably is not anything to all the research; otherwise people with dementia would be out of the nursing homes and living on their own or working. Sophisticated statistical analysis is used to show benefits for treatments which don't actually have any practical value in real-world environments.

Probably the best solution to publication is open access: publish everything electronically, and perhaps open assessment by reviewers could establish the value or credibility of the information. A lot of publications are intended eventually for development of products or processes which have market value. This is one of the drawbacks of open access: essentially, potentially proprietary information which individuals and companies are trying to keep quiet during economic development for future patent or market share.

For publicly funded research, I think results should be published either as the information is developed or as soon as practical after due processing and analysis. Information obtained through the public grant process inherently carries some responsibility to the public for access to that information. Right now, much research is conducted through the public grant process, and then the information is used for patent applications or business development by individuals and companies for profit - a practice which inherently steals from the public. I think open publication is much more desirable in this regard. Open publication can speed research and disseminate information more rapidly for the public good.

Thanks
RC Sihag | June 16, 2010

The peer review process seems offensive to those whose papers are rejected. There can be some abrasions; nonetheless, the reviewers are not always biased. Reviewers take great pains in pointing out the shortcomings of the research to be published, and in the majority of cases, the opinions of the reviewers are upheld. But it has also been observed that the editor of a journal can reverse the negative recommendations of the reviewers if the paper has real merit. I do not think that peer review is broken; in 99.99 percent of cases, it is working perfectly. We must have confidence in our scientific fraternity. Peer review is urgently required, and it must continue.
