Peer Review Isn't Perfect...

...But it's not a conspiracy designed to maintain the status quo.

By Steven Wiley | November 1, 2008

During a break at an NIH review panel a few years ago, I was scanning the list of grant applications that were not being scored because they were considered uncompetitive (usually about 50% of all applications). One caught my eye because it was a resubmission from a famous scientist that I knew. I wondered why this accomplished scientist would have his grant summarily rejected twice.

First, I read the reviews of the proposal from its initial submission, which revolved around the technical feasibility of an approach he was implementing. All three reviewers mentioned the same central issue. I then read the response of the applicant, which began: "It is so typical of the status quo, that when their sacred cow is gored, they circle the wagons in defense…." Ouch! I immediately guessed why the grant was rejected the second time.

Although such an emotional response from a well-regarded scientist was surprising, it also made me uncomfortable. It reminded me of my own similar response a decade earlier to what I took as slights by a review panel. In my revised proposal, I was circumspect enough to try to cover my opinions, but I did not take their criticisms seriously, much to my detriment (I was rejected again). Recently, I dug up these old reviews and was chagrined to find nothing insulting from reviewers in them. Time has given me the emotional distance that I had sorely needed.

Serving on multiple review panels has also given me a better perspective. Rather than being self-serving ogres who are part of an elaborate conspiracy to thwart the ambitions of their fellow scientists and maintain the status quo, the reviewers I know are usually motivated by a desire to serve the community and to help fix a system that we all see as inherently flawed. Although some of these reviewers do inadvertently contribute to problems with peer review, it is usually by being too nice rather than too critical. Instead of telling an applicant that their proposal is hopeless, they are far more likely to suggest ways to make it better.

During peer review, grant applications are not judged in isolation, but as a group. By definition, half of all applications are below average. Although most scientific reviewers can agree on why a proposal is important and exciting, it is far more difficult to explain why we don't like the others. The communication problems that plague many applicants in trying to describe the importance of their research also afflict many reviewers in trying to explain the converse.

Good reviewers will take the time and effort to analyze bad proposals and try to provide constructive criticism that can be used to make them better. Some reviewers struggle to describe the problems with an overall poor proposal, such as its organization, focus, or significance. Somehow, it sounds unscientific to criticize a grant for being unoriginal, vague, and boring. Reviewers also know that blunt and specific criticisms will be interpreted as words from Satan. Thus, they usually target obvious technical issues or problems with scope.

Any system will seem unfair when a trivial technical point becomes a major criticism. Fortunately, NIH's proposed reforms in peer review should help by providing a much more explicit and uniform process for scoring grant applications.

But until then, does this mean that trying to interpret the critiques of rejected grants is hopeless? Not at all, but it does help to take a scientific approach when reading them. Just as results from lab experiments provide clues to an underlying biological process, reviewer comments are also clues to an underlying reality (they did not like your grant for some reason). For example, if all reviewers mention the same point, then it is a good bet that it is important and real.

It's okay to get indignant about our ideas being rejected. We should feel passionate about our proposals. But it is counterproductive to consider them personal assaults. I now let negative reviews sit for a couple of weeks, then pretend that they were written by my best friends. This helps me see the truly useful comments that will help my proposal the next time around.

Steven Wiley is a Pacific Northwest National Laboratory Fellow and director of PNNL's Biomolecular Systems Initiative.

Comments

Joan Roughgarden

November 4, 2008

This article is not accurate. Reviews and even panel summaries from the NSF, with which I am more familiar than the NIH, frequently contain ad hominem personal comments. Research that is funded must be conservative in its methods and extensional relative to existing knowledge. Risky methodology or destabilizing hypotheses don't stand a chance, despite recent interest from NIH and NSF leadership in "transformative" research.
Gopinathan Menon

November 4, 2008

The impact of the human factor (who the reviewer is and, occasionally, whose paper they are reviewing) on the peer-review process cannot be overestimated. Reading some insightful comments, such as "On Reviewmanship" written by Albert Kligman a few years back (J. American Academy of Dermatology), has helped me. His points: look for the positive, tell them how they can improve it, and above all, enjoy the science.
DH Stevans

November 4, 2008

Any scientist who may be interested in experiencing first-hand the anguish of peer review should volunteer to judge a regional or national science fair (e.g., see www.njrsf.org or ISEF). After 20+ years of observing panels with many highly credentialed judges from universities and industry, the criteria for fairness are still elusive. Of course, the most important criterion is the double-blind feature: any knowledge of the experimenters' names or affiliated organizations immediately disqualifies a judge. Obviously, judges cannot advise on their projects, or know anyone who has done so.

Even if this self-imposed vetting is perfectly successful, and even if all personal bias is checked at the door, it is often difficult to convince most highly qualified (and equally opinionated) professionals to give a fair chance to a young experimenter whose work or method is unconventional.

In professional peer review, on the other hand, the panelists must have a deep understanding of the areas that they review! It's highly likely that they know either the experimenters or at least their affiliated organizations. How can objectivity be maintained here, even in the most altruistic panelist, when the subject probably touches on a lifetime passion for advancing the field? In my experience, this is a fatal design flaw in the peer review system, and those who experience bias have a perfect right to complain loudly.

Isn't it more honest to admit the probability of bias? Are bias complaints routinely submitted, without prejudice, to an ombudsman or other disinterested third party? I believe that this procedure has been proposed fairly regularly in the journals. Is it being implemented properly or just given lip service?

--DH Stevans
Roland Nardone

November 4, 2008

As the cell line authentication momentum increases, peer reviewers will be asked to determine whether cell lines are authenticated or are among the 20% of cell lines that are cross-contaminated or misidentified. For example, Notice NOT-OD-08-017, added to the NIH Guidelines for Research in Nov. 2007, calls on intramural and extramural researchers and peer reviewers to be more diligent regarding the authenticity of cell lines. The journals BioTechniques and Cell Biochemistry and Biophysics now require an author's statement. Most peer reviewers are unaware of the crisis and of the identity of the "bad actors". A support system to assist and advise peer reviewers on cross-contamination and methods for cell line authentication needs to be established ASAP. An ad hoc group of concerned scientists (rlnard@verizon.net) is prepared to assist.
Ruth Rosin

November 4, 2008

If a scientific journal is a refereed journal, this means that articles submitted to the journal for publication are subjected to peer review. The reviewers are usually scientists well-recognized in their field, which means that they naturally support the accepted paradigms in that field. Consequently, they are bound to be the last ones to recognize the validity of an attempted revolution against any of their revered paradigms.

In that case, even though the reviewers may mean well, an article that supports a fully justified attempted revolution hardly stands a chance of passing through their "barrier".

I write this as a veteran of the opposition to v. Frisch's famous honeybee "dance language" (DL) hypothesis, which he first published in a scientific journal in 1946, as presumably already fully and properly experimentally confirmed. His sensational DL hypothesis met with initial skepticism, but soon became a revered ruling paradigm, which earned him very many tokens of admiration and prizes, including the Nobel Prize in 1973, 6 years after Wenner and his team launched their first, fully justified opposition to the hypothesis in Science, which is definitely a respectable, refereed journal. They were soon "rewarded" by being turned into pariahs! (Wenner's grants for honeybee research were not extended, which obliged him to close down his entire honeybee research program. You can read about it in the 1990 book by Wenner & Wells, "Anatomy of a Controversy: The Question of a 'Language' Among Bees".)

The honeybee DL controversy is still going on, even though many, including serious scientists, "know" that the controversy was resolved long ago in favor of DL supporters, and that DL opponents are "beating a dead horse".

Nothing is further from the truth! A careful analysis, including an examination of v. Frisch's earliest work on honeybee recruitment, suffices to show that his sensational DL hypothesis was a stillborn hypothesis, rooted in outright scientific fraud, committed with the noble but utterly misguided belief that the DL v. Frisch attributed to honeybees simply had to exist, to provide an adaptive value to honeybee dances, and thus avoid a severe crisis in the theory of evolution.

The results v. Frisch obtained in his first study on honeybee recruitment (published in a very extensive German summary in 1923, and in a brief English summary in 1937) fully justified his initial conclusion that honeybee recruits use only odor, and no information about the location of any food! Moreover, the results already grossly contradicted the expectations from his later, sensational DL hypothesis, long before its inception (in terms of the expectations from round dances). After the inception of that sensational DL hypothesis, v. Frisch did not fail to mention that he had initially believed that recruits use only odor, but he "eliminated" the results that justified that conclusion. In his 1967 definitive book on the honeybee DL (translated from the original German edition of 1965), he substituted, instead, the results of new tests with round dances (and a drastically different experimental design), actually done in 1962. This time the results fit the expectations from his sensational DL hypothesis.

The "elimination" of the already repeatedly published early results fully qualifies as an act of outright scientific fraud. The substitution of results obtained with a drastically different experimental design fully qualifies as an attempted "cover-up" for the earlier act of scientific fraud.

The "cover-up" succeeded for several different reasons. I already noted above the misguided scientific reason. On top of everything, the world-shaking events of WWII intervened between the publication of v. Frisch's first study on honeybee recruitment and the publication of his sensational "discovery" of the honeybee DL.

But getting through the "barrier" of peer reviewers of the prestigious scientific journals is still almost impossible for DL opponents!

The honeybee DL controversy is not simply a controversy about honeybee behavior. Instead, it constitutes the most important reflection of a much more basic controversy that concerns the very foundations of the whole field of behavioral science, and even biology itself, i.e., the problem of whether genetically predetermined individual traits (known as "instincts" in behavior) exist at all, and what we should incorporate instead if we discard the whole "instinct" concept.

Major scientific revolutions may not be very common in science. But, in terms of the depth and extent of their effects, they undoubtedly constitute the most important events in science. And here, well-meaning reviewers can pose an almost insurmountable obstacle to the proper progress of science!
Nejat Duzgunes

November 26, 2008

In an Opinion article in the April 12, 1999 issue of The Scientist, I listed some of the problems with NIH peer review, given below. Not much has changed since then.

"Members of NIH study sections are likely to be competitors in the same field as the grant applicant. They are unlikely to give the benefit of the doubt to an innovative research proposal that has not already been substantially pursued, particularly when they are struggling to procure research grants themselves.

When there are no experts on the review panel in the field of the proposal, reviewers are compelled to come up with some critique, regardless of scientific rigor or accuracy. Since such evaluations are not made by actual peers of the applicant, they are not proper "peer review."

Review panels have the freedom to criticize an application in any way they wish, with no requirement to provide specific published references to substantiate their claims. The panels and their members, however, are not accountable for their critique. Revised applications addressing the criticism entail the loss of a complete funding cycle, and can then be criticized on entirely different aspects.

Grant reviews at NIH are partial to projects that are favorites of study-section members and to areas of research that are "in vogue." Reviewers trained in a narrow area are often blind to alternative approaches and different fields of research, which may produce highly significant results.

Study sections use criteria including "probability of success" or "level of enthusiasm" when making funding decisions. The former criterion would tend to select projects proposing only incremental advances and reject exploratory research. The latter criterion is highly subjective and unscientific.

Review panels expect so much preliminary data to ensure the feasibility of the proposed project that the major part of a discovery needs already to have been made. This implies that NIH is not funding actual discoveries, but merely their further characterization. Feasibility studies must often be conducted with the support of previous grants that were awarded for other purposes. This countermands the detailed description of experiments required by NIH grant applications, since principal investigators are pressured to channel their efforts toward generating preliminary data in addition to, or instead of, performing the funded experiments.

Investigators spend a large portion of their time preparing grant applications. This is time not spent on research per se. In the case of currently funded scientists, much of this time is paid for by NIH grants in the form of salary support.

The period between application and earliest funding is an unacceptable 10 months, during which time many fields advance rapidly.

The review process consumes a significant portion of the reviewers' time, which is likely to lead to resentment and loss of objectivity, since this is time taken away from their own research activities. NIH officials publicly admit the lack of quality time devoted by the reviewers to this process. The administration of study sections and travel expenses for study-section members cost NIH a nontrivial sum that could be used for actual research.

Science progresses via the vision and dedication of individual scientists, as well as chance observations made by the prepared observer. The tedious description of what a scientist is going to do three or five years from now, as required by NIH grant applications, is an unrealistic exercise in bureaucracy and is contrary to the true nature of scientific research."

After more than a year of discussions and three conferences across the US, the NIH peer review system has not changed much. Biomedical scientists are stressed out over grants, and some are closing their labs, when they should be focusing on their research. NIH and all the "peers" are burdened with reviewing 80,000 grant applications a year. What a waste of our resources!

Do we really want this?
Richard Bentley

November 26, 2008

I can understand the process currently in place being used for grants, but in the case of submitted papers, I have always felt that, particularly with theoretical papers, there should be a two-track scheme: one track for papers accepted on the basis of peer review, and one track for conditionally accepted papers, which would stay in that track until they pass the reviewing process. Once a paper is posted, it establishes priority for the submitter regardless of any bias on the part of the original reviewers.
anonymous poster

December 2, 2008

I understand that authors of grant proposals get emotional while having their work reviewed and rejected, but what happens when grant reviewers get emotional or irrational as well, while hiding behind the veil of their anonymity? Sadly, this article touches on none of the points Nejat mentions in his posted message.
Fred Schaufele

December 3, 2008

Peer review is what the title suggests: an evaluation of the proposed research by other researchers. I much prefer this over any system that blindly 'grants' funds to an individual or institution to distribute.

As most reviewers will attest, it is exceptionally easy to spot the obviously outstanding grant application or the obviously flawed one. The outstanding applications commonly represent some 5-10% of all applications and consist of a novel, intriguing hypothesis in an area of significant interest for which the applicant has pulled together sufficient preliminary data to make an argument that is convincing to all three reviewers. The common complaint that applications with novel hypotheses are destined for failure is a myth. In today's funding climate, the competent applications that add incremental knowledge to existing knowledge are the ones that suffer the most and have difficulty reaching the 10-15 percentile threshold currently necessary for funding. I agree with the writer of the article that this feedback (good application, but it will never be funded unless you broaden your horizons) is often missing from the review.

It is equally true that 'concept' applications will fail. An application that asks for $2 million to spend over five years on a concept for which there is no supporting evidence certainly, and justifiably, will be scored poorly. There are simply far better applications in the competition. Yes, this is a competition, and the disappointed researcher should keep that in mind.

The lack of seed money to pursue intriguing, yet-to-be-substantiated concepts will always be a problem facing science. However, even those desiring to pursue such concepts should be aware that the vast majority of their ideas are likely to be wrong. If they are not aware of this, then I could argue they are unaware of the discovery process and likely would be poor stewards of the public funds kindly provided to them. One also has to be cognizant of the surprising number of technically questionable applications that are received.

It is up to society to establish how much money is made available for research. But when there are more competent applications than research money available, I still cannot envisage a better way of distributing those funds than by convening a broad panel of peers. The recent peer review commission supported by the NIH actively solicited comments from the population that it serves. I attended one of those sessions and found the discussion to be free-flowing and interesting. My own suggestions were heard (and rejected!). I have to acknowledge that the process was transparent and that the changes now being implemented are based upon a breadth of worthwhile suggestions made by many. Those changes are intended to improve some issues surrounding peer review. Ultimately, only unlimited funding would solve the problem that the majority of applicants will be disappointed.
Dov Henis

December 7, 2008

Peer Review And Innovation In Science

A. "The new face of peer review", in the "Funding Opportunities and Advice" forum, at
http://www.the-scientist.com/community/posts/list/298.page
refers to "changes to the peer review process".

B. However, the "peer review process" is the least disturbing aspect of "peer review" in science.

Samples of factual observations of other negative aspects of peer review in science:

- http://www.digibio.com/archive/SomethingRotten.htm
"A U.S. Supreme Court decision and an analysis of the peer review system substantiate complaints about this fundamental aspect of scientific research. Far from filtering out junk science, peer review may be blocking the flow of innovation, and corrupting public support of science."

- "Peer review stifles innovation, perpetuates the status quo, and rewards the prominent. Peer review tends to block work that is either innovative or contrary to the reviewers' perspective."

C. "Peer review" is, factually, a tool of a "Subversive Activities Control Board"

The most revolting corrupt aspect of peer review in science is its exploitation by the science establishment to tightly clamp its political and financial rule and control over everything, including the stifling of any shred of scientific innovation.

D. The corruption is not inherent in the tool, but in the nature of the science establishment

"Implications Of Science And Technology Evolution"
http://blog.360.yahoo.com/blog-P81pQcU1dLBbHgtjQjxG_Q--?cq=1&p=419

The peer review process is but a tool of the establishment. The corruption is not inherent in the tool, but in the nature of the science establishment.

As long as science and technology are considered and handled, conceptually and administratively, as one realm and one faculty, this corruption cannot and will not be overcome. This conception and attitude is THE CORRUPTION OF SCIENCE BY THE 21st CENTURY TECHNOLOGY CULTURE.

Dov Henis

(A DH Comment From The 22nd Century)
http://blog.360.yahoo.com/blog-P81pQcU1dLBbHgtjQjxG_Q--?cq=1
