
NIH reviewers praise new rules


February 23, 2010

While the transition to the new shortened grant applications at the National Institutes of Health (NIH; http://www.nih.gov/) and the corresponding review guidelines hasn't been completely smooth, reviewers who have participated in the first few rounds of funding under the new system generally support the changes.
Image: Wikimedia Commons
"I think it's an improvement over the old system," said linkurl:Karin Rodland,;http://www.pnl.gov/biology/staff/staff_info.asp?staff_num=5747 a researcher at the Pacific Northwest National Laboratory (PNNL) in Washington State and an NIH reviewer since 1998, "but I think there is a learning curve and until everyone recalibrates there may be a period of confusion." Reviewers still assess applications on the same five criteria (significance, investigator, innovation, approach, and environment), but the final judgment of a grant proposal is based on an "overall impact" rating that is related to, but not completely dictated by, those subscores. "I think that's where most of the confusion comes from for most people," said linkurl:Tony Hazbun;http://www.mcmp.purdue.edu/faculty/?uid=hazbun of Purdue University in Indiana, who experienced the new system in his first ever study section. Because there is no algorithm for calculating the overall impact score, "different people will weight different components differently," Rodland said. Under the old system, the subcriteria weren't rated, so there was only one score -- the priority score -- assigned to an application. Furthermore, one of the subcriteria is "significance," which some reviewers found difficult to distinguish from "overall impact," said linkurl:Steven Wiley,;http://emslbios.pnl.gov/id/wiley_h lead biologist for the Environmental Molecular Sciences Laboratory at PNNL and a member of The Scientist's editorial board. But in fact, the NIH intends the two to be quite different, Wiley said. Whereas "significance" refers to the importance of a project if every specific aim were completed successfully, "overall impact" is weighted by the likelihood of success, which will be influenced by the other subcriteria, such as approach, investigator, or environment. "The impact is basically [when you] take the significance and then say, 'Can they actually do it, and are they the right people to do it?'" Wiley explained. "Everything is kind of tied together and it depends on each case," Hazbun said. "You have to look at the whole application." Thus, even if a grant proposal has a really high significance score, if it is unlikely to be successful, either because the investigator is ill-equipped to complete the study or his or her proposed approach seems unfeasible, the impact would actually get a much lower score. Wiley noted that the NIH has made an effort to clarify these points, providing several examples of how specific grant applications should be scored, which has helped lift some of the confusion. "I've been on three different study sections using the new review criteria. The first time through was a little confusing. The second time through it was a little better. The third time I think we got it." Apart from such issues, reviewers say, the changes to the reviewing guidelines have actually increased the validity and utility of the reviews. For example, in the new system scoring is limited to whole numbers (1 through 9), whereas before, a reviewer could give a proposal a priority score anywhere from 1 to 5 in increments of tenths. But such a fine scale was counterproductive, Wiley noted. "[Y]ou cannot possibly discriminate grants on that kind of level," he said, adding that the process was effectively "a crapshoot after you pick the top 25%." While some may find this adjustment difficult, overall "it forces [reviewers] more to just look at the score [matrix]" and pick a number that is most appropriate, Hazbun said. 
In addition, Wiley added, the new system eliminates the "priority score games" that reviewers could play, bumping up scores slightly to increase the likelihood of funding. With the new 9-point scale, Wiley said, "there's no room to play games."

Another benefit of the new system is that reviewers are now required to justify their scores by listing strengths and weaknesses for each subcriterion, Wiley said. And if there are no weaknesses to name, that category must get a 1 -- the highest score possible. "You can't just say the environment is weak; you have to say why and be very specific about it," said Hazbun. "What they're trying to do is really tie in evaluative comments that are going to be constructive for the investigator." This specific feedback can help investigators improve unfunded applications in the next round, he said, adding that the process also helps him make his judgments as a reviewer.

Other changes to the review process specifically aim to cut down on the amount of time the process takes. For example, the written evaluations no longer include -- indeed, specifically exclude -- a written summary of the grant proposal. "To me, that was a waste of time and a waste of paper," Rodland said. The template now provided by the NIH gives bullet points where reviewers write a couple of sentences summarizing the strengths and weaknesses of each category, limiting them to just half a page. "[Some] people used to write 3-page reviews," Rodland said. But the new system and template "encourages succinctness," which is a good thing, she added.

Finally, rather than reviewing the grant proposals in random order, the study section starts with the highest-scored applications (based on preliminary scores) and works its way down the list. In addition to cutting the total number of grants the study section reviews orally -- low-ranked applications with no chance of getting funded won't even be discussed -- the new order helps the reviewers "recalibrate" their scores, Wiley said, by providing an excellent standard against which the others can be judged. "It was a very clever idea," he said. "I found this has been very, very helpful."

Overall, "I was impressed with the process," Hazbun said of his grant-reviewing experience. "It seemed to be working fairly well for me."
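As an illustration of the new discussion order, here is a short sketch (again in Python, with assumed names; the article does not specify the exact triage cutoff, only that low-ranked applications are not discussed, so the 50% fraction below is an assumption for the example):

```python
# Sketch of the new discussion order: the study section takes up applications
# from the best preliminary overall-impact score (1) downward, and the
# lowest-ranked applications are triaged without oral discussion. The 50%
# cutoff below is an assumption for illustration only.

def discussion_order(preliminary, discuss_fraction=0.5):
    """Return (discussed, triaged) application IDs, best scores first."""
    ranked = sorted(preliminary, key=preliminary.get)  # lower score = better
    cutoff = round(len(ranked) * discuss_fraction)
    return ranked[:cutoff], ranked[cutoff:]

scores = {"R01-A": 2.1, "R01-B": 5.4, "R01-C": 1.7, "R01-D": 7.9}
discussed, triaged = discussion_order(scores)
print(discussed)  # ['R01-C', 'R01-A'] -- strong applications set the standard early
print(triaged)    # ['R01-B', 'R01-D'] -- no chance of funding, not discussed
```

Discussing the strongest applications first gives reviewers a calibration point before the borderline cases come up, which is the "recalibration" benefit Wiley describes.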
Related stories:
- New NIH forms raise concerns (8th December 2009): http://www.the-scientist.com/blog/display/56209/
- How to change NIH peer review? (12th December 2007): http://www.the-scientist.com/news/display/54009/
- A New Paradigm for NIH Grants (August 2007): http://www.the-scientist.com/article/display/53412/

Comments

anonymous poster
February 23, 2010

I have done two rounds of reviews with the new system -- obviously not yet with the shortened grants. As a reviewer I found it difficult to provide any real guidance to the applicant. The older system, sans the bullet-point review, was much friendlier for providing real information. That said, I don't think it will much change the outcomes, except for resubmissions, which will now really be a crapshoot, without much of a target to fix.
anonymous poster
February 23, 2010

What's left unsaid in this piece is the new bullet-point critiques the applicants will get back. The new format makes the critique more difficult to decipher, and the applicant will have a hard time getting a handle on how to revise the application to get a better score. On top of that, there is only one chance to move the score up into the funding range. Taken together, the new system is hardly an improvement over the old one.
anonymous poster
February 23, 2010

The new review order is truly a clever idea that helps to calibrate the scores and make the whole process more consistent. I deliberately avoid the word "fairer" because I also see the downside of this new order. The more contentious and controversial applications are more likely to be discussed near the lunch break or late in the day, and people tend to get really tired by that point. The debate on those controversial grants may not be as vigorous as it used to be.
anonymous poster
February 23, 2010

The comments are so vague -- you just feel they don't like the idea or the investigator -- that they help little for revision. The scores seem randomly given, not based on the critiques.
anonymous poster
February 23, 2010

I am getting ready to submit a grant in the new shorter version. What became apparent is that if you are lacking strong publications to support the methods in the new proposal, you may be out of luck; there is no room for much in the way of methods. I wonder if the reviewers will really be able to concede that an individual has the technical ability to do the work without an exhaustive methods section. In the past, even in areas where I am strong and had publications, reviewers would still come up with ridiculous methods questions. Time will tell. The bottom line: with such tight funding, many good projects will continue to go unfunded. The review process, no matter how much it is revised, probably only works efficiently when funding is at the 25th percentile or higher.
anonymous poster
February 23, 2010

I agree with the comments that, from the applicant end, the new review system is not useful. The comments are vague and provide little meaningful guidance for improvement. Of course, being helpful is not the primary goal. I think this is mainly a mechanism to more efficiently winnow down the stack of applications in this time of tight paylines.
anonymous poster
February 24, 2010

If the reviewers' comments are "vague," whose fault is that? Surely the NIH is not instructing reviewers to write vague comments. To the contrary, the impression I got from the article's quotes is that the NIH is instructing reviewers to be specific!
Paul Ernsberger
February 24, 2010

As my review administrator pointed out, part of the goal was to get reviewers away from judging on trivial technical points such as "what buffer was used." The new process succeeds very well at this. I think a lot of the helpful comments for the applicant are exchanged verbally at the meeting. The administrator is expected to incorporate these into the critique for the applicant, but it is impossible for one person to do all this and run the meeting at the same time. Maybe there should be a recording secretary?
anonymous poster
February 24, 2010

The summer round will be the first time we will be dealing with the shorter application. It is hard to predict how things will play out in this first round, and I would suggest waiting a round or two to let the dust settle.

Because we have to evaluate feasibility, there is no way a reviewer will give a blank check to an investigator who lacks publications to demonstrate his expertise. So the obvious answer is to publish :-).
anonymous poster
February 24, 2010

Funding too many basic science (or even translational) projects won't help at the clinical level. NIH needs to figure out how to recognize applications that will bring "real" clinical impact to our patients. Study sections that just bring people together to critique methods, environment, and so on won't do what the clinic needs. It is not the review process, don't you understand, NIH? It is about getting reviewers together in a section that truly understands whether a particular proposal will bring clinical impact to help our patients. No scoring system matters when the applications you fund don't help patients as the end point. NIH really needs to wake up.
Brenda Guhl
February 24, 2010

Did somebody honestly just figure this out and call it clever (to start with the best and work down)?? Doesn't everybody who grades do this with their students' papers??? I always have, for the same reasons they give.
anonymous poster
February 24, 2010

Zeeman didn't think about patients when he described the physics of Zeeman splitting in a magnetic field and laid the foundation for magnetic resonance imaging (MRI) to come into clinical use one day.

Sydney Brenner and Robert Horvitz weren't thinking of how cells die in human disease when they worked out the apoptotic cell death pathways in the worm C. elegans.

Kary Mullis wasn't thinking how useful the polymerase chain reaction would become to genetic research and the many high-sensitivity clinical diagnostic assays used today.

Craig Mello, when he discovered RNA interference (RNAi) in C. elegans, wasn't thinking about the vast potential of this strategy as a therapeutic agent to modulate gene expression or its applications to understanding the genetics of human disease.

The ignorant resurrect the debate of what basic science research does for the patient every decade or so. The answer is a hell of a lot more than meets the eye, and the past is the best predictor of the future. NIH reviewers would be wise not to succumb to such a short-sighted mantra.
anonymous poster
February 25, 2010

I don't see what the reviewers are so gung-ho about! For the most part, reviewers today are looking at the leaves rather than the tree or the forest. They get lost in the smallest of method problems and don't know how to get out of them. What we need are reviewers with expertise in the biology of specific diseases in order to get the maximal benefit from the funded research.

I am not sure what innovation really means: is it the next new molecule discovered by genomic technology? Reviewers believe that every new molecule, however irrelevant, is novel just because it is new, and hence worthy of funding. I know of people who found four new molecules by gene array and have gotten an R01 for each of those molecules, basically with identical research plans.

The NIH review system needs a complete overhaul. Otherwise, please explain to me how a triaged grant got into the funding payline at the next revision, while a grant that just missed the payline got triaged the next round. Is that all the difference there is between potentially good science and bad science, or are the reviewers that bad?

February 25, 2010

"I know of people who found four new molecules by gene array and have gotten an R01 for each of those molecules, basically with identical research plans."

Hello anonymous,

I don't think the situation you describe necessarily reflects a problem with the reviewers. What if these people sent basically four identical research plans, except for the protein to study, to four different study sections? How are the reviewers supposed to know? I think this reflects a problem of oversight (supervision) by the Institutes in not taking a better look at investigators with multiple grants and making sure there is no overlap and/or repetition.

If the payline now is 10% (frightening!), there is another 10% of very good proposals being left out. The payline should be at least 20% for the NIH to make sure it is funding the best science.

The problem with funding basically identical or overlapping proposals from the same investigator(s) should not be difficult to correct with better coordination between the Institutes and the Center for Scientific Review (CSR). There is a database, so when the third grant from the same investigator gets to the NIH, there should be an appropriate mechanism to thoroughly evaluate it (for distinctiveness and impact in relation to the existing grants) before committing it to a formal review by a study section.

The NIH budget is what it is, and to increase the payline, extraordinary measures might be needed.
anonymous poster
March 16, 2010

It's curious. I've noted that The Scientist consistently shades an institutional bias in its spin on issues like this. Is this an editorial policy?

I have experience of review under the new directive. Some of its strengths are listed in the article, including its tendency to prompt reviewers to consider the contribution to the overall score from the five categories. However, important weaknesses include:

1. Superficial feedback to the grant writer.
2. Encouragement of the use of generic criticisms.
3. Loss of resolution on the overall impact score range.

Many feel the last of these issues is the most important. We're told to give a 1 only for grants that "walk on water," and virtually no one hands out a 9 -- it's just considered too mean. The upshot is that the new score range has 7 increments in practice -- far coarser than the 41 possible scores (1.0 to 5.0 in tenths) previously available to reviewers.

What happens when 10 grants are clustered at 2 at council and the payline will only fund 5? Many reviewers I've talked to suspect that the net effect of the new review policy is to hand more power to NIH program people and away from peer reviewers.
Donald Forsdyke
March 16, 2010

Cosmetic improvement only. The need is for radical reform such as that I proposed decades ago; see http://post.queensu.ca/~forsdyke/peerrev.htm and "Tomorrow's Cures Today? How to Reform the Health Research System," Harwood Academic, Amsterdam (2000).

Donald Forsdyke, Kingston, Canada
anonymous poster
March 16, 2010

My "review" of some of these bloggers is that they need to work on their grammar!
anonymous poster
March 16, 2010

The article gave comments from only 3 reviewers, 2 of whom are at the same institution and one of whom is a regular contributor to The Scientist. I would consider such a small and selected sample anecdotal and not worthy of the article's title.
Mark Weber
March 16, 2010

This makes me think about how few scientists there are left in America, and how little effort is going into supporting the ones we have left. The diminishing budget for science has had the effect of having scientists fight among themselves. What we end up with are good fighters, when what we need are good scientists.
