Study: Peer Review Predicts Success

Scientists who evaluate National Institutes of Health grant applications often identify the projects that will have the biggest scientific impact, according to an analysis.

By | April 23, 2015

WIKIMEDIA, AREYN

The National Institutes of Health (NIH) peer-review scoring system, which is used to select grant proposals for funding, is an accurate predictor of how impactful proposed research will ultimately become, according to an analysis published today (April 23) in Science. Overall, applicants with the highest-scoring grants published the most papers, garnered the most citations, and earned the most patents, the researchers found.

“This is the most important science policy paper in a long time,” said Pierre Azoulay of the MIT Sloan School of Management who was not involved in the research. When it comes to peer review, “most of the pontifications that you hear—most of the anger, editorials, suggestions for reform—have been remarkably data-free. So this paper, as far as I am concerned, is really a breath of fresh air.”

“[As] it turns out,” he added, “the NIH is doing a pretty good job.”

The process by which NIH grants are applied for, reviewed, and awarded has come under scrutiny in recent years. Among the concerns is that the large investment of time and effort by both the applicants and reviewers reduces the time both can spend doing research.

Another concern is whether the peer-review process actually works. “There is very little prior research on how effective peer-review committees are at deciding which grant applications to fund, and yet that is the major mechanism by which science funding is allocated in the United States and internationally,” said study coauthor Leila Agha of the Boston University Questrom School of Business.

To evaluate the efficacy of NIH’s peer-review process, Agha and Danielle Li of Harvard Business School compared the scientific impact of more than 130,000 projects funded by R01 grants from across all of the agency’s institutes with the scores these projects received during review. Agha and Li assessed scientific impact according to the number of publications that acknowledged funding by the grant, citations to those publications, and patents that cited either the grant itself or a grant-funded publication. Overall, they found “the better the score that the [peer-review] committee had assigned, the more likely the grant was to result in a high number of publications, or in publications that are highly cited, or even . . . in research that ultimately gets patented,” Agha told The Scientist. “The results are suggestive that the committees are successfully discriminating even amongst very strong applications.”

This correlation persisted even after Agha and Li accounted for differences across applications, such as the year the grant was funded, or the principal investigator’s credentials—including publication history, institution, and prior funding history. This showed “that the intrinsic merit of a scientific idea is more valuable than the actual person,” said Aruni Bhatnagar, a professor of medicine at the University of Louisville who was not involved in the work.
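The adjustment described above can be illustrated with a toy regression. Everything below is hypothetical: the data are synthetic, and the variable names (`score`, `pi_track_record`, `citations`) and coefficients are invented for illustration; this only mimics the structure of controlling for applicant characteristics, not the authors' actual model or data.

```python
# Hypothetical sketch: regress a grant's citation count on its peer-review score
# while controlling for an applicant covariate, to see whether the score-impact
# association survives the control (as the study reports it did).
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: better (lower) review scores and stronger PI track records
# both raise citations, and the two are correlated, as in real applicant pools.
pi_track_record = rng.normal(size=n)                 # e.g., prior publications (standardized)
score = -0.5 * pi_track_record + rng.normal(size=n)  # review score (lower = better here)
citations = -0.3 * score + 0.4 * pi_track_record + rng.normal(size=n)

def ols_coefs(X, y):
    """Ordinary least squares via least-squares solve; returns [intercept, slopes...]."""
    X = np.column_stack([np.ones(len(y)), X])        # prepend intercept column
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive regression: citations on score alone.
naive = ols_coefs(score.reshape(-1, 1), citations)

# Controlled regression: add the PI covariate, mimicking the study's adjustment.
controlled = ols_coefs(np.column_stack([score, pi_track_record]), citations)

print(f"score coefficient, no controls:   {naive[1]:+.3f}")
print(f"score coefficient, with controls: {controlled[1]:+.3f}")
```

In this toy setup the naive coefficient overstates the score's effect (it absorbs some of the track-record effect), but a negative score-citation association remains after adding the control, which is the qualitative pattern the study describes.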

The results “illustrate the ability of the NIH reviewers to identify which projects are going to be the most promising,” said Brian Jacob, a professor of education policy and economics at the University of Michigan. “It would have been a little worrying,” he added, “if they weren’t getting it right.”

This study wasn’t the first attempt to assess the federal agency’s funding approach. NIH’s own Michael Lauer, director of the division of cardiovascular sciences at the National Heart, Lung, and Blood Institute (NHLBI), previously found no correlation between peer-review percentile score and subsequent scientific impact in an analysis of funded NHLBI grants. Lauer suggested this was because his analysis looked only at new grant applications, whereas Agha and Li examined all grants—new and renewed.

“The literature suggests that experts do a much better job of assessing past and present performance than predicting what is going to happen in the future,” Lauer told The Scientist. Agha noted, however, that even when she and her colleague separated new grants from renewed ones, they saw similar results.

Agha and Li, who included awards from all NIH institutes, analyzed a greater number of grants than Lauer (137,215 versus 1,492) over a longer time period. This, too, may have contributed to apparent discrepancies between the studies. Differences aside, “the critical message,” said Lauer, “is that the conversation about peer-review of grants is moving into this sphere of rigorous science. Instead of having arguments about opinions . . . we’re debating actual data.”

Even if peer review is a satisfactory predictor of future scientific success, that “doesn’t mean the process can’t be shortened and improved,” said Jacob.

The results of this latest analysis indicate that such improvements can be built from a solid foundation, said Azoulay. “We’re not starting from a situation where peer-review is a complete disaster and where we might as well be picking projects out of a hat or throwing darts,” he said. In refining and reforming the process, he added, “there’ll be no need to throw out the baby with the bathwater.”

D. Li and L. Agha, “Big names or big ideas: Do peer-review panels select the best science proposals?” Science, 348:434-438, 2015.


Comments

LeeH

April 24, 2015

That assumes patents and citations are the measures of success. However, in 30 years of looking at university patents, >95% have no real value, nor do they ever translate to any product or service, so I would say that patents alone are not a very good measure. As for citations, there is some value there, but alone they would be insufficient as a measure of success.

EvMedDr

April 24, 2015

This analysis of the NIH peer-review process is a tautology. It is those PIs with the highest publication rates who are funded; it's formulaic. But are the funded PIs fostering new ideas, or merely reinforcing the existing paradigms? Given the paucity of new ideas and treatments in the wake of the Human Genome Project, I have to conclude that the NIH peer-review system is a mechanism for the same old same old. Oh for the days when PIs were independent thinkers and doers. We in the biomedical research community are being constrained by forces that inhibit freedom of thought and creativity.

G Hellekant

April 24, 2015

Several points can be made in this context:

1. Having money allows you to do and publish research. Consequently you will produce more publications, especially in the present climate, when you can buy yourself publication at many journals.

2. Have we ever seen an investigation of a large existing colossus that did not come out with thumbs up?

3. In my now 55 years as a researcher with several NIH grants in the past, I can seriously, without statistics, claim that my best projects were not funded. To be funded you have to combine stealth (have the results but not reveal that you have them) with fiction, be in the mainstream of present ideas and techniques, and be part of the scientific mafia of your area, including the administrator of your section at NIH and the panel members he or she picks.

4. Seen from a larger perspective, no matter what this report concludes: to have a system that does not fund 9 out of 10 proposals (we can argue over the exact percentage), when an average grant proposal has now been forced to grow to close to 100 pages of compliance, assurances, checks, and other administrative forms (that have nothing to do with the scientific idea) and demands several months of concentrated effort just to be submitted, is a terrible waste of national resources.

5. I guess that employing grant administrators, grant-writing instructors, grant-checking people, and sponsored-research office personnel at institutions all over the USA keeps unemployment figures down, but it does not improve science.

6. A complete change in how medical research is funded is necessary. The system has outlived its usefulness. Many suggestions could be ventured, but that is another story, as H.C. Andersen used to say in his tales!

Irakli Loladze

Replied to a comment from EvMedDr made on April 24, 2015

April 24, 2015

EvMedDr is spot on saying that the analysis is a tautology. Furthermore, according to the study, a "one-standard deviation worse peer-review score among awarded grants is associated with 15% fewer citations and 7% fewer publications."

Does this modest improvement really justify the enormous waste of time, money and passion resulting from the peer-review of proposals?

It appears that the biggest beneficiaries of the peer-review money distribution system are the very bureaucracies that administer and perpetuate it.  

monkijohnni

April 24, 2015

Halfway through this article, I had to check that I wasn't reading something from The Onion. 

Paraphrased: "By our own metric, we're doing a great job, so it's not the system that is broken, YOU are."

dumbdumb

April 26, 2015

That is pretty moronic.

People who have good ideas and are not funded will have less chance of patenting, publishing, etc.

When I was working for VIB, they praised themselves for their high-quality publications and achievements. Coincidentally, to be selected to be part of VIB you had to have high-IF publications and achievements. Duh!

It is like the dictator/king/politician/rockstar asking their supporters if they are the best.

White&Black

April 26, 2015

Again, let's keep funding the same very successful investigators; that way we will always be successful...

...as the number one country for chronic disease problems, with a higher hourly cost of health benefits in the manufacturing industry than other developed countries.

Bravo, we are doing great!!!!

Krugel

April 27, 2015

Really? What a news flash - those with more funding publish more papers...  How about folks like Douglas Prasher - couldn't get funded, but just happened to clone the gene for GFP ... a relatively unnoticed discovery with little import to cell biology....

C G-S

April 30, 2015

There are multiple reports that a significant number of scientific papers are not reproducible by other scientists. Does reproducibility of the science correlate with funding success? One would hope that those selected for the best science proposals would also be the ones whose science is most reproducible.

Paul Stein

April 30, 2015

Thanks to all for jumping on the stupidity of this. Gee, for once I don't need to be the one trumpeting into the darkness. One suggestion to everyone...please, keep it up. The Scientist provides many fantastic opportunities to comment.
