In academic science, it’s no news that prestige matters. High-profile universities typically have their pick of the top candidates on the job market and garner the lion's share of federal grants. But reputation is no indicator of the efficiency or influence of an institution’s research, according to a preprint posted on bioRxiv on July 13. Its author finds that, compared with the highest-ranking institutions, lower-ranking ones produce more publications per dollar of funding, and those publications carry more significance for their fields.
“A more egalitarian distribution of funding among institutions would yield greater collective gains for the biomedical research enterprise and the taxpayers who support it,” argues Wayne Wahls, a molecular biologist at the University of Arkansas for Medical Sciences and the author of the study, in the preprint.
To investigate whether diminishing returns also apply at the level of institutions, Wahls obtained success rates (the percentage of reviewed applications that receive funding in a given year) and funding rates (the percentage of investigators seeking funding who are awarded grants in a given year) through a Freedom of Information Act request to the NIH for a set of 15 institutions whose annual NIH funding ranged from $3 million to $440 million. Prestige was determined by each institution’s rank in the US News & World Report list of Best Medical Schools: Research. Five institutions were chosen from the top of the list, and the remaining 10 were selected at random from mid-ranked, low-ranked, and unranked universities on the list.
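To make the distinction between the two rates concrete, consider the hypothetical arithmetic below (all numbers invented for illustration): success rate counts applications, while funding rate counts investigators, each of whom may submit more than one application in a year.

```python
# Hypothetical illustration of the two NIH metrics described above.
# Success rate is per application; funding rate is per investigator
# (one investigator may submit several applications in a year).

reviewed_applications = 200
funded_applications = 40
investigators_seeking = 150
investigators_awarded = 36

success_rate = funded_applications / reviewed_applications    # 0.20
funding_rate = investigators_awarded / investigators_seeking  # 0.24
print(f"success rate: {success_rate:.0%}, funding rate: {funding_rate:.0%}")
```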
Wahls found that in his sample, “prestigious institutions had on average 65% higher grant application success rates and 50% larger award sizes,” he writes in his report, “whereas less-prestigious institutions produced 65% more publications and had a 35% higher citation impact per dollar of funding.” Wahls measured citation impact using the Relative Citation Ratio, which benchmarks a paper’s citation rate against those of the papers cited alongside it in other manuscripts (its co-citation network).
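The sketch below illustrates the idea behind that metric in simplified form. It is a minimal sketch, not NIH’s iCite implementation (the real Relative Citation Ratio also benchmarks field citation rates against NIH-funded publications), and the function and numbers are hypothetical.

```python
# Toy sketch of the idea behind the Relative Citation Ratio (RCR):
# compare an article's citation rate to that of its "field," defined
# as the papers cited alongside it (its co-citation network).
# A simplified illustration only, not NIH's iCite implementation.

def toy_rcr(article_citations_per_year, cocited_citations_per_year):
    """Ratio of an article's citation rate to the mean citation rate
    of the papers cited alongside it."""
    field_rate = sum(cocited_citations_per_year) / len(cocited_citations_per_year)
    return article_citations_per_year / field_rate

# An article earning 12 citations/year whose co-cited field averages
# 8 citations/year scores 1.5, i.e., above its field's norm.
print(toy_rcr(12.0, [6.0, 8.0, 10.0]))  # -> 1.5
```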
That funding allocations favor prestigious institutions despite their lower productivity per dollar indicates that the granting process is biased, perhaps unconsciously, by institutions’ reputations, argues Wahls. “We must recognize that talent and creative ideas are broadly distributed, that subconscious biases and social prestige mechanisms affect allocations of funding, and there are measurable consequences of such favoritism,” he tells The Scientist.
This favoritism leads to what’s called the “Matthew effect,” whereby well-resourced, prestigious institutions receive even more resources. A perfect example of this is the recently announced cohort of Howard Hughes Medical Institute (HHMI) investigators, says Mark Peifer, a cell biologist at the University of North Carolina at Chapel Hill. HHMI is investing $200 million in 19 investigators who come from a total of seven states; six recipients are located in Boston alone, and five in California, Peifer points out. “You couldn't find any qualified people in 43 of the states?”
With a relatively small sample size, it’s unclear how robust the findings of this study are, cautions Donna Ginther, an economist at the University of Kansas who has studied gender and racial biases in funding. She says it’s a good pilot study but that an analysis with a larger sample size (say, 100–200 institutions) is needed to make a definitive case. Wahls agrees that it would be worthwhile to extend this pilot study to include more institutions, but he expects that the findings are representative of the broader population.
Ginther also notes that the funding amounts considered in the analysis include all research project grants: not only R01s, the bread-and-butter grants for individual investigators, but also R03s and R21s, which pay for smaller projects and pilot studies; R13s, which fund conferences and meetings; and R41/R42s and R43/R44s, which support small-business technology transfer and are intended to stimulate innovation in the private sector, among other research grants. Ginther suggests that a fairer comparison would look at the return on R01 grants alone. Wahls agrees that more-granular analyses would be informative, although the comprehensive approach he used matches that of the earlier studies of diminishing returns by investigator.
NIH has recognized the problem of diminishing returns through its own data analyses, and recently considered capping the total amount of funding that any NIH investigator can receive. The plan was scrapped due to pressure from the most senior and well-funded NIH investigators. “There’s a super blind spot in a system where many decisions are being made by exactly the same people who are benefiting from the current situation,” says Peifer.
Nevertheless, some policies have begun to address these issues, such as the NIGMS requirement of special justification for support exceeding $750,000 to an individual investigator, and the National Institute of Neurological Disorders and Stroke’s commitment to “give extra scrutiny to proposals from labs that already have $1 million or more per year in direct funding.”
Wahls suggests that the formula he developed for his study, success rate divided by productivity (publications or citation impact per dollar of funding), provides an impartial way to allocate funding across institutions that maximizes the return on taxpayers’ investments while accounting for diminishing marginal returns. Because these values can be calculated for investigators grouped in any way desired, he adds, the same mechanism could be used to address imbalances in funding allocations by race, gender, age, or state.
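As a rough illustration of how such a ratio might be computed, here is a minimal sketch; the institution figures are hypothetical, and the preprint may weight or normalize these quantities differently.

```python
# Minimal sketch of the metric described above: success rate divided
# by productivity. All figures are hypothetical, for illustration only.

def funding_imbalance(success_rate, publications, funding_dollars):
    """Success rate (funded / reviewed applications) divided by
    productivity (publications per dollar of funding). Higher values
    flag institutions whose share of awards outpaces their output per
    dollar; the result is a relative index, not a dollar amount."""
    productivity = publications / funding_dollars
    return success_rate / productivity

# A high-prestige institution: higher success rate, fewer papers per dollar.
print(funding_imbalance(0.30, publications=400, funding_dollars=100e6))  # 75000.0
# A lower-ranked institution: lower success rate, more papers per dollar.
print(funding_imbalance(0.18, publications=120, funding_dollars=15e6))   # 22500.0
```

In this toy comparison, the larger value for the prestigious institution signals the pattern the preprint describes: more awards per application, but fewer publications per dollar.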
“I think that we all must embrace the empirical scientific method in funding policy,” Wahls says. “If unbalanced allocations of funding are affecting the productivity of the research enterprise, they should be addressed.”
The Scientist reached out to NIH and NIGMS and will update the article if we receive comments from the agencies.
W. Wahls, “High cost of bias: Diminishing marginal returns on NIH grant funding to institutions,” bioRxiv, doi:10.1101/367847, 2018.