
All academics know the feeling. After months of hard work to compose a brilliant research proposal and months of waiting for a decision, a rejection notice pops up in your inbox. Valuable time and resources spent thinking through the methodological details of how to address important research questions have been wasted. All that is left is disappointment and feelings of unfairness. Despite excellent review reports, some other project was deemed to be slightly better, more relevant, or more feasible.

Why did your proposal not receive the funding? Usually, this remains unclear. It may have been a very close call, perhaps too close for reviewers to make a reliable and rational selection of one proposal over another. In fact, you are lucky if you learn which project was funded, let alone how the selection committee arrived at its assessment of yours. To make things worse, the proposals that do end up winning the funding competition are usually rather homogeneous, hailing from the same kinds of institutes, covering the same kinds of topics, and involving the same kinds of eminent researchers. It’s the Matthew Effect at full steam: those who are already at an advantage accumulate more and more advantages, while the disadvantaged grow less and less likely to secure funding.

Not only does this lead to personal disappointment, disillusionment, and lost career opportunities, it also harms the research system at large. While many promising innovations have recently been introduced to foster quality, efficiency, and diversity in science—most notably efforts triggered by the open science movement, such as preprints, registered reports, and open peer review—funding practices notoriously lag behind. Even though funding agencies increasingly facilitate or mandate open science practices in the projects they fund, their own practices remain shrouded in secrecy.

We argue that research funding agencies should do much more. Specifically, we call on them to experiment with open applications and assessments as well as partial lotteries. We believe these innovations can contribute to more diverse, efficient, and transparent grant allocation. Several funding agencies have recently started experimenting with some of these elements, including the Swiss National Science Foundation and funding agencies collaborating in the RoRI network. These initiatives are a good start, but we believe that greater change is needed.

Open applications and assessments

We believe that, like research processes, funding procedures would benefit from transparency for the sake of quality and trustworthiness. We argue that the submitted applications, review reports, rebuttals, interviews, deliberations around funding decisions, and the identities of all those involved in these steps ought to be disclosed and made publicly available. This would make the decision process fully accountable. We believe that such transparency will lead to fewer errors, less bias, and fairer judgment. Furthermore, we expect it to improve the efficiency of funding processes, because the conditions of openness will likely result in fewer appeals.

Open applications and open procedures also provide material for meta-research on funding processes. Additionally, when good applications that were “unlucky” are openly available, other funders may decide that they fit their priorities and fund them. Similarly, other researchers may be inspired to integrate the ideas into their own research or to reach out and establish collaborations. To those who are afraid of being scooped, we’d argue that a system of open applications makes it clear when ideas were first put forward and by whom. Hence, the system will also help settle priority claims.

The idea of open identities of reviewers and committee members is probably the most contested element of our proposal. We acknowledge that this may come with undesirable consequences and that it may be quite intimidating, especially for early career researchers. We believe, however, that the benefits outweigh the risks, and we invite researchers and funders to experiment with open funding practices in order to examine their promises, limitations, and outcomes.

Partial lotteries

Generally, because budgets fall short, only a minority of the eligible proposals that are good enough to be funded are actually awarded. Convincing evidence shows that after peer review, rebuttals, interviews, and committee deliberations, a set of applications typically remains that are so similar in quality that a reliable and reproducible funding decision among them is not possible. Thus, a substantial element of luck is involved in getting a grant.

We propose that funding organizations let the dice decide by performing a lottery: after a selection committee has weeded out the applications not suitable for funding, a few outstanding applications might be selected for direct funding, while all other applications enter a lottery that distributes the remaining budget. We believe that this system of partial lottery is fairer, because it is less vulnerable to the personal biases of actors in the selection process.
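
To make the mechanics of such a scheme concrete, here is a minimal sketch in Python. It assumes a single quality score per application, and the threshold, number of direct awards, and total number of grants are hypothetical parameters chosen for illustration; this is a sketch of the general idea, not any agency’s actual procedure.

```python
import random

def partial_lottery(applications, threshold, direct_slots, total_slots, seed=None):
    """Select grants via a partial lottery.

    `applications` is a list of (name, score) pairs. The single-score
    model and all parameter names are illustrative assumptions.
    """
    rng = random.Random(seed)  # a seeded draw keeps the lottery auditable

    # Step 1: weed out applications not suitable for funding.
    fundable = sorted(
        (a for a in applications if a[1] >= threshold),
        key=lambda a: a[1],
        reverse=True,
    )

    # Step 2: fund a few outstanding applications directly.
    funded = fundable[:direct_slots]

    # Step 3: all other fundable applications enter a lottery
    # for the remaining budget.
    pool = fundable[direct_slots:]
    remaining = max(total_slots - len(funded), 0)
    funded.extend(rng.sample(pool, min(remaining, len(pool))))
    return funded

# Example: eight eligible proposals, five grants in the budget,
# two awarded directly and three drawn by lot among the rest.
apps = [("A", 9.1), ("B", 8.8), ("C", 8.7), ("D", 8.7),
        ("E", 8.6), ("F", 8.6), ("G", 8.5), ("H", 6.2)]
print(partial_lottery(apps, threshold=7.0, direct_slots=2,
                      total_slots=5, seed=2024))
```

Note that publishing the seed of the draw would make the lottery itself reproducible and verifiable by anyone, which fits the transparency we argue for above.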

This way of formalizing the element of chance in funding processes has the great benefit of fostering diversity, as lotteries are agnostic to status, background, and other circumstantial characteristics. In addition, it promotes efficiency, because it obviates the need for endless discussions of applications of indistinguishable quality. Lastly, we think it will help researchers on a personal, mental level. Applicants can rightly say: “My proposal was good enough to be funded, but it was unlucky in the draw.” And we suggest that the positive evaluations that come out of the process count toward the recognition and rewards criteria used in researcher assessment.

Real consequences

While many researchers will recognize the mixed feelings of disappointment and frustration after having a proposal rejected for unknown or unclear reasons, in some cases these situations cause more harm than frustration alone.

One of us serves on the objections committee of one of the major Dutch funding agencies, where applicants can file complaints about unfair treatment and assessment. One recent example concerns the agency’s most prestigious personal grant, which usually leads to a tenured professorship. The complaint regarded a proposal that was rejected for unclear reasons, after which the applicant accidentally discovered that the language in her rejection letter was almost identical to that in the acceptance letter for another application. The transparency of the appeal made clear that the scores of the two projects were identical to the second decimal place, that discussion in the selection committee had not changed these scores, and that the rule awarding the grant to the female applicant when scores are equal could not break the tie, as both applicants were female. Finally, a subset of the committee members present at the meeting voted on which proposal to fund, but it was unclear why they voted as they did. The appeal was upheld and the applicant got her prestigious grant, because it was judged that the decision was not replicable.

This example illustrates that action is needed, and we believe it is possible. It demonstrates exactly how, when scores are very comparable, decisions tend to contain an element of arbitrariness, and hence how a lottery might be more efficient and fairer. Transparent procedures might also have prevented much of the harm.

It is likely that making the shifts we propose will have effects that are both positive and negative, intended and unintended. We have outlined several of them here, but there are probably more. These consequences are likely to vary across different contexts and might affect individual researchers to different degrees. Some funding agencies have started experimenting to obtain further evidence regarding outcomes.

We call upon other agencies to follow suit. Funding practices are a core element of scholarly research, and having them lag so embarrassingly behind the open science agenda is a missed opportunity. The time has come to take funding practices out of the mist and into the light of day.

Serge P. J. M. Horbach works as a scholar in science and technology studies at the Danish Center for Studies in Research and Research Policy, Aarhus University.

Joeri K. Tijdink is a professor in the Philosophy Department (Faculty of Humanities) and principal investigator at the Ethics, Law and Medical Humanities department of the Amsterdam University Medical Centers (UMC), location VUmc.

Lex M. Bouter is a professor emeritus of methodology and integrity at the Department of Epidemiology and Data Science of the Amsterdam UMC and the Department of Philosophy of the Sciences of the Faculty of Humanities, Vrije Universiteit Amsterdam.