Biomedical researchers are grumbling a lot these days. The worries span funding levels at the National Institutes of Health (NIH), the peer review process, academic promotion policies, the effectiveness of conferences, waste caused by scientific error, regulatory burdens, and so on. However, the grumbling won’t amount to much unless there is a systematic way to formulate, analyze, implement, and monitor reforms to the systems and institutions that make conducting research possible. To do this, the community should develop a new academic tradition of analyzing the biomedical research enterprise. A 21st century ability to apply research data to medical advances will require a 21st century understanding of how to organize biomedical research.
The core impediment to the adoption of this approach is that biomedical research is rarely treated as a product of organizational structure, culture, and incentives. Many scientists see “curiosity” or other lofty ideals as the primary drivers of the research process. They view administration as simply the cost of doing business, failing to recognize that it actually influences, for better and for worse, the goals and directions of research. The result is the absence of a tradition for measuring and analyzing organizational performance.
The NIH system for grant funding is a prime example of a process that has come in for strong criticism. Many argue that scientists tailor proposals to win grants rather than to describe the most innovative, boldest, or best approaches for solving society’s medical problems. Some argue that review committees are tainted by conflicts of interest, group-think, and inadequate preparation, among other things. But coming up with something better isn’t so simple.
It’s easy to grumble and equally easy to throw up one’s hands in frustration. Whether it’s the NIH granting process, the effectiveness of scientific communications, or academic promotion policies, the biomedical research enterprise is too large, complex, and tradition-bound to easily identify and implement superior metrics and incentives, particularly ones that can also keep pace with rapidly changing technology and scientific opportunities.
Still, there are constructive ideas. The San Francisco Declaration on Research Assessment is an example of a well-publicized and thoughtful document, which has gathered widespread support. But how do we actually implement that document’s call to “encourage a shift toward assessment based on the scientific content of an article rather than publication metrics,” or to “consider the value and impact of all research outputs (including datasets and software) in addition to research publications?” Not only do these recommendations require serious analysis before implementation, they will require continuous monitoring, reevaluation, and adjustment. Without sustained scholarship, thoughtful ideas have little chance to be woven into the fabric of our biomedical research system.
There have also been lost opportunities. The brainstorming sessions that identified the Huntington’s disease-associated gene in the 1980s, the different ways institutions structure internal requests for research funds, Stand Up 2 Cancer Dream Teams, the unique incentives in the Howard Hughes Medical Institute system, or recent efforts in data sharing all constitute organizational “experiments.” These should have been evaluated for effectiveness and the potential for broader application.
Change requires the development of a research culture for studying the research process itself. Only then can thoughtful suggestions be moved into well-defined and effective policy recommendations. This is hardly radical. There is a well-established academic tradition for doing this in health care delivery and education. The nation’s business schools concern themselves with financial, organizational, and cultural incentives in corporate and non-profit organizations. Government agencies employ numerous think tanks to evaluate long-term policies. There is, however, virtually no scholarly tradition for analyzing the biomedical research process. “One off” studies by the Institute of Medicine or the occasional ad hoc committees cannot substitute for a sustained program of research and analysis.
To encourage the establishment of this tradition, we propose the creation of a national biomedical research policy institute. It should be loosely connected to the NIH because it would need access to the agency’s data and policy makers. But it must be governed by an independent board and be free to conduct independent analysis, unencumbered by existing NIH practices and policies. Such an institute must have broad multidisciplinary expertise, combining biomedical researchers with specialists in economics, management sciences, social sciences, and health care, and even patient advocates. It should strive to develop an ethic of unbiased and transparent research and seek to become a central meeting place for discussing the future of biomedical research. Such an institute could develop the data to improve our understanding of how the system is performing and use those data to make effective arguments about the value of biomedical research to society.
The stakes are enormous. Biomedical research represents a significant financial investment, and it carries the hopes and fears of the afflicted. Long-range health care costs will certainly be influenced by the policy and strategic choices made in biomedical research. The enterprise is too important, large, and complex to be governed casually and with little awareness of the factors shaping it.
David Rubenson is the associate director for administration and strategic planning at the Stanford Cancer Institute. Paul Salvaterra is a professor of neuroscience at the Beckman Research Institute of the City of Hope.