NeuroScientistNews

Opinion: Mind the Measures

The next big thing in medical research is to more comprehensively evaluate its impact.

By Ann C. Bonham | June 26, 2013

Medical research has vastly improved the health of average Americans and has bolstered both the length and quality of their lives. The statistics from federally funded research are compelling: the survival rate for children with the most common childhood leukemia is now 90 percent, the five-year breast cancer survival rate has increased from 75 percent in the mid 1970s to 90 percent in 2011, chronic disability among American seniors has dropped nearly 30 percent since 1982, and the list goes on. Few would deny the social and economic benefits of medical advances made possible through research. After all, healthier Americans are more productive, and the academic research enterprise itself supports many jobs.

Still, recent trends call for the research community to revisit how we analyze and communicate the investment in and impact of research: new expectations and technologies such as social media platforms for advancing transparency, the national political and economic debate, and the engagement of patient advocacy groups in assessing research efficiency and impact.

These trends shape debates about federal funding at the national level, but there are also trickle-down effects on universities, medical schools, and academic health systems across the US, where individual scientists and teams conduct research. Leaders at these institutions face increasing pressure to assess their investments in research and communicate the impact to their local stakeholders—state governors and legislators, boards of directors, community partners, patients, and their families. In these academic settings, research success has traditionally been measured by respected and credible academic metrics, such as the volume of grant funding and the number of publications and citations, founded on peer review.

But metrics that rely on quantifying monetary inputs and academic outputs alone cannot paint the full picture of the complex, multi-year scientific process through which research translates into outcomes that matter to the public. While celebrating nationwide, population-level health improvements made possible by research investments, institutional leaders could benefit from evaluation frameworks that help them communicate those successes to stakeholders.

A number of national research evaluation frameworks have been studied—for example, the United Kingdom's Research Excellence Framework, Excellence in Research for Australia, and Evaluating Research in Context from the Netherlands. Within these frameworks are mechanisms for analyzing investments, allocating funds, and supporting advocacy and public education about the value of research.

Aspects of these frameworks could provide institutional leaders with an enhanced suite of tools, beyond traditional academic metrics, to help assess their investments as well as to garner stakeholder support by framing measures that speak to the broader public.

We must, however, weigh these potential opportunities against some wariness in the research community about engaging in new methods of evaluation. One reason for caution is that evaluating research is not easy—the methodologies are complex and require a conceptual understanding of the theoretical underpinning, rationale, goals, and implementation of the evaluation tools. Research leaders have long worried that “evaluation methods du jour” will disregard the inevitable time lag for research to lead to breakthroughs in health, will become unfunded mandates that create laborious and meaningless reporting for researchers, or will upend the foundation of peer review at the heart of evaluating research in the US. New evaluation methods may also create a new set of “rankings” that could disregard the local context and value of research, and could ultimately result in ill-founded attempts to curtail the dwindling dollars so critical to research.

To address these concerns, any evaluation tool should be an adjunct to, and guided by, experts who understand the conceptual models for how research translates into academic, health, social, and economic impacts, and who can build consensus on how best to measure those impacts in a rigorous, systematic, and institutionally appropriate manner.

Nurturing and sustaining public support for the full spectrum of medical research, from bench to bedside to community, and for a diverse, robust research pipeline is essential to our future. By taking the lead in demonstrating accountability, and in evaluating and communicating the value of medical research to broad audiences with a suite of academic and non-academic measures, the research community can bolster ongoing public support for funding.

Ann C. Bonham is Chief Scientific Officer at the Association of American Medical Colleges. Prior to joining the AAMC, Bonham served as executive associate dean for academic affairs and professor of pharmacology and internal medicine at the University of California at Davis School of Medicine.

Comments

davidrubenson

July 1, 2013

Dear Dr. Bonham:

I would like to agree with your analysis and add an additional thought. There is little tradition, and perhaps more importantly, little interest in evaluation (and analysis of organizational performance) in biomedical research.

As an example, there has been a great deal of recent publicity about scientific error and retractions. One distinguished researcher from Stanford has argued that half of all biomedical research contains serious errors. In any other organization, or even a government agency, such concerns would result in a major crisis and initiatives for organizational reform. However, in our biomedical research system, there is no audience for such red flags and no real mechanism to evaluate the seriousness of these concerns.

As your article points out, there are major conceptual challenges to developing reliable and trusted methods for scientific evaluation. However, the challenge is magnified by the lack of any real tradition for doing so. We need to start with small, focused problems and build a tradition for evaluation.

I would therefore propose the creation of a biomedical research policy institute. Such an institute would be composed of biomedical researchers working with economists, management scientists, and other social scientists. It must be separate from the NIH but have ready access to data and programmatic information. It should conduct sustained analysis of particular issues, with the long-run goal of developing a tradition of continuous organizational analysis and evaluation.

David Rubenson