
Opinion: Measuring Impact

Scientists must find a way to estimate the seemingly immeasurable impact of their research efforts.

By Luís A. Nunes Amaral | February 24, 2014

FLICKR, THOMAS HAWK

The United States currently spends about 2.7 percent of its gross domestic product (GDP) on research and development, about half of which comes from federal sources. This amount is comparable to annual expenditures on transportation and water infrastructure (3 percent of GDP) and on education (5.5 percent). The magnitude of the investment required to maintain the scientific enterprise has prompted calls for a quantitative assessment of the impact of the contributions of individuals and institutions, so that policy makers can be persuaded that resources are being used effectively.

Despite its importance, whether and how to quantify scientific impact remains a source of controversy within the research community. For example, the San Francisco Declaration on Research Assessment has promoted “the need to eliminate the use of journal-based metrics, such as journal Impact Factors, in funding, appointment, and promotion considerations.” I find it surprising that a scientist would propose a move away from measurement and quantification when these activities are at the core of science itself. I believe that when considering an imperfect but necessary tool, the right course of action is to seek to improve it, rather than to discard it. The scientific community—and especially the funding agencies—should support the development of better bibliometric evaluation tools rather than oppose their use altogether.

There is a long history of using bibliometric-based measures to quantify scientific production and impact. Opponents of such measures, including many prominent scientists, have recently urged the scientific community to return to the “gold standard” of peer review. Underlying this recommendation is the “hypothesis” that, if two intelligent, unbiased evaluators were to read the papers of, say, applicants for a faculty position, they would draw the same conclusion about which candidate was best for the job.

This is a naive, unsustainable position. Anyone working as an editor of a journal, or as a member of a selection or promotion and tenure committee, knows how broadly the ratings of papers, individuals, or proposals vary across reviewers. Indeed, Case Western Reserve University’s David Kaplan and his colleagues have demonstrated that one would need tens of thousands of independent, unbiased peer evaluations in order to obtain an accurate ranking. And the number of reviewers needed is not the only limitation of peer review as a measurement process. Like all humans, scientists have biased views of students, collaborators, and competitors. Sadly, being an expert is not a guarantee that those biases will be absent; it is only a guarantee that one will be convinced that one is right.

Scientists are already being evaluated using bibliometric-based measures such as number of citations or the h-index. An indisputable fact is that most bibliometric-based measures are very strongly correlated with one another, suggesting that they capture something real and important about the impact of bodies of work.
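To make the comparison concrete, here is a minimal sketch (the function names and example data are illustrative, not taken from any particular study) of how two common bibliometric measures, total citations and the h-index, are computed from the same list of per-paper citation counts. Because they summarize the same underlying data, it is not surprising that such measures track one another closely.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def total_citations(citations):
    """Simple sum of citations over all of a researcher's papers."""
    return sum(citations)

# Hypothetical citation counts for one researcher's papers
papers = [120, 45, 30, 18, 9, 7, 3, 1, 0]
print(h_index(papers))          # 6
print(total_citations(papers))  # 233
```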

There are, however, two major challenges in measuring scientific impact. First, scientists benefit from being perceived as having a large impact. Thus, a sound measure of scientific impact must resist manipulation. Second, a sound measure of impact must, arguably, reward quality over quantity. Indeed, one of the risks of the broad use of bibliometric-based measures is that scientists will change their publication patterns with the goal of inflating their own apparent impact. Such manipulation efforts are likely related to observed increases in publication rates and in questionable self-citation practices.

Moreover, while citations are arguably the most trustworthy indicators of scientific impact, the number of citations to single papers spans more than five orders of magnitude, with the most highly cited papers having hundreds of thousands of citations. This broad range of observed citation counts, together with Columbia University’s Duncan Watts and his team’s 2006 findings on cultural markets, suggests that the process by which a paper’s quality gets translated into citations is almost certainly driven by “rich get richer” dynamics.
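As an illustration of what “rich get richer” dynamics can do, the following toy simulation (a sketch of a generic preferential-attachment process, not the model used in the studies cited above) assigns each new citation to a paper with probability proportional to the citations it has already received plus a constant. Even when all papers start out identical, the resulting citation counts become extremely skewed.

```python
import random

def simulate_citations(n_papers=500, n_citations=20000, a=1.0, seed=0):
    """Toy preferential-attachment model of citation accrual: each new
    citation goes to a paper with probability proportional to
    (its current citations + a), so early leaders keep pulling ahead."""
    rng = random.Random(seed)
    counts = [0] * n_papers
    for _ in range(n_citations):
        paper = rng.choices(range(n_papers), weights=[c + a for c in counts])[0]
        counts[paper] += 1
    return counts

counts = sorted(simulate_citations())
print("median:", counts[len(counts) // 2], "max:", counts[-1])  # heavy-tailed spread
```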

It may be possible to overcome these challenges, however, if one possessed a rigorous characterization of the statistical properties of the number of citations to scientific papers. Using the functional form of the distribution of number of citations, one could develop a principled approach to the development of measures of impact.

My lab has demonstrated that the logarithm of the number of citations to a paper published in a given journal converges to an ultimate value within about 10 years. Remarkably, the distribution of the ultimate number of citations to papers published in a scientific journal converges to a discrete lognormal distribution with stable parameters μ and σ, which are analogous to the parameters with the same names for a Gaussian distribution. Our results suggest that there is a latent quantity—which one might denote “citability”—that determines a paper’s ability to accrue citations.
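If log-citation counts within a journal are indeed approximately normal with stable parameters μ and σ, one principled way to compare papers across journals would be to express each paper’s ultimate citation count as a z-score relative to its own journal. The sketch below uses hypothetical data and an illustrative normalization rather than any specific published measure; it simply shows how the functional form of the distribution could anchor such a measure.

```python
import math
import statistics

def journal_log_params(citation_counts):
    """Estimate mu and sigma of log(citations + 1) across one journal's papers."""
    logs = [math.log(c + 1) for c in citation_counts]
    return statistics.mean(logs), statistics.stdev(logs)

def citation_z_score(paper_citations, mu, sigma):
    """Paper impact in standard deviations above the journal's typical paper."""
    return (math.log(paper_citations + 1) - mu) / sigma

# Hypothetical ultimate citation counts for papers in one journal
journal_papers = [3, 8, 12, 20, 25, 40, 60, 110, 300]
mu, sigma = journal_log_params(journal_papers)
print(round(citation_z_score(110, mu, sigma), 2))  # well above this journal's average paper
```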

Even though we lack a deep understanding of the concept of scientific impact, our burgeoning understanding of the dynamics of citations will enable the development of measures that are objective, easy to calculate, resistant to manipulation, and conducive to desirable publication behaviors. With such measures in hand, we will finally be able to uncover the individual and institutional conditions that foster significant scientific advances, and help policy makers and the public become confident that resources are being used wisely.

Luís A. Nunes Amaral is a professor of chemical and biological engineering, physics and astronomy, and medicine at Northwestern University, where he co-directs the Northwestern Institute on Complex Systems.



Comments

dfritzin

February 24, 2014

The problem with using the number of citations is that the number of scientists in a given field will greatly affect the number of citations an article in that field will receive. For example, only a few hundred researchers work on complement, which limits the number of citations that even a breakthrough paper in this field will get. Meanwhile, someone inventing a new technique, such as Western blotting, will get far more citations, as that technique is used in many different fields. It is for this reason that I feel the number of citations is not a very accurate estimation of the worth of the science being done.

Ken Pimple

February 24, 2014

I should think that actual impact would be measured by how much a paper is built upon (replicated, extended), for which citation is only a proxy. Many works are cited that are only tangential to the paper citing them; sometimes papers are cited because of their flaws. The hazard of numerical impact measures is that we love numbers so much that we tend to disregard judgment. Metrics are only as good as the people who apply them.

Shengqian

February 24, 2014

Use Google Scholar to search for some professors around you and you will see their total citations as well as the citations they accumulated over the last 5 years; you can easily find the "better" ones when you compare them. Yeah, nothing is perfect, but it reflects the level of impact quite convincingly. For example, I know one "famous" prof in a well-cited field at a top university who has an h-index of 10, which is embarrassing to say the least. As Sheldon would say, he should think about teaching.
