Assessing Research Productivity

A new way of evaluating academics’ research output using easily obtained data

By Ushma S. Neill, Craig B. Thompson, and Donna S. Gibson | January 1, 2015

It can often be difficult to gauge researcher productivity and impact, but these measures of effectiveness are important for academic institutions and funding sources to consider in allocating limited scientific resources and funding. Much as in the lab, where it is important for the results to be repeatable, developing an algorithm or an impartial process to appraise individual faculty research performance over multiple disciplines can deliver valuable insights for long-term strategic planning. Unfortunately, the development of such evaluation practices remains at an embryonic stage.

Several methods have been proposed to assess productivity and impact, but none can be used in isolation. Beyond assigning a number to an investigator—such as the h-index, the number of a researcher’s publications that have received at least that same number of citations, or a collaboration index, which takes into account a researcher’s relative contributions to his or her publications—there are additional sources of data that should be considered. At our institution, Memorial Sloan Kettering Cancer Center (MSKCC) in New York City, there is an emphasis on letters of recommendation received from external expert peers, funding longevity, excellence in teaching and mentoring, and the depth of a faculty member’s CV. For clinicians, additional assessments of patient load and satisfaction are also taken into consideration by our internal committees evaluating promotions and tenure. Other noted evaluation factors include the number of reviews and editorials an individual has been invited to author; frequency of appearance as first, middle, or senior author in collaborations; the number of different journals in which the researcher has published; media coverage of his or her work; and the number of published but never-cited articles.

Here we propose a new bibliometric method to assess the body of a researcher’s published work, based on relevant information collected from the Scopus database and Journal Citation Reports (JCR). This method does not require intricate programming, and it yields a graphical representation of data to visualize the publication output of researchers from disparate backgrounds at different stages in their careers. We used Scopus to assess citations of research articles published between 2009 and 2014 by five different researchers, and by one retired researcher over the course of his career since 1996, a time during which this individual was a full professor and chair of his department. These six researchers included molecular biologists, an immunologist, an imaging expert, and a clinician, demonstrating that this approach could level the playing field across diverse disciplines.

ACROSS DISCIPLINES: A graphical display illustrates the publication productivity and impact of three researchers from disparate fields whose names appeared on highlycited.com. The journal’s average impact for the year (gray squares) is compared with the impact of the researcher’s articles (red circles) in the same journal that year. Non-review journals ranked in the top 50 by impact factor, as determined by Journal Citation Reports, are noted in gold. Representing journals this way equalizes researchers across disciplines, so the impact of a particular manuscript can be appreciated by seeing whether the author’s red circle sits above or below the journal’s gray (or gold) square.
The metric we used calculates the impact of a research article as its number of citations divided by the publishing journal’s impact factor for that year, divided by the number of years since the article was published. The higher the number, the greater the work’s impact. This value is plotted together with the average impact of all research articles the journal published in that same year (average number of citations for all research articles published that year divided by the journal impact factor for that year divided by the number of years since publication). Publications in journals that rank in the top 50 by impact factor (not including reviews-only journals) are also noted.
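Because the calculation is simple arithmetic, it is easy to script. The sketch below (in Python, with hypothetical function names and made-up citation counts; the inputs are assumed to come from Scopus and JCR exports) shows one way the two values might be computed:

    # A minimal sketch of the normalization described above, using hypothetical inputs.
    # Citation counts and impact factors are assumed to come from Scopus and
    # Journal Citation Reports; the numbers below are invented for illustration.

    def article_impact(citations: int, journal_if: float, years_since_pub: int) -> float:
        """Citations divided by the journal's impact factor for the publication year,
        divided by the number of years since publication."""
        # max(..., 1) simply avoids division by zero for articles published this year.
        return citations / journal_if / max(years_since_pub, 1)

    def journal_average_impact(all_citations: list[int], journal_if: float, years_since_pub: int) -> float:
        """Average citations of all research articles the journal published that year,
        normalized the same way."""
        mean_citations = sum(all_citations) / len(all_citations)
        return mean_citations / journal_if / max(years_since_pub, 1)

    # Hypothetical example: a 2011 paper with 42 citations in a journal whose 2011
    # impact factor was 7.0, evaluated three years after publication.
    paper_score = article_impact(citations=42, journal_if=7.0, years_since_pub=3)
    journal_score = journal_average_impact([10, 25, 5, 30, 18], journal_if=7.0, years_since_pub=3)
    print(f"paper: {paper_score:.2f}, journal average: {journal_score:.2f}")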

ACROSS AGES: This method of visualizing researchers’ productivity can be a useful tool for comparing scientists at different points in their career.

By developing such a graph for each scientist being evaluated, we get a snapshot of his or her research productivity. Across disciplines, the graphs allow comparison of total output (number of red circles) as well as impact, providing answers to the questions: Are the scientists’ manuscripts being cited more than their peers’ in the same journal (red circles above gray squares)? How many of each researcher’s papers were published in leading scientific journals (gold squares)? The method also allows evaluation of early-career scientists and those who are further along in their careers. (See graphs at right, top.) For young researchers, evaluators can easily see if their trajectory is moving upward; for later-stage scientists, the graphs can give a sense of the productivity of their lab as a whole. This can, in turn, reveal whether their laboratory output matches their allocated institutional resources. While the impact factor may be a flawed measurement, using it as a normalization tool helps to remove the influence of the journal, making it possible to visualize whether the scientific community reacts to a finding and integrates it into scientific knowledge. This strategy also allows for long-term evaluations, making it easy to appreciate the productivity of an individual, in both impact and volume, over the course of his or her career.
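For readers who want to reproduce this kind of display, a minimal plotting sketch follows. It assumes each publication record carries a year, the article’s normalized impact, the journal-year average, and a top-50 flag; the field layout and sample values are hypothetical, not the authors’ actual pipeline, and the plot only approximates the published infographics.

    # A minimal sketch of one way to draw such a graph, assuming each record carries
    # the fields shown; the sample data are invented for illustration.
    import matplotlib.pyplot as plt

    papers = [
        # (year, article impact, journal-year average impact, journal in top 50 by IF?)
        (2009, 1.8, 1.0, False),
        (2010, 0.6, 1.0, True),
        (2012, 2.4, 1.0, False),
        (2013, 0.9, 1.0, True),
    ]

    fig, ax = plt.subplots()
    for year, article, journal_avg, top50 in papers:
        # Journal-year average: gray square, gold if the journal ranks in the top 50 by impact factor.
        ax.scatter(year, journal_avg, marker="s", color="gold" if top50 else "gray")
        # The researcher's article: red circle, read against the journal marker for the same year.
        ax.scatter(year, article, marker="o", color="red")

    ax.set_xlabel("Publication year")
    ax.set_ylabel("Normalized impact (citations / journal IF / years since publication)")
    ax.set_title("Publication impact relative to journal averages")
    plt.show()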

LONG-TERM ANALYSIS: A nearly 20-year stretch (1996–2014) is shown for a newly retired faculty member after a productive research career. Note that this individual did not publish any articles in top 50 non-review journals as determined by Journal Citation Reports impact factors. Although this researcher published several papers before 1996, Scopus has limited reliability for citations prior to that year; therefore the analysis excluded these data.

Assessing research performance is an important part of any evaluation process. While no bibliometric indicators alone can give a picture of collaboration, impact, and productivity, this method may help to buttress other measures of scientific success. 

Ushma S. Neill is director of the Office of the President at Memorial Sloan Kettering Cancer Center (MSKCC). Craig B. Thompson is the president and CEO of MSKCC, and Donna S. Gibson is director of library services at the center.

Comments

PeterUetz

January 6, 2015

Is there a tool to create such plots?

Jakepgh

January 26, 2015

Yet another attempt to cosset the intellectually lazy administrator. These pseudo-quantitative "productivity measures" are a detriment to scientific excellence. The very concept of a "top journal" has been thoroughly debunked for years -- there is NO correlation between the impact of an individual paper and that of the journal. Please try to find something useful to do.

GFM

February 20, 2016

Disagree with Jakepgh - this sounds potentially useful. But viewing graphs too iffy. How about calculating for each paper (I/J)/(1+logG) where I = individual impact, J = Journal impact and G is years since publication. This normalizes for field size and for researcher age to some extent. Playing around with this though it looks as if it could encourage researchers to publish highly citeable work in lower impact or smaller journals to leverage I/J.
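[Editor’s note: for readers who want to try the score GFM suggests, the arithmetic is sketched below with made-up numbers; the comment does not specify the logarithm base, so base 10 is assumed.]

    # A quick sketch of the score proposed in the comment above: (I/J) / (1 + log G),
    # with hypothetical values. The logarithm base is not specified; base 10 is assumed.
    import math

    def gfm_score(individual_impact: float, journal_impact: float, years_since_pub: float) -> float:
        return (individual_impact / journal_impact) / (1 + math.log10(years_since_pub))

    # Hypothetical example: individual impact 12, journal impact 6, published 5 years ago.
    print(round(gfm_score(12, 6, 5), 2))  # ~1.18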
