It can often be difficult to gauge researcher productivity and impact, but these measures are important for academic institutions and funding sources to consider when allocating limited scientific resources. Much as results in the lab must be repeatable, an algorithm or impartial process for appraising individual faculty research performance across multiple disciplines can deliver valuable insights for long-term strategic planning. Unfortunately, the development of such evaluation practices remains at an embryonic stage.
Several methods have been proposed to assess productivity and impact, but none can be used in isolation. Beyond metrics that assign a single number to an investigator, such as the h-index (the largest number h such that h of a researcher's publications have each received at least h citations) or a collaboration index (which takes into account a researcher's relative contributions to his or her publications), there are additional sources...
Here we propose a new bibliometric method to assess the body of a researcher's published work, based on relevant information collected from the Scopus database and Journal Citation Reports (JCR). This method does not require intricate programming, and it yields a graphical representation of the data that visualizes the publication output of researchers from disparate backgrounds at different stages in their careers. We used Scopus to assess citations of research articles published between 2009 and 2014 by five different researchers, and by one retired researcher over the course of his career since 1996, during which time he was a full professor and chair of his department. These six researchers included molecular biologists, an immunologist, an imaging expert, and a clinician, demonstrating that this approach could level the playing field across diverse disciplines.
The metric we used calculates the impact of a research article as its number of citations divided by the publishing journal's impact factor for that year and by the number of years since the article was published. The higher the value, the greater the work's impact. This value is plotted together with the average impact of all research articles the journal published in that same year (the mean number of citations for those articles, normalized in the same way by the journal's impact factor for that year and the number of years since publication). Publications in journals that rank in the top 50 by impact factor (excluding reviews-only journals) are also noted.
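To make the arithmetic concrete, the short sketch below (in Python, with invented numbers used purely for illustration) computes this normalized impact for a single hypothetical article and for the journal-wide average it would be compared against; the function name and inputs are our own labels, not part of the published method.

```python
def normalized_impact(citations, journal_impact_factor, years_since_publication):
    """Citations divided by the journal's impact factor for the publication
    year, divided by the number of years since the article appeared."""
    return citations / journal_impact_factor / years_since_publication

# Hypothetical article: 24 citations, published in 2011 in a journal whose
# 2011 impact factor was 8.0, evaluated in 2014 (three years later).
article_score = normalized_impact(24, 8.0, 2014 - 2011)          # 24 / 8 / 3 = 1.0

# Journal-wide comparison point: the average research article the journal
# published in 2011 has, say, 16 citations by 2014.
journal_average_score = normalized_impact(16, 8.0, 2014 - 2011)  # about 0.67

print(article_score, journal_average_score)  # here the article outperforms its journal's average
```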
By developing such a graph for each scientist being evaluated, we get a snapshot of his or her research productivity. Across disciplines, the graphs allow comparison of total output (number of dots) as well as impact, providing answers to the questions: Are the scientists' manuscripts being cited more than their peers' in the same journal (red dots above gray)? How many of each researcher's papers were published in leading scientific journals (gold squares)? The method also allows evaluation of early-career scientists and those who are further along in their careers. (See graphs at right, top.) For young researchers, evaluators can easily see if their trajectory is moving upward; for later-stage scientists, the graphs can give a sense of the productivity of their lab as a whole. This can, in turn, reveal whether their laboratory output matches their allocated institutional resources. While the impact factor may be a flawed measurement, using it as a normalization tool helps to remove the influence of the journal, making it possible to see whether the scientific community has reacted to a finding and integrated it into the broader body of knowledge. This strategy also allows for long-term evaluations, making it easy to appreciate the productivity of an individual, in both impact and volume, over the course of his or her career.
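As one way to picture how such a graph might be assembled, here is a minimal plotting sketch (Python with matplotlib, using fabricated records); the colors and markers follow the description above (red for the article, gray for the journal average, a gold square for top-50 journals), but the field names and layout are our own assumptions rather than the authors' actual figure.

```python
import matplotlib.pyplot as plt

def normalized_impact(citations, jif, years_since_publication):
    # citations / journal impact factor (publication year) / article age in years
    return citations / jif / years_since_publication

# Fabricated records: (publication year, article citations, journal impact factor
# for that year, mean citations of the journal's research articles that year,
# whether the journal ranks in the top 50 by impact factor)
records = [
    (2009, 60, 10.0, 40.0, True),
    (2011, 18, 5.0, 20.0, False),
    (2013, 6, 3.0, 4.0, False),
]
evaluation_year = 2015

for year, cites, jif, journal_mean, top50 in records:
    age = evaluation_year - year
    article_val = normalized_impact(cites, jif, age)
    journal_val = normalized_impact(journal_mean, jif, age)
    plt.scatter(year, journal_val, color="gray")   # journal-wide average
    plt.scatter(year, article_val, color="red")    # the researcher's article
    if top50:
        # flag publications in top-50 journals with a hollow gold square
        plt.scatter(year, article_val, marker="s", s=150,
                    facecolors="none", edgecolors="gold")

plt.xlabel("Publication year")
plt.ylabel("Citations / journal impact factor / years since publication")
plt.show()
```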
Assessing research performance is an important part of any evaluation process. While no single bibliometric indicator can give a complete picture of collaboration, impact, and productivity, this method may help to buttress other measures of scientific success.
Ushma S. Neill is director of the Office of the President at Memorial Sloan Kettering Cancer Center (MSKCC). Craig B. Thompson is the president and CEO of MSKCC, and Donna S. Gibson is director of library services at the center.