A new way of evaluating academics’ research output using easily obtained data
January 1, 2015
It can often be difficult to gauge researcher productivity and impact, yet these measures of effectiveness are important for academic institutions and funding sources to consider when allocating limited scientific resources. Much as results in the lab must be repeatable, an algorithm or impartial process for appraising individual faculty research performance across multiple disciplines can deliver valuable insights for long-term strategic planning. Unfortunately, the development of such evaluation practices remains at an embryonic stage.
Several methods have been proposed to assess productivity and impact, but none can be used in isolation. Beyond assigning a single number to an investigator, such as the h-index (the largest number h such that h of a researcher's publications have each received at least h citations) or a collaboration index (which weighs a researcher's relative contributions to his or her publications), there are additional sources of data that should be considered. At our institution, Memorial Sloan Kettering Cancer Center (MSKCC) in New York City, there is an emphasis on letters of recommendation received from external expert peers, funding longevity, excellence in teaching and mentoring, and the depth of a faculty member's CV. For clinicians, additional assessments of patient load and satisfaction are also taken into consideration by our internal committees evaluating promotions and tenure. Other noted evaluation factors include the number of reviews and editorials an individual has been invited to author; frequency of appearance as first, middle, or senior author in collaborations; the number of different journals in which the researcher has published; media coverage of his or her work; and the number of published but never-cited articles.
Here we propose a new bibliometric method to assess the body of a researcher's published work, based on relevant information collected from the Scopus database and Journal Citation Reports (JCR). This method does not require intricate programming, and it yields a graphical representation of data to visualize the publication output of researchers from disparate backgrounds at different stages in their careers. We used Scopus to assess citations of research articles published between 2009 and 2014 by five different researchers, and by one retired researcher over the course of his career since 1996, during which time this individual was a full professor and chair of his department. These six researchers included molecular biologists, an immunologist, an imaging expert, and a clinician, demonstrating that this approach can level the playing field across diverse disciplines.
The metric we used calculates the impact of a research article as its number of citations divided by the publishing journal’s impact factor for that year, divided by the number of years since the article was published. The higher the number, the greater the work’s impact. This value is plotted together with the average impact of all research articles the journal published in that same year (average number of citations for all research articles published that year divided by the journal impact factor for that year divided by the number of years since publication). Publications in journals that rank in the top 50 by impact factor (not including reviews-only journals) are also noted.
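As a rough illustration, the calculation described above can be sketched in a few lines of Python. The numbers used here are invented for the example, not drawn from Scopus or JCR, and the function names are ours.

```python
from dataclasses import dataclass

@dataclass
class Article:
    citations: int               # citations the article has received (e.g., from Scopus)
    journal_if: float            # journal impact factor for the publication year (from JCR)
    years_since_pub: int         # years elapsed since publication
    journal_avg_citations: float # mean citations of all research articles the journal published that year

def article_impact(a: Article) -> float:
    """Normalized impact of the article: citations / journal IF / years since publication."""
    return a.citations / a.journal_if / a.years_since_pub

def journal_baseline(a: Article) -> float:
    """Average impact of the journal's research articles from the same year,
    normalized the same way, for side-by-side plotting."""
    return a.journal_avg_citations / a.journal_if / a.years_since_pub

# Hypothetical article: 30 citations, journal IF 5.0, published 3 years ago,
# in a journal whose articles from that year average 18 citations.
a = Article(citations=30, journal_if=5.0, years_since_pub=3, journal_avg_citations=18.0)
print(article_impact(a))    # 2.0 -- the article
print(journal_baseline(a))  # 1.2 -- the journal average for that year
```

Because the article's normalized impact (2.0) exceeds the journal baseline (1.2), this paper would plot above its journal's average, the pattern the graphs are designed to surface.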
By developing such a graph for each scientist being evaluated, we get a snapshot of his or her research productivity. Across disciplines, the graphs allow comparison of total output (number of dots) as well as impact, providing answers to the questions: Are the scientists’ manuscripts being cited more than their peers’ in the same journal (red dots above gray)? How many of each researcher’s papers were published in leading scientific journals (gold squares)? The method also allows evaluation of early-career scientists and those who are further along in their careers. (See graphs at right, top.) For young researchers, evaluators can easily see if their trajectory is moving upward; for later-stage scientists, the graphs can give a sense of the productivity of their lab as a whole. This can, in turn, reveal whether their laboratory output matches their allocated institutional resources. While the impact factor may be a flawed measurement, using it as a normalization tool helps to remove the influence of the journal, and one can visualize whether the scientific community reacts to a finding and integrates it into scientific knowledge. This strategy also allows for long-term evaluations, making it easy to appreciate the productivity of an individual, in both impact and volume, over the course of his or her career.
Assessing research performance is an important part of any evaluation process. While no bibliometric indicator alone can give a full picture of collaboration, impact, and productivity, this method may help to buttress other measures of scientific success.
Ushma S. Neill is director of the Office of the President at Memorial Sloan Kettering Cancer Center (MSKCC). Craig B. Thompson is the president and CEO of MSKCC, and Donna S. Gibson is director of library services at the center.
January 26, 2015
Yet another attempt to cosset the intellectually lazy administrator. These pseudo-quantitative "productivity measures" are a detriment to scientific excellence. The very concept of a "top journal" has been thoroughly debunked for years -- there is NO correlation between the impact of an individual paper and that of the journal. Please try to find something useful to do.
February 20, 2016
Disagree with Jakepgh - this sounds potentially useful. But viewing graphs too iffy. How about calculating for each paper (I/J)/(1+logG) where I = individual impact, J = Journal impact and G is years since publication. This normalizes for field size and for researcher age to some extent. Playing around with this though it looks as if it could encourage researchers to publish highly citeable work in lower impact or smaller journals to leverage I/J.
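For concreteness, the commenter's proposed score could be sketched as below. The comment does not specify a log base, so the natural log here is an assumption, and the function name is ours.

```python
import math

def proposed_score(i: float, j: float, g: float) -> float:
    """Commenter's suggestion: (I/J) / (1 + log G), where I is the paper's
    citations, J the journal impact factor, and G the years since publication.
    Natural log assumed (base unspecified in the comment); requires G >= 1
    so the denominator stays positive."""
    return (i / j) / (1.0 + math.log(g))

# For a one-year-old paper (G = 1, log G = 0), the score reduces to I/J:
print(proposed_score(30, 5, 1))  # 6.0
```

Note the leverage effect the commenter worries about: because J sits in the denominator, the same citation count I yields a higher score in a lower-impact journal.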