Ditching Impact Factors for Deeper Data

A team of editors and researchers calls on journal publishers to use citation distributions as measures of publication quality rather than relying on much-derided impact factors.

Bob Grant

Bob started with The Scientist as a staff writer in 2007. Before joining the team, he worked as a reporter at Audubon and earned a master’s degree in science journalism...

Jul 7, 2016

Science journals should calculate more-detailed citation distributions to indicate the impact of individual studies published in their pages instead of relying on the less-transparent journal impact factor (IF), according to researchers and publishers who analyzed the data underlying these scores. “We hope that this analysis helps to expose the exaggerated value attributed to the JIF [journal impact factor] and strengthens the contention that it is an inappropriate indicator for the evaluation of research or researchers,” wrote the authors of a bioRxiv preprint published this week (July 5).

By calculating simple distribution frequencies for citations of papers published in a variety of journals, the team—which included 11 top journal editors from eLife, Science, Nature, PLOS, and others—found that up to 75 percent of the studies in a given journal had lower citation counts than that journal’s IF, which reflects the average number of citations an article in that journal received over the preceding two years.
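The arithmetic behind this finding is straightforward: citation counts are heavily skewed, so the mean (the basis of the IF) sits well above what a typical paper receives. A minimal sketch of the effect, using simulated citation counts rather than real journal data:

```python
# Sketch of why most papers fall below a journal's "average" citation count.
# The citation data here are simulated (lognormal-shaped), not drawn from
# any actual journal; the skewed shape is the point, not the exact numbers.
import random

random.seed(42)

# Simulate citation counts for 1,000 papers with a heavy-tailed distribution.
citations = [int(random.lognormvariate(1.0, 1.2)) for _ in range(1000)]

# The mean plays the role of an impact-factor-like score.
mean_citations = sum(citations) / len(citations)
below_mean = sum(c < mean_citations for c in citations) / len(citations)

print(f"mean (IF-like score): {mean_citations:.1f}")
print(f"fraction of papers below the mean: {below_mean:.0%}")
```

Because the distribution is skewed by a few highly cited papers, well over half of the simulated articles land below the mean, mirroring the pattern the authors report.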

“Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF,” the authors wrote. “We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.”

While officials at Thomson Reuters agreed with most of the arguments made by the authors of the journal IF analysis, they were unwilling to concede that IFs should be done away with completely, according to coauthor Bernd Pulverer, chief editor at The EMBO Journal. “The discussion was actually rather constructive, and Thomson Reuters wanted to continue the dialogue,” he told Science. But “while they agreed to essentially all the key points we made, they did not want to change anything that would collapse journal rankings, as they see this as their key business asset.”
