Science journals should calculate more-detailed citation distributions to indicate the impact of individual studies published in their pages instead of relying on the less-transparent journal impact factor (IF), according to researchers and publishers who analyzed the data underlying these scores. “We hope that this analysis helps to expose the exaggerated value attributed to the JIF [journal impact factor] and strengthens the contention that it is an inappropriate indicator for the evaluation of research or researchers,” wrote the authors of a bioRxiv preprint published this week (July 5).
By calculating simple distribution frequencies for citations of papers published in a variety of journals, the team—which included 11 top journal editors from eLife, Science, Nature, PLOS, and others—found that up to 75 percent of the studies in a given journal had lower citation counts than that journal’s IF, which indicates the average number of citations an article in that journal received over the assessment window.
“Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF,” the authors wrote. “We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.”
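The authors' point can be illustrated with a small sketch. The citation counts below are invented for illustration (they are not data from the preprint), but they show the typical skew: because a handful of highly cited papers inflates the mean, most papers in a journal fall below its impact factor, while the full distribution the authors propose publishing makes that skew visible.

```python
from collections import Counter

# Hypothetical citation counts for the articles a journal published in a
# two-year window (illustrative data only, not from the preprint).
citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 40]

# The journal impact factor is essentially the mean citations per item.
jif = sum(citations) / len(citations)  # ~5.87, pulled up by the outlier

# The distribution the authors propose journals publish instead:
# how many papers received 0, 1, 2, ... citations.
distribution = Counter(citations)

# Fraction of papers cited less often than the journal's own IF.
below = sum(1 for c in citations if c < jif) / len(citations)
print(f"JIF = {jif:.2f}; {below:.0%} of papers fall below it")
```

In this toy dataset roughly three quarters of the papers sit below the journal's impact factor, mirroring the "up to 75 percent" figure the analysis reports for real journals.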
While officials at Thomson Reuters agreed with most of the arguments made by the authors of the journal IF analysis, they were unwilling to concede that IFs should be done away with completely, according to coauthor Bernd Pulverer, chief editor at The EMBO Journal. “The discussion was actually rather constructive, and Thomson Reuters wanted to continue the dialogue,” he told Science. But “while they agreed to essentially all the key points we made, they did not want to change anything that would collapse journal rankings, as they see this as their key business asset.”