With the release of the 2013 Thomson Reuters journal impact factor list comes news that a record number of journals have been excluded from the list for attempting to rig their ratings. This year, 66 journals—including 37 first-time offenders—will not appear on the annual list, which measures the average number of times papers from individual journals are cited. Reasons for exclusion include excessive self-citation and “citation stacking,” a ploy in which journals cite one another to an excessive degree. Thomson Reuters, the publishing giant behind the list, claims such self-citation distorts the rankings. The company says that excluded journals will be reconsidered for inclusion after 2 years.
The number of excluded journals is a tiny fraction—0.5 percent—of the 10,853 journals that received a ranking, including 379 journals that are receiving their first impact factor. And while some journals jockey for position on the citation...
Biologist George Lozano, who recently analyzed citations in 29 million papers published over the past century, poked holes in the usefulness of the impact factor in his June 8 blog post for the London School of Economics: “Among top papers, the proportion NOT published in top journals was decreasing, but now it is increasing,” he wrote. “Hence, the best (i.e., most cited) work now comes from increasingly diverse sources, irrespective of the journals’ impact factors.”
Science’s recently retired editor-in-chief, Bruce Alberts, wrote in a May editorial for the journal that striving to boost impact factor can “bias journals against publishing important papers in fields (such as social sciences and ecology) that are much less cited than others (such as biomedicine).” He also argued that focusing too heavily on impact factors creates “a strong disincentive to pursue risky and potentially groundbreaking work.”