Scientists are universally familiar with the Impact Factor, even if they’re often frustrated with how it can be manipulated and misused. More recently, Ferric Fang and Arturo Casadevall have introduced the idea of the Retraction Index, a measure of how many papers journals retract for every 1,000 they publish. As science journalists who have spent the last 2 years closely monitoring retractions, we think this is a great idea.
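The Retraction Index described above reduces to simple arithmetic. A minimal sketch, using made-up journal figures for illustration (the function name and numbers are ours, not Fang and Casadevall's):

```python
def retraction_index(retractions: int, papers_published: int) -> float:
    """Retractions per 1,000 papers published over the same period."""
    if papers_published <= 0:
        raise ValueError("papers_published must be positive")
    return 1000 * retractions / papers_published

# Hypothetical example: 4 retractions out of 25,000 papers published.
print(retraction_index(4, 25_000))  # -> 0.16
```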
Last year, in a post on our blog Retraction Watch, we recommended that journals publicize their Retraction Indices just as they trumpet their Impact Factors. It’s unlikely many will take us up on the suggestion, but we’ll go once more into the breach anyway and suggest another metric of journal performance: the Transparency Index.
Regardless of what metric scientists use to rank journals, one of the reasons they read the top-ranked journals is their sense that the information in those pages can be trusted.
We understand—in theory, at least—why some journals and editors might be reluctant to share the details of a retraction with their readers. Sometimes the problems involve shoddy reviews, failure to check a manuscript for evidence of plagiarism or duplicate publication, or other avoidable mistakes.
But lack of transparency serves only to reinforce a sense of incompetence. Journals and editors willing to pull aside the curtain to show readers what went wrong with a particular article or group of articles send the messages that 1) they care about conveying truth to their audiences; 2) they are committed to producing a high-quality publication; and 3) potential fraudsters are not welcome in their pages.
Our hope is to turn the above criteria into a numerical metric that can give authors and readers a sense of a journal’s transparency. How much can they trust what’s in its pages? Help us refine the Transparency Index at retractionwatch.wordpress.com/transparencyindex. The number, however, will just be an indicator. Scientists’ judgment will still be the most important factor.