On Thursday (October 25), the blog Retraction Watch, which tracks problematic scientific literature, released an online database of more than 18,000 papers and conference materials that have been retracted since the 1970s.
The journal Science partnered with Retraction Watch to analyze the catalog. The upshot, Science concludes, is that although the number of retractions per year has risen in recent decades, that might reflect more policing of science.
“Retractions have increased because editorial practices are improving and journals are trying to encourage editors to take retractions seriously,” Nicholas Steneck, a research ethics expert at the University of Michigan, tells Science.
Retraction Watch (RW), founded in 2010 by journalists Ivan Oransky and Adam Marcus, has been amassing the database for several years, the blog says in its announcement. It is the most extensive source of retraction data in existence.
See “Top Retractions of 2017”
The number of retractions has increased in recent years, "from fewer than 100 annually before 2000 to nearly 1000 in 2014," Science reports, amounting to roughly 4 retractions for every 10,000 papers published. Yet the number of retractions per journal per year has been fairly steady since 1997, and the annual total has largely leveled off since 2012.
Although some retractions stem from scientists' honest mistakes, more than half of the withdrawals involve dishonest practices, Science reports, namely "fabrication, falsification, or plagiarism." Another 10 percent of retractions were attributed to other unethical behavior. A quarter of all retractions traced back to just 500 authors, usually for deliberate misconduct.
Judging by both retractions per paper published and retractions per dollar of research funding, Iran and Romania have the highest retraction rates. In Romania, that may partly reflect the efforts of a watchdog group called PANDORA, Oransky suggests in his contribution to Science's report. Iran's high rate may stem from scandals involving fake peer review, in which scientists approve their own papers by posing as other reviewers, though the figure may also be skewed because the analysis considered only papers published in English.