Last month, researchers launched a new initiative that allows scientists to pay to have their data validated by an independent source before or after publication. Known as the Reproducibility Initiative (RI), the program was hailed by many in the scientific community as an answer to the growing number of irreproducible experiments and retractions. But will it solve the problem?
The RI plans to match researchers with independent third parties to repeat their experiments, then give scientists the option of publishing those validation studies alongside the original experiments in PLOS ONE. The initiative’s founders claim that such authentication will identify and commend researchers who produce high-quality, reproducible research, while helping to stem the rising number of retractions.
But Kent Anderson, chief executive officer and publisher of the Journal of Bone & Joint Surgery, doesn’t believe that the initiative is up to the task. On the blog The...
Costs aside, Anderson questioned whether the validation studies will add any value to the research, calling them “redundant publications.” He also noted that the initiative has weak support from publishers, like Nature and The Rockefeller University Press, which only offer to link to validated studies hosted on an independent server; only PLOS ONE would publish the study in its entirety alongside the original manuscript.
The RI is “proposing to reinvent science itself,” Anderson concluded in his blog post. “But this time, with certificates”—a feature many feel is of little worth, considering that repetition and validation are already integral to the scientific process, with subsequent studies building upon research findings and repeatedly testing particular hypotheses.
Anderson and others argue that the RI is merely treating a symptom of a much bigger problem—an unhealthy scientific process. If the recent rise in retractions is any indication, “then we need to worry that the self-correcting mechanisms of science aren't keeping up with the number of unintentional errors and out-and-out fraud,” said Ivan Oransky, co-founder of the blog Retraction Watch and former deputy editor of The Scientist. The question is: What is driving this failure?
Last year, Arturo Casadevall, editor in chief at mBio, and Ferric Fang, editor in chief at Infection and Immunity, reviewed what they perceived as the methodological, cultural, and structural problems in US biomedical research, and asserted that the problems with science may run much deeper than simple validation studies can reach. Casadevall and Fang refer to the scientific enterprise as a pyramid scheme, with a small number of principal investigators overseeing a vast population of research scientists, postdocs, and students with poor chances of career progression. This, along with the “publish or perish” mentality, puts extreme pressure on researchers to produce high-profile results, motivating scientific misconduct. Casadevall and Fang also cite a winner-takes-all system and the priority rule, both of which unjustly reward the first to publish or announce research findings. The authors propose that one root of all these issues may be an inadequate level of government funding, which may ultimately drive poor practice in the form of fraudulent or even sloppy science.
One possible solution, then, would be to increase funding, thereby reducing excessive competition and alleviating some of the pressures that may drive some researchers to publish dubious results. “If you take care of the other problems, then much of the reproducibility issues will be solved,” said Vincent Racaniello, a microbiology and immunology professor at Columbia University in New York.
But Iorns disagrees that more money will solve the problem. The RI “will still be required no matter what the funding situation is,” she said. Iorns does agree that there are issues the RI cannot address, such as the need to share data openly; the initiative is just one piece of a larger puzzle, she argued. A well-rounded effort to clean up science might also include, for example, increased support for open lab book practices, in which a project’s methods and results are freely available online to anyone who wants to try to reproduce the experiments.
Oransky agreed that a multi-pronged approach to science’s irreproducibility problem is the way to go. “It's always risky to talk about a single answer to any problem, particularly one as complex as the one the [RI] is trying to solve,” he said.