Solving Irreproducible Science

Will the recently launched Reproducibility Initiative succeed in cleaning up research and reducing retractions?

September 26, 2012

Image: Flickr, U.S. Army Research, Development and Engineering Command

Last month, researchers launched a new initiative that allows scientists to pay to have their data validated by an independent source before or after publication. Known as the Reproducibility Initiative (RI), the program was hailed by many in the scientific community as an answer to the growing number of irreproducible experiments and retractions. But will it solve the problem?

The RI plans to match researchers with independent third parties who will repeat their experiments, and then to give scientists the option of publishing those validation studies alongside the original experiments in PLOS ONE. The initiative’s founders claim that such authentication will identify and commend researchers who produce high-quality, reproducible research, while helping to stem the rising number of retractions.

But Kent Anderson, chief executive officer and publisher of the Journal of Bone & Joint Surgery, doesn’t believe that the initiative is up to the task. On the blog The Scholarly Kitchen, Anderson attacked both the RI’s methods and the incentives it uses to encourage researcher participation, noting that the program would be costly and would divert resources from more important issues. However, Elizabeth Iorns, an RI advisory board member and the chief executive officer at Science Exchange, the online marketplace that will service the RI, counters that each verifying lab works on a cost-recovery basis and that only the most critical experiments are selected for validation, making the RI highly cost effective.

Costs aside, Anderson questioned whether the validation studies will add any value to the research, calling them “redundant publications.” He also noted that the initiative has only weak support from publishers: Nature and The Rockefeller University Press, for example, offer merely to link to validated studies hosted on an independent server, while PLOS ONE alone would publish the validation study in its entirety alongside the original manuscript.

What the RI is “proposing to reinvent is science itself,” Anderson concluded in his blog post. “But this time, with certificates.” Many feel such certificates are of little worth, considering that repetition and validation is already an integral part of the scientific process, with subsequent studies building upon research findings and repeatedly testing particular hypotheses.

Anderson and others argue that the RI is merely treating a symptom of a much bigger problem—an unhealthy scientific process. If the recent rise in retractions is any indication, “then we need to worry that the self-correcting mechanisms of science aren't keeping up with the number of unintentional errors and out-and-out fraud,” said Ivan Oransky, co-founder of the blog Retraction Watch and former deputy editor of The Scientist. The question is: What is driving this failure?

Last year, Arturo Casadevall, editor in chief of mBio, and Ferric Fang, editor in chief of Infection and Immunity, reviewed what they perceived as the methodological, cultural, and structural problems in US biomedical research, and asserted that the problems with science may go much deeper than simple validation studies will be able to reach. Casadevall and Fang liken the scientific enterprise to a pyramid scheme, with a small number of principal investigators overseeing a vast population of research scientists, postdocs, and students with poor chances of career progression. This, along with the “publish or perish” mentality, puts extreme pressure on researchers to produce high-profile results, motivating scientific misconduct. Casadevall and Fang also cite a winner-takes-all system and the priority rule, both of which unjustly reward the first to publish or announce research findings. The authors propose that one root of all these issues may be an inadequate level of government funding, which may ultimately drive poor practice in the form of sloppy or even fraudulent science.

One possible solution, then, would be to increase funding, thereby reducing excessive competition and alleviating some of the pressures that may drive researchers to publish dubious results. “If you take care of the other problems, then much of the reproducibility issues will be solved,” said Vincent Racaniello, a microbiology and immunology professor at Columbia University in New York.

But Iorns disagrees that more money will solve the problem. The RI “will still be required no matter what the funding situation is,” she said. Iorns does agree that there are a number of issues the RI cannot address, such as the need to share data openly, but it is just one piece of a larger puzzle, she argued. A well-rounded effort to clean up science might also include, for example, increased support for open lab book practices, in which a project’s methods and results are freely available online to anyone who wants to try to reproduce the experiments.

Oransky agreed that a multi-pronged approach to science’s irreproducibility problem is the way to go. “It's always risky to talk about a single answer to any problem, particularly one as complex as the one the [RI] is trying to solve,” he said.


Comments

38Murphy

September 27, 2012

This initiative addresses an issue that is not critical to the surge in retractions. The surge, which often involves journals with a high impact factor, is primarily due to scientific misconduct. The high value placed on publishing in these journals is a driving force, as publication in them increases a researcher's prestige at the institutional level and earns significant accolades from the review committees of various granting agencies. So, perhaps the pressure to "succeed," as defined by publishing in "high value" journals, is too great for many individuals to resist, leading to behavior of a misconduct nature.

This initiative is essentially addressing a problem that it is not inherently designed to solve, and furthermore, it just sounds nice to the public and government officials. In the end, we all know that science is deemed correct through many individuals repeating similar experiments over the years. The collective building of the literature on a topic strengthens the position of the original research. This initiative, while perhaps meant to be a positive force in science, is merely a misplaced effort to address a very simple point: the pressure to succeed causes many people to cheat in science. We have collectively created this environment by looking down our noses at those publishing in low-impact-factor journals without ever examining the work. In essence, we started to focus on marketing, not substance, which has changed the course of science and the behavior of those who work in the field.

Mike Anson

September 29, 2012

"... repetition and validation is already an integral part of the scientific process ... ?" Get real! No one has funding to spare to validate previously published research, these days. However, more funding is not the answer: ending the pyramid scheme (10 pre-docs supporting 5 post-docs supporting 1 grantee) is going to be required.

Tim Vines

October 1, 2012

I'll repeat what I said on Kent Anderson's article on the Scholarly Kitchen: this initiative sounds very promising, and any attention drawn to the reproducibility problem is good attention. However, if reproducibility is so important to PLoS ONE, why don't they introduce a data archiving policy that makes all authors put their data on a public archive or as supp mat at acceptance? This would allow the research community to evaluate the reproducibility of all 17,000 papers they publish this year, and not just the 50 or so papers that end up in this initiative.

EllenHunt

October 1, 2012

This article is better than the current one in The Scientist. http://www.guardian.co.uk/busi...

EllenHunt

October 1, 2012

Google for:
The drugs don't work: a modern medical scandal by Goldacre.

EllenHunt

October 1, 2012

Data transparency and publishing. Requiring that authors reveal whether they conducted other studies they aren't talking about. Laws requiring scientists who receive federal funding to be audited on some kind of statistical quality-control basis. And listening to graduate students. Grad students know which research coming out of their labs is falsified. Postdocs do as well. Grad students who talk about it should receive priority for placement and funding elsewhere.

Dr. Matt

October 1, 2012

However, grad students and postdocs are often the source of the falsified data. http://ori.hhs.gov/case_summar...

John Mulligan

October 1, 2012

I think the Reproducibility Initiative is a great experiment, one that is worth doing. Recent publications about the reproducibility of high-profile research reported in the best journals suggest that current mechanisms are not working perfectly. That said, I agree that the RI will not solve all the structural problems with the scientific enterprise.

Added resources for science will not solve the problems either.

The resource problem is easy to understand from an ecological perspective: we have an organism (a faculty member) that reproduces (trains a grad student or postdoc) every year or so over a 30-40 year period. Most of the offspring survive and compete for a faculty position with the same reproductive characteristics. The result is exponential growth, a situation that never lasts long in the real world. No society can support long-term exponential growth at this rate (think about a pile of postdocs the size of Jupiter), so simply adding resources is not a solution to the problem. The ecological perspective suggests one kind of structural change that will help: we need to direct most scientists into positions that are independent but not reproductive. We need to shift a significant fraction of research funding to PIs that don't train postdocs or grad students. Competitive, independent, soft-money positions for many of the best scientists would help move science forward without the inevitable limitations of exponential growth.
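To put rough numbers on that growth, here is a minimal back-of-envelope sketch in Python. The one-trainee-per-year rate, the 35-year career span, and the assumption that every trainee becomes a PI with no attrition are simplifications for illustration, not figures taken from any study:

```python
# Back-of-envelope model of the "reproductive faculty" dynamic described above.
# Illustrative assumptions: each PI trains one new scientist per year, careers
# last 35 years, and every trainee eventually becomes a PI (no attrition).

def faculty_population(years, career_span=35, trainees_per_year=1):
    """Return the faculty head count after `years`, starting from a single PI."""
    ages = [1] + [0] * (career_span - 1)  # ages[a] = PIs who are a years into their career
    for _ in range(years):
        new_hires = sum(ages) * trainees_per_year  # every active PI trains one person
        ages = [new_hires] + ages[:-1]             # cohorts age one year; the oldest retire
    return sum(ages)

for y in (5, 10, 20):
    print(y, faculty_population(y))
# 5 -> 32, 10 -> 1024, 20 -> 1048576: the head count doubles every year
# while careers last, so no plausible funding increase can keep pace.
```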

FJScientist

October 1, 2012

I am intrigued by the concept, since I sense that poorly repeated studies are a problem. But if you ask me to take a side today, I would say 'I foresee problems.' Maybe the best way forward is what is currently done: if others find our results interesting enough to follow up, they will repeat them in due course.
First of all, there are some very complex studies out there that would require, for example, the transfer of multiple mouse colonies to an independent laboratory in order to conduct the necessary crosses. For that matter, what if this is, say, an 'aging' study conducted over a period of 2-3 years? When do we say 'independent repetition would delay the publication of these findings, which would impede scientific advancement'?
Still, I believe there to be a real issue, at least in the health-affiliated biologic sciences. Based upon the manuscripts and grants I review and the reports I hear from other laboratories, a surprising fraction of published material never had the multiple independent repetitions that are (or rather should be) a prerequisite for manuscript submission and statistical analysis. I perceive that as a failure of peer review. But it also speaks volumes about the neglect of rigorous scientific process in some laboratories.
Requiring outsourced repetition will make people aware that repetition is mandatory. But there are details of implementation that need to be considered. Will there be laboratories that simply get their 'sloppy,' unsubstantiated data worked up in a meaningful way by a rigorous laboratory? At that point, whose 'finding' is it? At what stage does a rigorous laboratory get tired of not being able to replicate the findings of others and opt out of this process? Do we rapidly spiral down to sloppy labs replicating the slop of other labs? How many replicates will the outsourcing lab do? What if the results are found to be in error, but the repeating laboratory finds something else that is interesting? Whose study is it at that point?
So, after considering all of these angles, I would have to say that outsourced repetition may be counter-productive; instead, stricter enforcement of rigorous scientific procedures should be implemented at the peer-review stage.

Chris Muller

October 2, 2012

Throwing more money at a problem in the belief that it will be resolved, and then asking funding agencies for more of the taxpayers' money to demonstrate that one is using that money properly and that one's results are reproducible. Hmm.... What will keep these testing agencies honest? If agency X finds that Top Scientist's work is not reproducible, will they get Top Scientist and his/her friends' business again? Or are they planning another initiative to regulate THAT? I thought PLOS ONE was a good journal.

Well, live and learn. Anyway, I think all journals need to start something like a comments section beneath ALL the papers they publish online, with the same kinds of options and FAQ hints that many websites like Yahoo! have to help people classify their comments. Those simply writing in to blow kisses to the authors for confirming their results can post under the "love" column; others who have questions about experiments and data shown (or not shown) can post under another column, and so on. There would of course be rules to ensure no obscenity and no bad language, much like the guidelines implemented by many discussion forums these days.

Connor Bamford

October 3, 2012

Tim, thanks for your point. I do agree that the RI is an interesting idea, and its backers are among the few actually doing something about the problem. What I think a number of people take issue with is that if you determine the root cause of the lack of reproducibility, it may be more efficient and economical to target that instead. I think everyone is glad that somebody is taking this issue seriously. Let's see how the RI proceeds over the next year. To note, the RI, along with FigShare, is supporting a data archiving policy.

Connor Bamford

October 3, 2012

In a perfect world, repetition and validation are among the pillars of the scientific method. Whether people follow that or not is a different matter. A lot of research builds upon previous work, and maybe people don't replicate experiments exactly, but they do extend those earlier findings. If the findings prove to be incorrect, they will be found out.
