
Preclinical Cancer Studies Not as Reproducible as Thought

Researchers overestimate the reliability of findings from animal studies that are part of the Reproducibility Project: Cancer Biology.

Jun 30, 2017
Jef Akst

Squamous cell carcinoma, FLICKR, ED UTHMAN

Researchers working on the Reproducibility Project: Cancer Biology, a collaboration between the Center for Open Science (COS) and Science Exchange, were unable to replicate the results of six mouse experiments published in top-tier medical journals. But when 196 scientists who were unaware of these results were asked to predict what would happen if the experiments were repeated, they estimated a 75 percent probability that the results would be similarly significant and a 50 percent probability that the effect size would be the same, according to a study published in PLOS Biology yesterday (June 29).

“What is surprising here is that researchers are not very accurate, actually they are less accurate than chance, at predicting whether a study will replicate,” Benjamin Neel, director of New York University’s Perlmutter Cancer Center, who was not involved in the research, tells Reuters.

The scientists surveyed for this study included both early-career researchers and more-established investigators, and the team found that the experienced scientists tended to be more accurate in their predictions. This suggests that training could help researchers better interpret published findings.

While the study highlights the long-recognized reproducibility problem in science, it comes on the heels of some promising news in this area: Reproducibility Project researchers were recently able to replicate the findings of two highly cited leukemia studies.

See “Cancer Studies Seem Replicable”

Of course, not all of the work from the Reproducibility Project has yielded such positive results. In fact, the first five replication studies completed by project investigators, published in January, were all largely unsuccessful, yielding results that differed from those reported in the original publications.

See “Replication complications”

“Probably the biggest reason the studies don’t hold up is because the sample size is too small. For example, if only 5 to 10 mice are used, a 50-animal study might not yield the same result,” Neel tells Reuters. “I think all preclinical results should be validated by an independent laboratory before they are used as the basis for clinical trials.”

See “The rules of replication”
