Preclinical Cancer Studies Not as Reproducible as Thought

Researchers overestimate the reliability of findings from animal studies that are part of the Reproducibility Project: Cancer Biology.

Jun 30, 2017
Jef Akst

Researchers working with the Reproducibility Project: Cancer Biology, a collaboration between the Center for Open Science (COS) and Science Exchange, were unable to replicate the results from six mouse experiments published in top-tier medical journals. But when 196 scientists who were unaware of this were asked to predict what would happen if the experiments were repeated, they predicted a 75-percent probability that the results would be similarly significant and a 50-percent probability that the effect size would be the same, according to a study published in PLOS Biology yesterday (June 29).

“What is surprising here is that researchers are not very accurate, actually they are less accurate than chance, at predicting whether a study will replicate,” Benjamin Neel, director of New York University’s Perlmutter Cancer Center, who was not involved in the research, tells Reuters.

The scientists surveyed for this study included both early-career researchers and more-established investigators, and the team found that the experienced scientists tended to be more accurate in their predictions. This suggests that training and experience could help researchers better interpret published findings.

While the study highlights the long-recognized reproducibility problem in science, it comes on the heels of some promising news in this area: Reproducibility Project researchers were recently able to replicate the findings of two highly cited leukemia studies.

See “Cancer Studies Seem Replicable”

Of course, not all of the work from the Reproducibility Project has yielded such positive results. In fact, the first five replication studies completed by project investigators, published in January, were all largely unsuccessful, yielding different results than the original publications had reported.

See “Replication Complications”

“Probably the biggest reason the studies don’t hold up is because the sample size is too small. For example, if only 5 to 10 mice are used, a 50-animal study might not yield the same result,” Neel tells Reuters. “I think all preclinical results should be validated by an independent laboratory before they are used as the basis for clinical trials.”

See “The Rules of Replication”