An initiative to replicate key findings in cancer biology yields a preliminary conclusion: it’s difficult.
January 18, 2017
Five papers published in eLife this week (January 19) provide the first results of the Reproducibility Project: Cancer Biology—a collaborative effort between the Center for Open Science (COS) and Science Exchange that aims to independently replicate experiments from high-profile cancer biology papers. The results reveal that, for an array of technical and other reasons, reproducing published results is challenging.
“This is an extremely important effort. Even though the published results pertain to only a small set of the larger project, the picture is convincing that reproducibility in cancer biology is very difficult to achieve,” said Stanford University School of Medicine’s John Ioannidis, who was not involved with the project. “I see the results not in a negative way . . . but as a reality check and as an opportunity to move in the right direction, which means more transparency, more openness, more detailed documentation of [methodology], and more honesty with ourselves.”
Researchers launched the Reproducibility Project: Cancer Biology in 2013 with a goal of independently replicating a subset of experiments from 50 of the most impactful cancer biology papers published between 2010 and 2012. (For financial reasons, the team has, to date, scaled the analysis down to roughly 30 papers.)
Following the results of a similar project from the COS—the Reproducibility Project: Psychology—in which just one-third to one-half of the examined results were successfully replicated, the expectation for this new project was that replication would be similarly arduous, said Brian Nosek, cofounder and executive director of the COS, who led both projects.
“So the goal of the project was to maximize the quality of the methodology so that we could learn as much as possible,” he told The Scientist. After all, “if we just do a bad job of the experiments then it is not interesting to fail to replicate things.”
To boost their chances of replication success, the researchers contacted the authors of the original papers for advice and—where possible—reagents, wrote detailed protocols that were then reviewed by those authors, and submitted the protocols and experimental plans to eLife for peer review and publication as “registered reports.”
Despite these preparatory efforts, few of the experiments reported in the first five replication studies yielded results equivalent to those of the original studies. In some cases, the new experiments produced different or less statistically meaningful results than the originals; in others, the replication results were confounded by, among other things, differences in how particular cells or model mice behaved between the original and new studies.
In emails to The Scientist, authors of the original studies highlighted potential reasons for some of the discrepancies, such as suspected differences in sample preparations, the use of different statistical methods, and more.
Putting individual labs in the hot seat is not the point of these replication studies, said Lawrence Tabak, Principal Deputy Director of the National Institutes of Health, who was not involved with the project. “I’m particularly worried that people will conflate difficulties in reproducibility with things that are nefarious, like misconduct and so forth. That’s not what this is about.” Instead, he said, “efforts like this reproducibility project really underscore how incredibly complicated biomedical research is.”
To improve reproducibility in science through promoting access to methodological details and raw data, the COS has created a series of “transparency and openness promotion” (TOP) guidelines for journals, funders, and scholarly societies to adopt. More than 750 journals and organizations are now TOP signatories, according to Tim Errington, metascience manager at the COS, who co-led the new project. But, he said, cultural change is needed.
The way we reward scientists is “all based on outcomes,” he said. “How many publications did you get? . . . What journals are they in? But we’re not scientists because we know what the outcome is.” Reproducibility should be rewarded, Errington said.
Ultimately, Tabak noted that such a shift is “a shared responsibility” between the funding agencies that invest in replication studies, the journals that promote and reward transparency, and scientists themselves.
Ioannidis agreed: “The more buy-in you get from these stakeholders, the more likely it is that things really will get better.”
F. Aird et al., “Replication study: BET bromodomain inhibition as a therapeutic strategy to target c-Myc,” eLife, 6:e21253, 2017.
S. Horrigan et al., “Replication study: Melanoma genome sequencing reveals frequent PREX2 mutations,” eLife, 6:e21634, 2017.
S. Horrigan et al., “Replication study: The CD47-signal regulatory protein alpha (SIRPa) interaction is a therapeutic target for human solid tumors,” eLife, 6:e18173, 2017.
I. Kandela et al., “Replication study: Discovery and preclinical validation of drug indications using compendia of public gene expression data,” eLife, 6:e17044, 2017.
C. Mantis et al., “Replication study: Coadministration of a tumor-penetrating peptide enhances the efficacy of cancer drugs,” eLife, 6:e17584, 2017.
B.A. Nosek and T.M. Errington, “Making sense of replications,” eLife, 6:e23383, 2017.
Correction (January 18): The Reproducibility Project: Cancer Biology launched in 2013, not in 2014, as was previously written. The Scientist regrets the error.