Recent years have seen increasing numbers of retractions, higher rates of misconduct and fraud, and general problems of data irreproducibility, spurring the National Institutes of Health (NIH) and others to launch initiatives to improve the quality of research results. Yesterday (April 7), at this year’s American Association for Cancer Research (AACR) meeting, researchers gathered in San Diego, California, to discuss why these problems have come to a head—and how to fix them.
“We really have to change our culture and that will not be easy,” said Lee Ellis from the University of Texas MD Anderson Cancer Center, referring to the immense pressure researchers often feel to produce splashy results and publish in high-impact journals. Ellis emphasized that it is particularly important in biomedical research to ensure that the data coming out of basic research studies—which motivate human testing—are accurate. “Before we start a clinical trial, we’d...
C. Glenn Begley, chief scientific officer of TetraLogic Pharmaceuticals and former vice president of hematology and oncology research at Amgen, discussed a project undertaken by Amgen researchers to reproduce the results of more than 50 published studies. The vast majority were irreproducible, even by the original researchers who had done the work. “That shocked me,” he said.
William Sellers, global head of oncology at Novartis Institutes for Biomedical Research, described a similar experience. In addition to being unable to reproduce the majority of published experiments they attempted, Sellers and his colleagues got startling results when they began to verify the cell lines they purchased, finding that several commonly used lines were actually a different cancer type than their labels indicated.
And these were just a few of the myriad problems that plague the literature, the experts noted. Lack of blinding or controls, unvalidated reagents, and inappropriate statistical tests were also common in the top-tier publications the researchers surveyed, not to mention the rising rates of research misconduct.
As for the cause of these problems, the panelists cited pressure from journals to tell nicely packaged stories, a professional culture that emphasizes high-impact publications, and the ongoing funding strain. “Right now, we have a system that I think is an unprecedented scientific enterprise, but by under-resourcing it, we’re placing it under enormous pressure,” said Ferric Fang of the University of Washington, who has studied rates and causes of retractions and misconduct.
The discussants offered a handful of possible solutions. For reagents and cell lines, Sellers suggested a Wikipedia-like reporting system through which properties could be recorded and verified. And for all the thousands of publications that have used inappropriate or mislabeled materials, retractions may not be practical, he noted, suggesting instead some sort of flagging system on PubMed that could alert readers to potential problems.
And when it comes to outright misconduct, which has been on the rise in recent years, Ellis argued for more stringent consequences. “The punishment of being found guilty of misconduct is relatively light,” he said. “For those found guilty of fraud . . . you should be out [of science], that’s my personal feeling.”
Whatever the solution, the panelists agreed that something needs to happen—and soon. “Our ability to take a drug from concept to FDA [Food and Drug Administration] approval is very poor,” said Ellis. “In the field of cancer, only about 5 percent of drugs that start end up with FDA approval. To improve upon this dismal 5 percent success rate, we really need to have more confidence in our data.”