
Lost in Translation

Failure to translate preclinical research to humans may be due in part to biased reporting.

Jul 16, 2013
Ruth Williams

There is excessive reporting of positive results in papers that describe animal testing of potential therapies, mirroring the publication bias seen in clinical research, according to a paper published today (July 16) in PLOS Biology. As a result, many potential therapies move forward into human trials when they probably should not.

“It’s really important [work] in that it gives another explanation for why treatments that appear to work in animals don’t work in humans,” said David Torgerson, director of the York Trials Unit at the University of York in the U.K., who was not involved in the study. “I’ve personally always thought that animal models are potentially not as good as people might assume, but actually that view could be completely wrong, according to this paper.”

Indeed, “many people have argued that maybe there are problems with animal studies—that they cannot capture human physiology and pathophysiology,” said John Ioannidis, a professor of medicine at Stanford University in California, who led the research. “I have believed all along that animal studies should be perfectly fine, if the model is ok. It should be a very decent step toward screening interventions,” he said. Ioannidis was instead worried that an inherent bias toward publishing positive results and suppressing negative or neutral results might create a misleading impression about the effectiveness of interventions, making them destined to fail in human trials.

“We know publication bias happens a lot in clinical trials,” said Torgerson, “so I was surprised at myself for being surprised at the results, because of course, if it happens in human research, why wouldn’t it happen in animal research?”

Ioannidis confirmed his suspicions about publication bias by performing a statistical meta-analysis of thousands of reported animal tests for various neurological interventions—a total of 4,445 reported tests of 160 different drugs and other treatments for conditions that included Alzheimer’s disease, Parkinson’s disease, brain ischemia, and more.

The analysis compared the number of expected significant results—calculated from the results of the largest and most precise individual studies—with the number of observed significant results present in the literature. “We saw that it was very common to have more significant results in the literature compared with what would be expected,” Ioannidis said, “which is a strong signal that that literature is enriched in statistical significance.”
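For a rough sense of how such a comparison works, here is a minimal sketch (illustrative only, not the authors' code): each study's power to detect the effect seen in the largest, most precise study is estimated, those powers are summed to give the expected number of significant results, and a binomial test asks whether the observed count is implausibly higher. The study sizes and effect size below are invented for the example.

import numpy as np
from scipy import stats

def power_two_sample(effect_size, n_per_group, alpha=0.05):
    # Approximate power of a two-sided, two-sample z-test to detect a
    # standardized mean difference of `effect_size` with `n_per_group`
    # animals per arm.
    se = np.sqrt(2.0 / n_per_group)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    shift = effect_size / se
    return (1 - stats.norm.cdf(z_crit - shift)) + stats.norm.cdf(-z_crit - shift)

# Hypothetical meta-analysis: animals per arm in each study, and whether
# each study reported a statistically significant benefit.
n_per_group = [8, 10, 6, 12, 9, 7, 15, 8]
reported_significant = [True, True, True, False, True, True, True, True]

# Plausible "true" effect, anchored to the largest study (assumed value here).
true_effect = 0.5

expected = sum(power_two_sample(true_effect, n) for n in n_per_group)
observed = sum(reported_significant)

# Binomial test: is the observed count of significant studies higher than
# the studies' combined power would predict?
mean_power = expected / len(n_per_group)
result = stats.binomtest(observed, len(n_per_group), mean_power, alternative="greater")

print(f"expected significant: {expected:.1f}, observed: {observed}, p = {result.pvalue:.3f}")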

There are two main reasons why this would happen, said Ioannidis. One, as mentioned, is the suppression of negative results. The second is selective reporting of only the statistical analyses of data that provide a significant score. “Practically any data set, if it is tortured enough, will confess, and you will get a statistically significant result,” said Ioannidis. Not surprisingly, such post hoc massaging of data is scorned in the scientific community.
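A toy simulation makes the second point concrete: if a study measures many outcomes when no true effect exists and reports only whichever one crosses p < 0.05, "significant" findings turn up by chance alone in a majority of studies. The numbers below are invented purely for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_simulations, n_outcomes, n_per_group = 1000, 20, 10

false_positive_studies = 0
for _ in range(n_simulations):
    # Null data: treated and control groups drawn from the same distribution.
    treated = rng.normal(size=(n_outcomes, n_per_group))
    control = rng.normal(size=(n_outcomes, n_per_group))
    p_values = stats.ttest_ind(treated, control, axis=1).pvalue
    if (p_values < 0.05).any():  # report only the "best" outcome
        false_positive_studies += 1

# With 20 independent outcomes, roughly 1 - 0.95**20 (about 64%) of null
# studies yield at least one nominally significant result.
print(f"studies with a 'significant' outcome: {false_positive_studies / n_simulations:.0%}")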

Although the present study focused on animal testing of neurological interventions, a publication bias “almost certainly” applies in other areas of preclinical research, said Bart van der Worp, a neurologist at the Brain Center Rudolf Magnus Institute in Utrecht, The Netherlands, who also was not involved in the study.

So, what can be done to avoid it? “One possibility is to develop registries for all animal studies,” said van der Worp. “Then, if you are working in a specific field at least you’ll know some studies are going on, or have been performed, and may not have been published yet.”

Such registries exist for human clinical trials, so they should not be too difficult to implement, he said. He also suggested establishing a forum where investigators can deposit neutral or negative results in the form of articles, to ensure that such findings are in the public arena rather than hidden. This should help prevent other researchers from pursuing fruitless avenues of research, he said, “which is a waste of animals and a waste of research money.” Not to mention a risk to people enrolled in potentially pointless trials.

 

K.K. Tsilidis et al., “Evaluation of excess significance bias in animal studies of neurological diseases,” PLOS Biology, 11: e1001609, 2013.
