
Stats Are Right Most of the Time

A new analysis suggests that only 14 percent of published biomedical results are wrong, despite prominent opinions to the contrary.

By | January 28, 2013


Using a mathematical model that estimates false positive rates among published p-values, two researchers have found that only 14 percent of statistically significant results may be false, despite claims by some critics that a vast majority of biomedical research is erroneous. The study, which was posted on the open-access preprint archive arXiv.org, follows a series of claims beginning with a 2005 essay in PLOS Medicine that “most current published research findings are false” due to small study sizes and bias.
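The preprint's model works with p-values extracted from published abstracts, but the intuition behind a "science-wise" false positive rate can be conveyed with a simple simulation. The sketch below is illustrative only: the share of true nulls, the effect size, and the per-group sample size are arbitrary assumptions, not figures from the study. It simulates many two-group experiments and reports what fraction of the nominally significant results are in fact false positives.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_tests = 20_000       # hypothetical number of published hypothesis tests
prop_null = 0.5        # assumed fraction of tests where the null hypothesis is true
effect = 0.5           # assumed standardized effect size when a real effect exists
n_per_group = 40       # assumed sample size per group
alpha = 0.05

is_null = rng.random(n_tests) < prop_null
shifts = np.where(is_null, 0.0, effect)

# Simulate a two-sample experiment for every test at once.
group_a = rng.normal(0.0, 1.0, size=(n_tests, n_per_group))
group_b = rng.normal(shifts[:, None], 1.0, size=(n_tests, n_per_group))

p_values = stats.ttest_ind(group_a, group_b, axis=1).pvalue

significant = p_values < alpha
fdr = is_null[significant].mean()
print(f"Share of 'significant' results that are false positives: {fdr:.1%}")
```

Raising the assumed share of true nulls or shrinking the sample size drives the simulated false positive fraction upward, which is the mechanism behind the original "most findings are false" argument.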

“Our results suggest that while there is an inflation of false positive results above the nominal 5% level, but [sic] the relatively minor inflation in error rates does not merit the claim that most published research is false,” the authors wrote in the study’s discussion.

They conclude that their study, which extracted 5,322 p-values from 77,430 published biomedical study abstracts, upholds the reliability of biomedical literature and scientific progress. They do acknowledge, however, that it is not the last word on the matter. The results “do not invalidate the criticisms of standalone hypothesis tests for statistical significance that were raised,” the authors wrote. “Specifically, it is still important to report estimates and confidence intervals in addition to or instead of p-values when possible so that both statistical and scientific significance can be judged by readers.”
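The recommendation quoted above is straightforward to follow in practice. A minimal sketch, using made-up data for a hypothetical two-group comparison, might report an effect estimate and a 95% confidence interval alongside the p-value like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical two-group comparison (values are simulated for illustration).
control = rng.normal(10.0, 2.0, 30)
treated = rng.normal(11.2, 2.0, 30)

diff = treated.mean() - control.mean()
va = treated.var(ddof=1) / treated.size
vb = control.var(ddof=1) / control.size
se = np.sqrt(va + vb)

# Welch-Satterthwaite degrees of freedom, then a 95% CI for the mean difference.
df = (va + vb) ** 2 / (va ** 2 / (treated.size - 1) + vb ** 2 / (control.size - 1))
margin = stats.t.ppf(0.975, df) * se
p_value = stats.ttest_ind(treated, control, equal_var=False).pvalue

print(f"Mean difference: {diff:.2f} "
      f"(95% CI {diff - margin:.2f} to {diff + margin:.2f}), p = {p_value:.3f}")
```

Reporting the interval alongside the p-value lets readers judge whether a statistically significant difference is also large enough to matter.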



Comments

Paul Stein

January 28, 2013

This is not the first time the use, or misuse, of statistics has been raised in the biomedical literature. Decades ago, the issue was failing to use the most appropriate statistical test in the first place, e.g., running multiple t-tests instead of an analysis of variance. The PLOS article did raise some significant (in the dictionary sense, not the statistical one) issues with scientific research statistics, one being the use of small sample sizes. With research funding always a major constraint, and with the three Rs pushing the number of animal subjects per group per experiment ever lower, both false positives and false negatives will remain rampant.
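To make the multiple-comparisons point concrete, a short simulation (group counts and sample sizes here are arbitrary assumptions) shows that when several groups are drawn from the same distribution, running every pairwise t-test at the 5% level flags a spurious difference far more often than a single one-way ANOVA does:

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(2)

n_sims = 2_000       # number of simulated experiments (illustrative)
n_groups = 5         # all groups drawn from the SAME distribution: no real effect
n_per_group = 15
alpha = 0.05

any_ttest_hit = 0
anova_hit = 0

for _ in range(n_sims):
    groups = [rng.normal(0.0, 1.0, n_per_group) for _ in range(n_groups)]

    # All pairwise t-tests, each run at the nominal 5% level.
    pairwise_p = [stats.ttest_ind(a, b).pvalue for a, b in combinations(groups, 2)]
    any_ttest_hit += min(pairwise_p) < alpha

    # A single one-way ANOVA over the same data.
    anova_hit += stats.f_oneway(*groups).pvalue < alpha

print(f"At least one 'significant' pairwise t-test: {any_ttest_hit / n_sims:.1%}")
print(f"'Significant' one-way ANOVA:                {anova_hit / n_sims:.1%}")
```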

The arXiv.org article does little to dispel these arguments. Indeed, the authors' particular choice of journals may have skewed their sample toward studies with large sample sizes, immediately blunting some of the arguments made in the PLOS article. Even so, 14% is a fairly large number, so under the "best" of conditions the research community still has a very long way to go. Everyone involved in biomedical research, from the technician and student to the principal investigator, needs to be more knowledgeable, vigilant, and conservative in their research designs, statistical analyses, and claims.
