Weak statistics are the downfall of many neuroscience studies, according to researchers who analyzed the statistical strategies employed in dozens of published reports in the field. Human neuroimaging studies, particularly those that use fMRI to infer brain activity, are especially lacking in statistical power, noted the coauthors of the analysis, published in Nature Reviews Neuroscience last week (April 10).
Examining 49 meta-analyses published in 2011, covering a total of 730 neuroscience studies, a team of researchers from the United Kingdom and United States found that about half of the papers had a statistical power below 20 percent. Statistical power is the probability that a study will detect a true phenomenon present in the samples at hand. High-powered studies can demonstrate even small effects, provided the sample sizes are large enough. Low-powered studies, by contrast, usually because they consider too few samples, risk missing genuine phenomena, and the positive results they do report are more likely to be false. Research studies typically aim for a statistical power of at least 80 percent.
Human neuroimaging studies had a median statistical power of just 8 percent, the researchers found.
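The relationship between sample size and statistical power that the researchers describe can be sketched with a small Monte Carlo simulation. This is a hypothetical illustration, not the method used in the Nature Reviews Neuroscience analysis: it assumes a simple two-group z-test on normally distributed data with unit variance, and estimates power as the fraction of simulated experiments that detect a true effect.

```python
import math
import random
from statistics import NormalDist

def estimated_power(effect_size, n_per_group, alpha=0.05, sims=20000, seed=1):
    """Monte Carlo estimate of statistical power for a two-group z-test
    on normal data with unit variance: the fraction of simulated
    experiments that detect the true effect at significance level alpha."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    hits = 0
    for _ in range(sims):
        # Group A has no effect; group B's true mean is shifted by effect_size.
        mean_a = sum(rng.gauss(0.0, 1.0) for _ in range(n_per_group)) / n_per_group
        mean_b = sum(rng.gauss(effect_size, 1.0) for _ in range(n_per_group)) / n_per_group
        se = math.sqrt(2.0 / n_per_group)  # standard error of the mean difference
        if abs((mean_b - mean_a) / se) > z_crit:
            hits += 1  # this simulated study "detected" the effect
    return hits / sims

# A medium effect (d = 0.5) with 64 subjects per group reaches roughly 80%
# power; the same effect with 15 per group is detected far less often.
print(estimated_power(0.5, 64))
print(estimated_power(0.5, 15))
```

Under these assumptions, shrinking the sample shrinks the power sharply, which is the pattern the analysis flags: an underpowered study usually fails to detect the very effect it was designed to find.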
“This paper should help by revealing exactly how bad things have gotten,” Hal Pashler, a psychologist at the University of California, San Diego, who wasn’t involved with the study, told Wired Science.
Senior author Marcus Munafò, a psychologist at the University of Bristol, United Kingdom, told Wired that such underpowered statistics in neuroscience research may be the result of publishing pressure. “In many cases, we’re more incentivized to be productive than to be right,” he said.