A large undertaking in psychology aimed at determining the reproducibility of 100 studies in the field reported last August that about four in 10 could be repeated. The results were a damning assessment of the reliability of psychology research, but a critique of the project, published in Science last week (March 3), has found flaws in the 2015 study's methodology.
“Don’t trust the headlines when you see that somebody replicated a study,” Daniel Gilbert, a psychology researcher at Harvard University who coauthored the technical critique, told The Chronicle of Higher Education. “You have to look carefully to see what they really did.”
As Pacific Standard reported, Gilbert and colleagues pointed out three major errors in the Reproducibility Project: Psychology. These were "error (conducting 'replication' studies that didn't truly re-create the study being tested); power (using a single attempt at replication as evidence, rather than making multiple attempts); and bias (using…"
Brian Nosek, a psychologist at the University of Virginia and a coauthor on the Reproducibility Project, told The Verge: "If different results are observed from original, it could be that original is wrong, replication is wrong, or both are right in that a difference between them explains why they found different results." He added that his project cannot say which of these explanations accounts for the 2015 study's poor replication results.
Another analysis published in Science last week, this one replicating 18 economics studies, found the same results as the originals 61 percent of the time. "The [reproducibility] rate we report for experimental economics is the highest we are aware of for any field," coauthor Juergen Huber of the University of Innsbruck said in a statement.
Correction (March 7): This post originally stated that the Reproducibility Project found four in 10 studies could not be replicated. The Scientist regrets the error.