A large undertaking in psychology aimed at determining the reproducibility of 100 studies in the field reported last August that about four in 10 could be repeated. The results were a damning assessment of the reliability of psychology research, but a critique of the project, published in Science last week (March 3), has found flaws in the 2015 study’s methodology.
“Don’t trust the headlines when you see that somebody replicated a study,” Daniel Gilbert, a psychology researcher at Harvard University who coauthored the technical critique, told The Chronicle of Higher Education. “You have to look carefully to see what they really did.”
As Pacific Standard reported, Gilbert and colleagues pointed out three major errors in the Reproducibility Project: Psychology: “error (conducting ‘replication’ studies that didn't truly re-create the study being tested); power (using a single attempt at replication as evidence, rather than making multiple attempts); and bias (using protocols that appear to be weighted toward giving the original study a failing grade).”
Brian Nosek, a psychologist at the University of Virginia and a coauthor on the Reproducibility Project, told The Verge: “If different results are ...