
Over the last few months, the news about one of the biggest cases of scientific fraud in recent memory—affecting more than 100 published research papers and resulting in the firing of Diederik Stapel from his post as head of the Institute for Behavioral Economics Research at Tilburg University—has unfolded bit by bit, shocking the scientific community. Now the discussion is moving beyond the Stapel case and overt data falsification, focusing instead on less obvious, even unintentional, examples of data misuse in the field of social psychology, which some worry are becoming increasingly common.

"If high-impact journals want this kind of surprising finding, then there is pressure on researchers to come up with this stuff," methodological expert Eric-Jan Wagenmakers of the University of Amsterdam told The Chronicle of Higher Education. But, as he commented to statistician Andrew Gelman, "most surprising hypotheses are wrong."

There have also been...

"Many of us," the authors wrote, end up "yielding to the pressure to do whatever is justifiable to compile a set of studies that we can publish."

Indeed, according to a survey to be published in an upcoming issue of Psychological Science, about a third of academic psychologists admitted to practicing such questionable techniques in their own research, including omitting from the final paper how many variables they had tested and stopping data collection once they achieved a desired result.
