Efforts to reproduce an experimental psychology study yield failure, accusations, and ultimately, discourse on how to improve the process.
May 29, 2014
WIKIMEDIA, LOTY

In 2008, Simone Schnall, then at the University of Plymouth, published a paper in Psychological Science indicating that cleanliness is tied to less severe moral judgments. The findings were intriguing, and the study was selected as part of a larger effort to replicate certain research in the field. But as independent researchers failed to reproduce Schnall’s findings, the discussion—at first collegial—turned sour, and over the past few weeks the scientists involved have engaged in a public dialogue about what went wrong and how to improve the handling of replication attempts, especially when they fail.
“I feel like a criminal suspect who has no right to a defense and there is no way to win: The accusations that come with a ‘failed’ replication can do great damage to my reputation, but if I challenge the findings I come across as a ‘sore loser,’” Schnall, now at the University of Cambridge, wrote recently of her experience on the university’s psychology department blog.
Chief among her complaints was the editors’ initial refusal, citing space constraints, to publish her comment alongside the failed replication attempt in a special issue of Social Psychology. The guest editors of the special issue, Brian Nosek and Daniel Lakens, instead encouraged Schnall to write a blog post. “The academic currency is in journal articles, not blog posts,” Schnall wrote to the editors on New Year’s Day in an e-mail, which Nosek posted online several days ago. “They create the permanent record, which now states that the replication failed, implying that my work is flawed.”
Another complaint—and one that has reverberated online—is that the presentation of the failure came off as bullying. “Exchanges have spiraled out of control,” Etienne LeBel wrote at the blog Prove Yourself Wrong, “with unprofessional and overly personal comments uttered. For example, an original author accusing replicators of engaging in ‘replication bullying’ and a ‘status quo supporter’ calling (young) replicators ‘assholes’ and ‘shameless little bullies.’”
But it appears that in the wake of the mudslinging, self-reflection is emerging among psychologists. At The Guardian’s Headquarters blog, Pete Etchells of Bath Spa University urged psychologists to move forward, first by accepting that reproducibility efforts are here to stay. “Second, researchers and commenters on all sides of the debate need to take a more mature and reasoned approach to dealing with criticism,” Etchells wrote.
Earlier this month, Daniel Kahneman proposed several rules to smooth “the difficult relationship of adversarial replication.” These include giving the original author a chance to comment on the replication methods and allowing reviewers to read the correspondence between the original team and the replicators. “The rules are designed to motivate both author and replicator to behave reasonably even when they are thoroughly irritated with each other,” Kahneman wrote.
Uta and Chris Frith, writing in The Guardian’s Occam’s Corner blog, offered up their own practical approach to balancing the responsibilities of replication. “The original author is informed about the attempt to replicate by the new experimenter and is given a chance to check what the replicating lab is actually proposing to do in detail,” they wrote. “This somewhat reverses the now quite common procedure where a replicating lab demands to see the original data and to assume that the procedure is specified in such a way that it can be flawlessly replicated.”
And at his blog, LeBel described his effort to organize the data emerging from recent replication attempts by experimental psychologists. LeBel founded Curate Science, a site for collecting data from replication efforts, including the findings from several studies that repeated Schnall’s experiments. “The web platform aims to be a one-stop shop to locate, add, and modify such information and also facilitate constructive discussions and new scholarship of published research findings,” LeBel wrote. “The kinds of heated debates currently happening regarding Schnall et al.’s studies . . . makes science so exciting—well, minus the ad hominem attacks!”
Nosek told Science last week that he hopes the tensions will subside as attempts to reproduce published research become more commonplace. “Our primary aim is to make replication entirely ordinary,” he said, “and move it from a threat to a compliment.”