
Misconduct Shakeup

The ongoing saga that led to psychologist Dirk Smeesters’s resignation from the Erasmus University Rotterdam has the scientific community discussing new ways to detect data fraud.

July 3, 2012

FLICKR, SHARON HALL SHIPP

Late last month, psychologist Dirk Smeesters of Erasmus University Rotterdam resigned from his post after an investigative committee concluded that it had “no confidence in [his studies’] scientific integrity.” On June 25, ScienceInsider reported that the wrongdoing was first brought to the university’s attention by “an anonymous fraud hunter.” Three days later, the university identified Uri Simonsohn, a social psychologist at the Wharton School of the University of Pennsylvania, as the anonymous whistleblower. His technique: a statistical analysis that looks at the effect of removing extreme data, according to a blog post by Richard Gill of Leiden University in the Netherlands, who evaluated the technique.

Simonsohn told ScienceInsider that he also notified a US university about another psychology paper his method flagged as possibly fraudulent; that paper's main author is under investigation and has resigned. More details about the statistical method—including what kinds of inconsistencies it catches, how sensitive it is, and what false-positive rate comes with it—are expected very soon. For now, the scientific community is debating the validity and ethics of the approach.
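
Gill's post characterizes the approach only in outline, as examining how results change when extreme data are removed. Purely as a hypothetical illustration of that general idea, and emphatically not Simonsohn's still-unpublished method, a crude sensitivity check might look like this in Python:

    # Hypothetical illustration of an "effect of removing extreme data"
    # check; NOT Simonsohn's actual, still-unpublished method.
    import numpy as np
    from scipy import stats

    def drop_most_extreme(x):
        """Remove the single observation farthest from the group mean."""
        x = np.asarray(x, dtype=float)
        return np.delete(x, np.argmax(np.abs(x - x.mean())))

    def extreme_data_sensitivity(group_a, group_b):
        """Compare the two-sample t-test p-value on the full data with
        the p-value after trimming one extreme point per group."""
        _, full_p = stats.ttest_ind(group_a, group_b)
        _, trimmed_p = stats.ttest_ind(drop_most_extreme(group_a),
                                       drop_most_extreme(group_b))
        return full_p, trimmed_p

    # A "significant" result that evaporates after trimming would only
    # flag the data for closer scrutiny; it proves nothing by itself.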

“There’s a lot of interest in this,” Brian Nosek of the University of Virginia in Charlottesville told ScienceInsider, adding that the method may identify other cases of misconduct in the literature and help the field gain credibility. On the other hand, he noted, it could turn colleagues against one another and harm reputations and careers in the process. “This is psychology’s atomic bomb,” he said.

“If we really do have a new tool to uncover fraud, then we should be grateful,” added Harvard University psychologist Daniel Gilbert. “But the only difference between a tool and a weapon is in how judiciously they are wielded, and we need to be sure that this tool is used properly, fairly, and wisely.”



Comments

RichardPatrock

Posts: 52

July 4, 2012

Every journal should implement fraud and plagiarism software. When a manuscript comes in, it would be scanned and an opinion rendered before the editor has a chance to look at it. The question then arises of what to do with the results of such a scan, given that there will be false positives. At that point, I would recommend that the journal have a committee look at the paper. If there is clearly fraud or plagiarism, I would hope that more than the manuscript's rejection would be the outcome. I would be in favor of having the results placed in a database where every journal could trace the paper, and at some point the author(s), to determine whether a pattern of cheating is obvious. Otherwise, we could have any researcher gassing the literature like an infamous anesthesiologist.
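
For the plagiarism half of such a scan, the core mechanism is straightforward text overlap. A minimal sketch of the idea, assuming plain-text manuscripts and leaving aside the proprietary refinements commercial tools add:

    # Minimal sketch of overlap-based plagiarism screening using word
    # n-gram "shingles" and Jaccard similarity; real commercial tools
    # are proprietary and far more sophisticated.
    def shingles(text, n=5):
        """Return the set of n-word sequences in a text."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard_similarity(doc_a, doc_b, n=5):
        """Fraction of shared shingles between two documents (0 to 1)."""
        a, b = shingles(doc_a, n), shingles(doc_b, n)
        if not a or not b:
            return 0.0
        return len(a & b) / len(a | b)

    # A high score only flags a pair of manuscripts for the manual,
    # committee-level review proposed above.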

Daniel Dvorkin

Posts: 20

July 4, 2012

That assumes the "fraud and plagiarism software" actually detects fraud and plagiarism. Most software that claims to detect plagiarism uses proprietary, trade-secret algorithms, and this "fraud detection algorithm" appears to consist of the following steps:

1. Assume non-fraudulent data will be normally distributed.
2. Test data for a specific attribute of the normal distribution.
3. If data fails the test in step 2, claim fraud.

Given the grotesque consequences of a false positive for this kind of test, we need a LOT more testing, and a lot more openness about the methods used, before we even begin to trust any kind of automated detection.
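
To make that false-positive worry concrete, here is a hypothetical sketch of the naive screen caricatured above (not any real tool's algorithm); honestly collected but skewed data, such as reaction times, would fail it routinely:

    # Hypothetical sketch of the naive screen caricatured above; NOT
    # any real tool's algorithm.
    from scipy import stats

    def naive_fraud_flag(data, alpha=0.05):
        """Flag data as 'suspicious' if it fails a Shapiro-Wilk
        normality test at the given alpha level."""
        _, p_value = stats.shapiro(data)
        # Skewed but perfectly honest data (reaction times, incomes,
        # cell counts) will be flagged constantly -- a false positive.
        return p_value < alpha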

RichardPatrock

Posts: 52

July 5, 2012

To discuss this matter further, we'll have to wait for Simonsohn's paper explaining the methods he used. According to Gill's blog (cited above), it is supposed to be out this week.

alyzzyla

Posts: 4

July 5, 2012

There have been many incidents of peer reviewers rejecting submitted papers, stealing the ideas, and publishing the claimed work as their own. We have to admit that getting a paper published is neither a certificate of originality nor a verification of data. I am so sick of the current system, and I am hoping that honest scientists who have witnessed fraud and misconduct will stand together and put an end to this anarchy. And who is accountable when a heavily cited fraudulent article earns its authors awards and promotions? We have not heard of one fraudulent professor being stripped of his title. And what happens to articles that claim to reproduce similar data, thereby verifying fabricated research? Journals do not trace those articles after they retract the original fake ones. The whole thing is a mess.

jack woodall

Posts: 3

July 5, 2012

If this case were a false positive, wouldn't you expect the researcher to fight it? Isn't the resignation an admission of guilt?

38Murphy

Posts: 7

July 5, 2012

The idea of every journal running fraud and plagiarism software on every submission is not going to happen. In today's world, the pressure is to move manuscripts as rapidly as possible through peer review. In addition, I have found numerous times that a textual match flagged by such software is by no means an indicator that one manuscript is identical to another; at that point, the two must be compared manually to determine whether there is real overlap.

I frankly think there are many aberrant behaviors that many editors-in-chief (EICs) avoid addressing, behaviors I consider gateways to misconduct. Many authors get into a habit of inflating the value of the work presented in a submission, or have issues with statistical analysis that some EICs leave unaddressed. I have seen papers published in solid journals in which the statistical analysis is a mess and grossly incorrect, yet the paper went through peer review, an Associate Editor looked at it, and an EIC accepted it for publication. So why does this happen? Why doesn't someone care? Perhaps the system at that particular journal is overworked, or perhaps the authors are well known and someone wants the work published. Either way, it shouldn't be happening. These entry behaviors litter the literature with poor papers that are then referenced by many, thereby expanding and perpetuating the problem.

So, when discussing scientific misconduct, don't forget all of the little issues that are often left unaddressed in the peer-review process. I for one think EICs need to pay attention to these issues and be much more diligent about curbing these behaviors.
