Publishers need to be proactive about detecting and deterring copied text.
June 1, 2013
© MARK AIRS/ISTOCKPHOTO.COM
The National Science Foundation (NSF) announced on March 8 that it is investigating nearly 100 cases of suspected plagiarism in a year’s worth of agency-funded proposals. Though the amount of funding dished out to these projects is unclear, extrapolating from the NSF’s 2011 budget, it could represent more than $96 million.
Unfortunately, the problem isn’t limited to the NSF. Retractions in academic publishing have skyrocketed—up 10-fold in the past three decades—with plagiarism and duplication (a kind of self-plagiarism) at the root of about 25 percent of those retractions. In the same month that the NSF launched its investigation into the suspect proposals, primatologist Jane Goodall’s forthcoming book was delayed by publishers after early reviewers discovered plagiarized passages.
Outside of academia the problem of plagiarism continues to generate headlines and scandals for politicians. In Germany, two prominent cabinet members have been forced to step down due to allegations of plagiarism in their doctoral dissertations. Meanwhile, in Canada, the head of the nation’s largest school district was forced to resign in the face of plagiarism allegations, and plagiarism scandals have also embroiled a senator in the Philippines, the prime minister of Romania, and several members of the Russian Duma.
Most worrisome, all of these high-publicity scandals came to light in the past three years, due in large part to heightened public awareness of plagiarism and increased access to plagiarism-detection tools. This suggests that such activities are nothing new; they’ve just become easier to expose.
In all of these cases, there is a common thread. Whether it’s a government agency or an academic publisher, the organization usually does not face the issue of plagiarism head on, instead relegating the subject to closed-door conversations and relying on luck and hope—until faced with a scandal and forced to clean up the mess. In most cases, this has involved not only the practical matter of finding and removing plagiarized content, but also dealing with public-relations issues and rebuilding public trust. Careers are ruined and reputations forever tarnished.
And while there are notable measures being taken—by the Nature Publishing Group and many other publishers, as well as by some government agencies and research departments—to maintain research integrity, the question remains: Why doesn’t every organization employ active defenses against plagiarism? In the age of Google, there’s no reason why readers should discover plagiarism before a publication’s editors do.
Plagiarism-detection software tools are well tested, widely available, affordable, and simple to use. Though they still rely on human analysis, they can greatly expedite the process of validating the originality of submitted work. Indeed, publications that have mandated the use of plagiarism-detection tools as part of their review process and author guidelines—specifically requesting that authors run their submissions through such detection software to avoid pulling words from their own previous publications, as well as to catch any unsourced quotes—have seen retractions decrease. For example, the Landes Bioscience journal Cancer Biology and Therapy rejected more than 221 articles for plagiarism in 2012 alone, but has yet to issue a single retraction since implementing rigorous screening protocols using iThenticate, a plagiarism-detection service used by many scholarly publishers and research departments.
Too many organizations, however, are ignoring the issue. According to an October 2012 survey by iThenticate, one out of every three scholarly editors says they encounter plagiarism regularly, yet, according to the same survey, more than half of researchers don’t check their own work, leaving instances of duplication—even accidental ones—to be flagged by editors.
This makes no sense. The time when it is easiest to detect and handle plagiarism is before a proposal is funded or an article or study made publicly available. Grantors and scholarly editors shouldn’t wait for a crisis before setting up a system to prevent plagiarism. Even if it is possible to repair trust and undo most of the damage of a plagiarism scandal, it is impossible to get back the wasted publication space, missed funding opportunities, and a clean professional record. Instead, a sound defensive strategy needs to be in place well in advance, and that strategy should include a clear message about the consequences for those who breach it.
Jonathan Bailey is a plagiarism and copyright consultant who has been working in the field since 2005. He blogs at Plagiarism Today, as well as at The iThenticate Blog, for which he is a paid contributor.
June 24, 2013
How far does this plagiarism witch-hunt go? I am reminded of the hundred monkeys in the room with typewriters. If they typed for eternity, would the works of Shakespeare they produced be considered plagiarized?
June 24, 2013
I am more concerned about plagiarism than I am about duplication. If I learn to tell a particular story in a coherent and understandable way in a paragraph, am I not free to use and reuse that paragraph as part of, for example, the "Introduction" in a scientific paper that deals with the topic?
June 24, 2013
Jonathan is spot on. As Editor-in-Chief of Lipids, a journal of the American Oil Chemists' Society, I now run each and every submission and each revision through iThenticate. I made this decision with input from the editorial board in early May of this year. Little did I know, until I started using the program, how powerful this tool was, and how easily that power could be abused by an unethical EIC. After about 40 submissions, we detected four to six correctable issues and rejected about 10 manuscripts without peer review.
I found that setting some mysteriously derived metric for the percentage of homology is rather ridiculous. I view self-plagiarism in the Methods section as rather harmless, and in technical writing it is difficult to achieve clarity about methods that we have used for years without duplication. In fact, in a manuscript I recently submitted to Lipids, the homology was 19 percent despite my having written the manuscript de novo, and the bulk of this homology was the result of the Methods section. Minor word combinations were homologous to countless other published papers, but all were limited to two- to four-word stretches (akin to the room full of monkeys and typewriters in Bob's comment below). Hence, I have found that the homology score is a tool that alerts me to take a deeper look at the iThenticate file, but no decision can be made based on that value alone. I have seen submissions with 12 percent have problems and submissions with a value of 25 percent have no problems whatsoever.
On the other hand, stretches of sentences in the Introduction or Results that are clearly duplicated are noted prior to peer review, and at the recommendation-letter stage the authors are notified of the offending sections so they can correct them. We want to work with authors to correct these minor mistakes, not punish them. This applies to both self-plagiarism and plagiarism.
However, large stretches of manuscripts, which can include anywhere from a paragraph to many paragraphs, that exhibit plagiarism or self-plagiarism are rejected without review. This includes plagiarizing websites, which iThenticate examines for homology.
But how much broader is the power of iThenticate? I have found several instances of fabrication and falsification using this program. Authors who self-plagiarize will often go further, creatively adapting data to their newest submission. Bad news for them: the program gives direct links to the paper from which the work was plagiarized, and I constantly check the literature flagged by iThenticate to ensure fabrication and falsification are not occurring.
So, to address both Bob and Robert below: I, for one, am careful to make sure that what I flag is either a large stretch of copied words or a pattern of repeatedly taking a sentence or two from the same papers and scattering those copied sentences throughout the manuscript. Issues of self-plagiarism are important to address as well, but one must be aware that a sentence or two here and there is not the same as self-plagiarizing a paragraph here and there. The expectation is that authors work on their manuscripts to avoid the "cut and paste" folly.
So, I think it is the responsibility of each and every EIC to work to limit all acts of scientific misconduct and to be vigilant in detecting these acts. We should not be waiting until the retraction stage to detect these issues. We have new tools that need to be used wisely and judiciously to aid us in our ability to detect these acts. However, I think it is equally important for EICs to work with authors on other acts of misconduct that are not fabrication, falsification, or plagiarism, but are certainly an issue as well.
June 24, 2013
I was not "copied," but my ideas were blatantly plagiarized. It had to do with interactions of Gulf War chemicals, and how they may have contributed to "Gulf War Syndrome."
How will this software detect the plagiarism of ideas?
June 25, 2013
Please do not mistake me. I am really not able to understand in what way plagiarism (particularly self-plagiarism) affects research outcomes, and why it is considered scientific misconduct.
I would ask anyone here to help me understand this issue.