An analysis of retractions dating back to 1977 shows that most papers are retracted due to misconduct.
October 1, 2012
Scientific misconduct contributes to more retractions than previously realized, according to a new analysis published today (October 1) in Proceedings of the National Academy of Sciences. Using retractions indexed in PubMed, researchers found that fabrication, falsification, and duplication led to more retractions than error or plagiarism.
“Tracking down these corrections and retractions to find out what is going on is really innovative,” said David Resnik, a bioethicist at the National Institute of Environmental Health Sciences, who did not participate in the research. It turns out that “a high percentage of the time, there really [is] some kind of misconduct” behind retractions.
Previous studies suggested that error, not ethical lapses, prompted most retractions. To get a clearer sense of why scientific studies were being pulled from the literature, lead author Arturo Casadevall of Albert Einstein College of Medicine and his colleagues identified more than 2,000 articles listed in PubMed as retracted since the first identified retraction in 1977. They then assigned the papers to categories based on the reason for retraction: fraud or suspected fraud (including falsification and fabrication), error, plagiarism, duplication, or other. When Casadevall's team used several sources (including Office of Research Integrity reports and journal retraction notices) to assign a cause, discrepancies often emerged. Sometimes, for example, the retracting scientists’ explanation referred to errors while official reports described misconduct.
This agrees with Resnik’s work, which shows that scientists being disciplined for misconduct often avoid mentioning ethical breaches in their accounts and instead blame error.
While about 21 percent of retractions were attributed to error, more than twice as many (43 percent) were attributed to fraud or suspected fraud. Only 14 percent stemmed from duplicate publication, and fewer than 10 percent from plagiarism. Overall, the rate of retractions is on the rise, the team found, and journals with higher impact factors (IF) are hardest hit by fraud.
Nicholas Steneck, an ethicist at the University of Michigan who did not participate in the research, was not surprised that fraud underlies most retractions. Nevertheless, it is important to document the phenomenon, said Steneck, suggesting that journal editors occasionally avoid pursuing retractions that would be controversial. Steneck believes that the rise in retractions in recent years is primarily due to increased vigilance, but Casadevall disagrees. He reasons that if increased scrutiny were the main driver, the time to retraction for fraud-related retractions at high-impact journals would be markedly shorter; yet he and first author Ferric Fang found the interval between publication and retraction was only slightly shorter at higher-IF journals.
The disproportionate number of fraud-related retractions from high-IF journals likely reflects the pressures on scientists to publish impressive data in prestigious journals. “There’s greater reward,” said Resnik, “and more temptation to bend the rules.”
Although identifying scientific fraud as a serious problem is a first step, scientists still struggle with how to prevent misconduct in the first place. Research by Donald Kornfeld, a Columbia University psychiatrist, suggests that teaching basic ethics to college scientists may come too late, but steps may still be taken to help relieve the intense pressure to perform that can push high achievers to doctor their results. Mentorship could be improved by making sure that training grants go to mentors who don’t take too many trainees, allowing them to concentrate on just a few mentees, Kornfeld explained. Mentors can also “redefine failure,” he said. Students with a history of high performance may feel intense pressure to succeed, said Kornfeld, but mentors can help by explaining that no one is always successful, and by sharing stories of their own stumbles.
Resnik believes that research auditing—examining lab notebooks much like an accountant examines financial books—though difficult to put into practice, could help prevent some ethical breaches. “Scientists and administrators would be loath to do this,” he said, but noted that it’s standard practice in industry.
Casadevall hopes to shed greater light on the problem of fraud and dispel any notion that scientific misconduct is a crime that affects only the perpetrators. Even though retracted papers “are only a small part of the literature, when people do this kind of behavior often they do it with things society really cares about,” he noted, pointing to Andrew Wakefield’s notorious study linking vaccination to autism—the most cited retracted study Casadevall and Fang identified. “We’ve already had measles epidemics due to not vaccinating [because of Wakefield’s study],” said Casadevall. “Look at the damage done.”
F. Fang et al., “Misconduct accounts for the majority of retracted scientific publications,” Proceedings of the National Academy of Sciences, doi:10.1073/pnas.1222247109, 2012.
October 2, 2012
Surprisingly, when readers write in to journal editors (the big publishers like Springer, Elsevier, and Wiley each have publishing editors and journal editors) with problems they see in certain papers, the editors actually blow them off. An acquaintance recently showed me correspondence between him and editors of journals under these three publishing banners. It indicates that the journal editors are curiously more occupied with figuring out who is criticizing their decisions to publish these papers than with the criticisms themselves. Politely, and sometimes not so politely, they tell the critics to go take a hike: if the critics were so great, they could simply do those experiments themselves and look at the results instead of complaining about someone else's work, and if the questions were any good, the reviewers would have asked them.
The crux of the argument, namely that important details that would help a reader perform those experiments have been omitted, and that none of the mistakes caught the reviewers' attention for various reasons, goes unaddressed and unnoticed. So unless there is blatant fraud, such as someone coming forward with evidence of plagiarism, tampered notebooks, or nonexistent equipment being used for experiments described in papers, it's not so easy to get people to admit to the lower grades of misconduct. The institutions that house these miscreants are almost never willing to tarnish their own names by admitting that such misconduct was going on under their own noses. So the big fish who have become big by knowing when and how to fiddle with data will continue to get away with it, because these publishers are also interested in getting these people's names into their journals. So fraud does breed retractions, but sometimes fraud uses effective birth control and is able to avoid breeding visible progeny like retractions, corrections, and notices.
October 17, 2012
I did a similar research study when I was Associate Director in the Office of Research Integrity (ORI), working with Dr. Mary Scheetz of ORI and Dr. Sheldon Kotzin of the NIH National Library of Medicine (MEDLINE). We submitted an abstract for Drummond Rennie’s 2005 Peer Review Congress (but it was not accepted for presentation): “We noticed a high correlation between the retraction of publications involving United States Public Health Service (USPHS) support as listed in MEDLINE® and the existence of a related research misconduct case involving one of the authors of the retracted publications. We describe an analysis of that data, and related data involving other cases known to ORI staff or made public as involving allegations of misconduct. Approximately 64% to 72% of such PHS-related retracted publications were found to be associated with allegations and/or findings of research misconduct known to ORI, over the years that such retractions have been indexed in MEDLINE. We speculate whether a significant fraction of the other 28% of the retracted papers may also have involved PHS-related research misconduct, rather than being retracted because of errors or other scientific judgment reasons. We furthermore encourage editors to examine seriously requests for retraction, to ensure that those involving scientific misconduct will identify the person/author responsible and exonerate the other authors. . . . “Over the past three decades there have been 572 retracted publications listed in MEDLINE, an average of about 20 publications retracted per year, about 30% of which involved PHS support. Of the 175 retracted publications cited in MEDLINE with PHS support, 114 (64%) were known by ORI staff to have involved cases in which research misconduct was alleged (some cases led to inquiries but no investigation, while others led to investigations, most of which found research misconduct for the authors/papers cited).
When the 43 other retracted publications known to ORI staff to have been involved in such allegations or investigations related to PHS-appropriated funds are added, then of all of the 218 such retracted publications, 156 (72%) were known by ORI staff to have been related to research misconduct cases.” Alan Price