In light of these findings, researchers and other observers have proposed several initiatives to help the scientific community with its apparent honesty issues. One suggestion was the creation of a Retraction Index. Unlike the Impact Factor, which is based on a journal’s citation rate, the Retraction Index would indicate the number of retractions a journal issues for every 1,000 papers published. Following suit, Adam Marcus and Ivan Oransky of the Retraction Watch blog suggested creating a Transparency Index, which could include a score for how well a journal controls its manuscript review process: how it conducts peer review, whether supporting data are also reviewed, whether the journal uses plagiarism-detection software, and a number of other measures. Finally, the lab-services start-up Science Exchange and the open-access journal PLOS ONE have collaborated on the proposed Reproducibility Initiative, which would provide a platform for researchers to submit their studies for replication by other labs for a fee. Studies that are successfully reproduced would earn a certificate of reproducibility.
Still, The Scientist found no shortage of stories for this year’s misconduct roundup. Here are a few of the most glaring examples of scientific fraud in 2012:
10 years of fabrication
This year, University of Kentucky biomedical researcher Eric Smart was discovered to have falsified or fabricated 45 figures over the course of 10 years. His research on the molecular mechanisms behind cardiovascular disease and diabetes was well regarded, despite his having used data from knockout mouse models that never existed. “Dr. Smart’s papers were highly cited in the specific caveolae/cardiovascular research field,” Philippe Frank of Thomas Jefferson University in Philadelphia told The Scientist. Smart resigned from his university post in 2011, when the investigation into his misconduct began, and agreed to exclude himself from federal grant applications for 7 years. He now teaches chemistry at a local school.
Setting the record for the most publications up for retraction by a single author, Japanese anesthesiologist Yoshitaka Fujii fabricated data in a whopping 172 papers. Beginning his career in falsification in 1993 while at the Tokyo Medical and Dental University, he continued it at the University of Tsukuba and at Toho University in Tokyo, where he was finally dismissed in February 2012. According to investigations, Fujii never actually saw the patients he reported on in his clinical studies, failed to get ethical review board approval for his research, and misled co-authors, sometimes listing their names without their permission or knowledge. Although the retractions are not expected to have a large impact on the field (many of the papers had low citation rates), Fujii used the publications to further his career, publishing a total of 249 papers.
The results from roughly 34,000 criminal drug cases were called into question earlier this year, when forensic chemist Annie Dookhan of the now-shuttered Department of Public Health lab in Massachusetts was discovered to have falsified records on samples she was assigned to process. According to investigations, she forged signatures and recorded tests as complete without performing them. Suspicions may first have arisen from her impressive output: she claimed to have processed 9,000 samples in a year, whereas colleagues averaged only around 3,000. As a result of her actions, a number of defendants may have been wrongly imprisoned, while others who may have been rightly accused were freed. This month, Boston police warned of an expected spike in crime due to the large number of convicted drug offenders who will be released because of Dookhan’s misconduct.
Creative reviewing strategies
Rather than falsify data in order to get published, some researchers took a new tack this year: writing glowing expert reviews of their own papers. When asked by journal editors to suggest experts in their field who were not involved in the research, at least four submitting authors supplied names paired with email addresses that forwarded back to their own inboxes. The trend, first reported by Retraction Watch, was caught by one journal editor when author Hyung-In Moon, an assistant professor at Dong-A University in Busan, South Korea, offered up reviewers with Google and Yahoo addresses rather than university email accounts. “It should be a wake-up call to any journals that don’t have rigorous reviewer selection and screening in place,” Irene Hames, a member of the Committee on Publication Ethics, told The Chronicle of Higher Education.