Several recent, very public cases of scientific fraud have led the public to question the efficacy of manuscript review by biomedical journals. I do not think that the process is fundamentally flawed. I do think, however, that it can be improved.
For centuries, the editors of scientific journals have relied on expert academic peer reviewers to determine whether the conclusions drawn by the authors of a manuscript are supported by the data presented. Now, editors can take manuscript review one step further, by determining whether the digital image data presented are accurate representations of the original data. I believe they should use this ability to help ensure the accuracy of the data they publish.
As a result of the revolution in electronic communication, many journals now have completely electronic workflows, whereby authors submit all text and figures as electronic files. This provides for efficient transfer of information, and it gives editors direct access to the digital image files.
The ease of image manipulation in powerful applications such as Photoshop makes it tempting for authors to adjust or modify digital image files. Authors have been using these applications for more than 10 years; however, during most of this time, journals have had paper workflows. This meant that editors saw only a paper printout of the images, and they could not examine the image files. Electronic workflows now make these files available to journal editors and, with simple forensics, manipulations that are not visible on a printout can be revealed.
In an ideal world, principal investigators would compare the figures for a manuscript to the original data before submission to a journal, to ensure they are accurate representations of those data. In our less-than-ideal world, where it is clear that manipulated images are getting into submitted manuscripts - and most peer reviewers are working from printouts while riding the train home or drinking their morning coffee - the onus falls on the journal editor. Screening can easily be worked into the production process at the stage where image files from accepted manuscripts are prepared for import into layout templates.
Setting and enforcing guidelines
As editors implement electronic workflows, they have a responsibility to set guidelines for their authors on the proper handling of digital image data. It is essential to have clear guidelines, as some level of image manipulation is accepted practice; for example, image cropping or limited adjustment of brightness and contrast. The boundaries between acceptable and unacceptable manipulation, however, must be made clear to authors.
After these guidelines are introduced, editors have a responsibility to enforce them. To do so requires the establishment of definitions of misconduct, procedures for identifying misconduct, and policies for handling misconduct.
At the Journal of Cell Biology (JCB) we examine all digital images in all accepted manuscripts for evidence of manipulation. For black-and-white images this involves simple adjustments of brightness and contrast in Photoshop (see figure, part A). For color images, we use the "Levels" adjustment in Photoshop to compress the tonal range and visualize dim pixels (see figure, part B).
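The screen described above is done interactively in Photoshop, but the underlying operation, compressing the tonal range so that dim pixels become visible, is simple to express in code. The sketch below is an illustrative stand-in for dragging the white slider left in Photoshop's Levels dialog, not the journal's actual tooling, and the pixel values are invented for the example.

```python
import numpy as np

def compress_levels(img, white_point=64):
    """Map input intensities [0, white_point] onto the full output
    range [0, 255], clipping anything brighter -- the equivalent of
    lowering the white point in Photoshop's Levels dialog. Dim pixels
    that differ only slightly become visibly distinct shades."""
    scaled = img.astype(np.float64) / white_point * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# A near-black background (value 2) containing a pasted patch (value 6)
# looks uniformly black on a printout. After compression, 2 maps to 7
# and 6 maps to 23, so the patch boundary becomes obvious on screen.
background = np.full((8, 8), 2, dtype=np.uint8)
background[2:5, 2:5] = 6
revealed = compress_levels(background)
```

The same adjustment that exposes a pasted patch also exposes erased bands on a gel: the "clean" rectangle stands out against the amplified background noise around it.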
We have created standards for acceptable manipulation, which are spelled out in our guidelines to authors for presenting image data.
Editors of some high-profile biomedical journals have recently voiced the opinion that journals cannot be investigative bodies. Our experience suggests that editors can, in fact, do a substantial amount of investigating on their own.
During our three and a half years of screening experience at the JCB, we have had to revoke the acceptance of 1% of papers that passed peer review because we detected fraudulent image manipulation that affected interpretation of the data. We do not take such cases lightly. Four editors must agree on a determination of fraudulent manipulation: the managing editor and three academic editors, including the monitoring editor, a senior editor, and the editor in chief. We do not consider the element of intent; acceptance is revoked if any conclusion in a paper is called into question by the manipulation.
Twenty-five percent of all our accepted manuscripts have at least one figure that must be remade because we detect "inappropriate" manipulation; that is, the manipulation does not affect the interpretation of the data, but it violates our guidelines for presenting image data. This indicates a widespread misunderstanding of the line between appropriate and inappropriate manipulation, which will have to be addressed during the training of students in the responsible conduct of research. In almost all cases, we have been able to resolve incidents of image manipulation ourselves; only on very rare occasions have we needed to request the help of an institutional investigative body.
The Hwang case
One of the supplemental figures that Hwang and colleagues published in the now infamous stem cell cloning paper [4] contained manipulated images. The image in part B of the figure is from the third row of Supplemental Figure S1B in that paper. It purports to show negative staining for a particular cell-surface marker in four different cell lines. A simple adjustment of tonal range clearly shows that the two middle images are identical. The minor differences in pixel structure are due to image compression.
It is likely that we would have identified this duplication with our routine screen. It is important to note, however, that this would only have led us to request the original data from the authors, who could have dishonestly claimed to have made a clerical error and provided different images. This illustrates a potential limitation of our investigative capabilities, but editors will be surprised at how easily a deception can unravel if they start asking questions.
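The duplication in the Hwang figure was caught by eye after tonal adjustment, but the final comparison of two panels can also be automated. The helper below is a hypothetical sketch, not part of any journal's actual screen: it flags two same-size grayscale panels as likely duplicates while tolerating the kind of minor pixel-level differences that image compression introduces, and its thresholds are assumptions chosen for the example.

```python
import numpy as np

def likely_duplicates(panel_a, panel_b, tol=4, match_frac=0.99):
    """Return True if two same-size grayscale panels are near-identical.
    A small per-pixel tolerance absorbs compression artifacts (the
    'minor differences in pixel structure' noted above); tol and
    match_frac are illustrative values, not validated cutoffs."""
    if panel_a.shape != panel_b.shape:
        return False
    diff = np.abs(panel_a.astype(int) - panel_b.astype(int))
    return (diff <= tol).mean() >= match_frac

# Synthetic panels: a recompressed copy differs by a small offset,
# while a genuinely different image differs far more.
original = np.arange(100).reshape(10, 10).astype(np.uint8)
recompressed = (original + 2).astype(np.uint8)
different = (original + 50).astype(np.uint8)
```

A positive result from such a check would, as with our manual screen, only be grounds to request the original data from the authors, not proof of misconduct on its own.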
Other types of data
What about other types of data besides image data? It is clearly more difficult to determine whether numerical data have been misrepresented, fabricated, or falsified. There are, however, clues to potential problems with these data that should raise red flags with peer reviewers and editors. These include exceedingly small data sets, exceedingly narrow and/or uniform error bars on a graph, or the presentation of data from a "representative experiment."
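One of those red flags, suspiciously uniform error bars, can even be checked mechanically. The heuristic below is my illustrative sketch rather than an established statistical test; the coefficient-of-variation threshold is an assumption chosen for the example and would need calibration in practice.

```python
import numpy as np

def uniform_error_bars(sems, cv_threshold=0.05):
    """Red-flag heuristic: genuine replicate-to-replicate variation
    rarely produces near-identical error bars across many conditions.
    Flags a set of SEMs whose coefficient of variation (std / mean)
    falls below an illustrative threshold."""
    sems = np.asarray(sems, dtype=float)
    cv = sems.std() / sems.mean()
    return bool(cv < cv_threshold)

# Error bars that are all essentially the same size warrant a closer
# look; clearly varied ones do not trigger the flag.
suspicious = uniform_error_bars([0.50, 0.50, 0.51, 0.50])
plausible = uniform_error_bars([0.2, 0.8, 0.5, 1.1])
```

Like the image screen, this would only prompt a question to the authors; uniform error bars can have innocent explanations.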
One possible way to address problems with numerical data would be to require authors to submit all their raw data for comparison to the prepared figures. However, doing this comparison would be much more time consuming than image screening. At a minimum, journals should set guidelines for presentation of numerical data that promote accurate graphing practices.
Protecting the published record
We at the JCB advocate that journal editors take a proactive approach to detect potential misconduct and resolve it before publication. It is not enough to respond to allegations of misconduct made by others, and it is certainly not necessary to pass on all such suspicions to an investigative body, as others have advocated.
Despite the additional safeguards we have put in place, if someone is determined to publish fraudulent data, they may well succeed. It is important to keep in mind that our screening can pick up manipulations done to image files after they have been acquired, but it will not pick up adjustments to the data made during acquisition (e.g., using settings on a microscope). Even with these limitations, it is a cop-out for an editor to say that the review process can never be perfect, when the publication of fraudulent work comes to light.
To editors who argue that they do not have the time or funds to do this kind of screening: consider the effort expended in dealing with a single high-profile case of fraudulent research. The vast majority of biomedical journals are owned by commercial publishers, who make considerable profits from them. These companies should bear the cost of improving manuscript review as part of their responsibility for participating in scholarly publishing. Such enhanced review has the potential to save the considerable time and public funds now wasted when scientists are forced to debunk published fraudulent research, resources that could instead be used to make real progress.
Mike Rossner is managing editor of the Journal of Cell Biology.