Opinion: Stop Gaming Peer Review

It is a violation of publishing ethics to use the peer review comments of one journal to mature a manuscript and submit to another with a higher impact factor.

Jun 6, 2019
Jens P. Goetze and Jens F. Rehfeld


The decision to publish in a particular journal is driven by a number of factors, including reputation and impact. The first attraction, reputation, has been known since the dawn of scientific journals, whereas impact is of a more recent vintage. Journal impact factors were introduced by the charismatic Eugene Garfield, founder of The Scientist, in 1955. While the aim then was to generate a basic tool for librarians and institutions, the journal impact factor (JIF) rapidly became a measure of individual research quality. This repurposing of the JIF, and its use and misuse, has even spurred organized resistance in the form of DORA (the San Francisco Declaration on Research Assessment). In brief, DORA asks less about numbers and more about content: "what have you discovered?" rather than "where did you publish it?"

The hunt for impact by authors and editors is, however, still very much ongoing. Editors must recognize their part in perpetuating this situation. Authors, too, should examine their own practices. 

There needs to be a professional and ethical pact between authors and editors.

It has come to our attention, for example, that some authors might submit their work to high-impact journals knowing that there is little chance of success. While sometimes this represents the triumph of hope over experience, it might also be a way to use high-impact journals, with their expert reviews, as free tools for manuscript maturation. One may reasonably speculate that high-impact journals are better at engaging the best peer reviewers, who in turn will provide high-quality reviews. And, as is the norm across the research community, these peer reviewers may offer important suggestions for improving the science presented. Thus, a rejection from a high-impact journal, combined with a constructive peer review, benefits the maturation of the manuscript.

A less frequently used strategy is to submit a research manuscript to a medium-impact journal. If the journal accepts the manuscript with only minor suggestions for improvement, the authors then withdraw the paper and aim for a higher-impact journal. It is understandable that authors may be led to this behavior by our JIF-driven system. However, one might also see this strategy as pushing an ethical boundary. After all, if the manuscript was submitted with the intent of publication, and an editor and the peer reviewers agree to put it in print and online, it seems questionable not to follow the process through. Is this fair to the journal, the editors, and the peer reviewers? A simple moral answer to that question, from our viewpoint, is no, albeit we realize that journals have little ability to act on such behavior.

The most productive strategy for publishing is probably the oldest one. Basically, you assess your work carefully prior to submission (or get help from a colleague to do so); you try to match your research findings to the concrete aims and scope of candidate journals; and, if in doubt, you may ask a journal editor for preliminary advice. Editors are always on the lookout for great science, and maybe you have just that in manuscript form. Importantly, good editors are not just looking for ways to keep you out; they are also looking for reasons to let you in. But there needs to be a professional and ethical pact between authors and editors. Collectively, we can address the problems of the journal impact factor: attempts to take advantage of the system might advance individual careers, but will do little for science.

Jens P. Goetze and Jens F. Rehfeld are both professors in the Department of Clinical Biochemistry at Rigshospitalet, University of Copenhagen, Denmark.