The Pressure To Publish Promotes Disreputable Science

Jul 10, 1989
A.G. Wheeler

The pressure on university scientists to publish research papers in great quantity is relentless, and the motive behind it is clear. More papers mean more prestige for a researcher’s department—and the prestige will translate, department heads hope, into more financial support from the university. Unfortunately, this pressure is likely to prompt disreputable, unethical, and even fraudulent publication practices. At the very least, the pressure encourages scientists to adjust their priorities, putting more important work off in order to prepare for publication material that otherwise would not be submitted.

Of course, there are those who sincerely hold that quantity of scientific publication can be equated with quality of scientific achievement. But there is a fallacy in this logic: an increase in the number of publications per author can be attributed not so much to greater productivity as to changes in the way researchers publish.

There is, to be sure, a positive side to the matter. Encouragement to publish, for example, may spur a researcher to undertake additional projects in order to generate significant new publications. And genuinely valuable work that has not yet been written up can, of course, benefit the scientific community—as well as the department’s finances—when it appears in print. But when the pressure to publish is being exerted, it is more likely to be the marginal experimental results—those lying neglected at the back of the cupboard—that will be resurrected, reexamined over the weekend, and written up.

There are several strategies that a researcher can adopt to increase his or her number of papers published without increasing the research output—and they are all more or less disreputable. One such strategy is to repeatedly publish the same material. Duplicate submission of already-published work—in other words, “self-plagiarism”—can pollute the publication record by, for example, artificially inflating the apparent incidence of an adverse drug effect. In one such instance, a description of a serious adverse pulmonary effect associated with a new drug used to treat cardiovascular patients was published twice, five months apart, in different journals. Although the authors were different, they wrote from the same medical school about patients who appear to be identical. Any researcher counting the incidence of complications associated with this drug from the published literature could easily be misled into concluding that the incidence is higher than it really is. Journal editors, especially in medicine, are increasingly concerned about the abuse of multiple publication, and have issued comprehensive guidelines for authors that include advice on the very limited justification for such duplication.

Another strategy to increase one’s list of published works is “salami science”—the slicing up of results to produce as many separate publications as the editors will bear. Salami science is correctly perceived as squandering the resources of science; evidence of its occurrence is the declining length of papers. (A euphemism—“the least publishable unit”—has emerged to characterize the result of fragmenting data in order to produce the greatest possible number of publications.)

The burden of duplicate publication and salami science on the published scientific literature worries many editors. With 10% of all published papers never subsequently cited, the growing bulk of journals is aggravated by background noise that wastes the resources of scientific publishing.

A third strategy one might use to increase the apparent number of publications is to increase the number of coauthors on each paper published. The attraction of this strategy is that, with coauthors reciprocating the favor, all of the researchers can count all the shared publications in their own “scores”—again without increasing actual research workload or output.

That the incidence of multiple authorship has increased is indisputable; the percentage of papers published with multiple authors in the New England Journal of Medicine rose from 1.5% in 1886 (when the publication was called the Boston Medical and Surgical Journal) to 96% in 1977. This is due in part to the changing nature of scientific inquiry. But part of the increase in coauthorship can be attributed to the fact that, increasingly, the length of the publication list on a scientist’s CV is used as a measure of the scientist’s worth. The promotion and funding of physicians in academic medicine, in particular, is closely linked to the number of publications, providing a strong motive for increasing this number.

There are signs that reliance on the number of publications as an indicator of an academic’s productivity and value is reaching truly ridiculous proportions: One source cites bibliographies containing as many as 600 or 700 papers. However, editors are increasingly troubled by the unjustified inclusion of coauthors. And the International Committee of Medical Journal Editors has gone so far as to produce guidelines intended to ensure that all those named as coauthors have contributed legitimately to the work being described and that each is able to take responsibility for that piece of work.

As well as duplicate publication, salami science, and gratuitous coauthorship, other unethical consequences of the pressure to publish have been identified, including the conduct of trivial research because it is likely to produce quick results. And then there is the reporting of false data, or fraud.

Scientific Fraud

Scientific fraud in all its forms is being taken seriously by more and more scientists, partly because it has proved so difficult to identify. I am not suggesting that the pressure to publish is the sole cause of all scientific fraud. Many other causes and incentives contribute to this complex problem, making it that much more difficult to solve. Nevertheless, publication pressure must be recognized as one leading motive for fraudulent and disreputable practices.

One of the solutions proposed to counter this destructive “publish or perish” philosophy is a uniform limit on the number of publications per year, with reduced grant support for offenders. A more practical suggestion, from Marcia Angell, assistant editor of the New England Journal of Medicine, is that applicants for promotion or funding be asked to offer a limited number of their best publications for consideration, say three in one year or 10 from a five-year period. (Harvard Medical School has instituted such a policy.) Editors can also play a greater role in checking the credentials of submitted manuscripts and ensuring that authors understand and accept clearly defined conditions of publication.

The advantage of assessing only a limited number of publications is that the researcher will be encouraged to maintain a bibliography of recent high-quality publications. The incentive will be to accumulate experimental data for fewer, full-length, comprehensive publications. “Quick” projects generating rapid publications will be of little help in seeking appointments, promotions, and funds, and will be eschewed in favor of research contributing to more profound inquiries. Another advantage of such a system of self-restraint within science is that the limitation is not intrusive: no committee is needed to oversee its use and interfere with individuals’ decisions, and the increase in the number of papers that our journals must publish, our indexers must catalog, and we ourselves must read will at least be checked.

Even a quick perusal of the recent literature reveals universal disapproval among scientists of disreputable publication practices. Unfortunately, and perversely, many of those in authority over researchers hold the converse view.

But the fact that others higher in our university structure advocate and use the number of publications as a measure of research output and quality should not deter us. If academics and researchers recognize that the number of publications is a poor indicator of research performance, and that using this criterion encourages disreputable publication practices—and even fraud—then we should resist the pressure to publish.

The resolution of this problem lies within the grasp of practicing, publishing academics and researchers. Individual scientists can change the way they apply for jobs and grants and the way they interview job and grant applicants; in this way we can move the emphasis back from quantity to quality. After all, if the privilege of academic tenure is not used to resist unsound practices, such as relying on the number of publications, then what is the point of granting tenure to academics?

A.G. Wheeler is a senior tutor in the department of physiology and pharmacology at the University of Queensland, Australia.