Citations: Too Many, or Not Enough?

We are citing too many papers inappropriately.


For more than 100 years before PubMed was freely accessible via the Internet, the medical literature was commonly accessed through Index Medicus, the first comprehensive index of journal articles, published by the National Library of Medicine. Finding the perfect reference often necessitated hours of paging through the “big red book,” followed by a trip to the stacks and, perhaps in later years, a trip to the copy machine. The required effort constituted a form of activation energy that naturally restricted the number of articles retrieved to only those most pertinent to the argument at hand. It also encouraged a careful and thoughtful reading and critique of each paper cited.

The introduction of PubMed in the mid-1990s revolutionized the process of finding references. Comprehensive searches that once took hours now take seconds, and the activation energy that formerly limited retrieval has all but vanished. As a result, it has become easy to assemble long bibliographies without reading each cited paper carefully and critically.

Other consequences of the trend toward less critical evaluation of cited literature include not only a gradual erosion of scholarly rigor, but also a dilution of the value of the impact factor as a measure of journal prominence. Inaccurate citations inappropriately inflate citation counts, compromise estimates of a publication’s influence, and introduce error into calculations of journal impact factors. This effect is exacerbated by the growing practice of citing general review articles in place of ground-breaking original studies.1

Another complication is the trend for many journals to relax prior restrictions on the maximum number of citations allowed, in hopes of increasing the journal’s total citation count. For example, random samples of research articles published in the American Journal of Physiology reveal that the number of papers per bibliography averaged approximately 29 in 1989, 37 in 1999, and 42 in 2009; over the same interval, the journal’s impact factor also increased. Whereas this policy shift may encourage more complete and thorough citation of fundamental work, the downside is that it may also facilitate unnecessary citations and may even incentivize self-citation, another growing problem. Certainly, many of these trends are the result of growing pressure on investigators to publish more papers, coupled with a simultaneous decline in emphasis on manuscript quality. Is it any wonder that some authors promote the concept of the “minimum publishable unit,” defined as the smallest amount of data that can pass peer review?
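Underlying several of these concerns is the arithmetic of the impact factor itself. As a simplified sketch, following the conventional two-year Journal Citation Reports definition:

\[
\mathrm{JIF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

where \(C_Y(X)\) denotes the citations received in year \(Y\) by items the journal published in year \(X\), and \(N_X\) denotes the number of citable items the journal published in year \(X\). The numerator counts every citation equally, whether or not the citing author actually read the paper or cited it appropriately, so longer bibliographies and inaccurate references flow directly into the metric.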

In the midst of this challenging environment for science publishing, what are the best options? Simply put, students, colleagues and coauthors must critically read each paper cited in its entirety. Cite only the best, strongest and most original publications. Cite review articles only if they offer unique perspectives, concepts, or synthesis. In the tradition of the great news journalist Walter Cronkite, endeavor to present both sides of each controversy with balance and insight; curtail the practice of citing only work that supports a given position while ignoring work that doesn’t. With these simple practices, much of the quality and depth that are typical of excellent science can become more commonplace. Only then will the marvelous modern tools of citation retrieval and management reach their fullest potential.

William J. Pearce is head of the Genito-Urinary & Reproductive Pharmacology section of the Faculty of Pharmacology & Drug Discovery at F1000, and a professor of physiology at Loma Linda University School of Medicine.

1. S. Wiley, “Down with reviews,” The Scientist, 24:31, April 2010.
