Citations: Too Many, or Not Enough?

William J. Pearce
Jul 31, 2010

We are citing too many papers inappropriately.

Students, colleagues and coauthors must critically read each paper cited in its entirety.

For more than 100 years, before PubMed was freely accessible via the Internet, the medical literature was commonly accessed via Index Medicus, the first comprehensive index of journal articles, published by the National Library of Medicine. Finding the perfect reference often necessitated hours of paging through the “big red book,” followed by a trip to the stacks, and perhaps in the later years, a trip to the copy machine. The required effort constituted a form of activation energy that naturally restricted the number of articles retrieved to only those most pertinent to the argument at hand. It also encouraged a careful and thoughtful reading and critique of each paper cited.

The introduction of PubMed in the mid-1990s revolutionized the process of finding and retrieving relevant literature. With much of the drudgery and inconvenience gone, long lists of potentially important publications could be compiled quickly and easily on any computer with an Internet connection. The parallel development of reference database management software further expanded the ability to compile and organize large numbers of abstracts, and ultimately article PDFs. On one hand, these impressive tools greatly facilitated preparation of comprehensive literature reviews with unprecedented breadth. On the other hand, easy access to so many publications reinforced the temptation to read each paper cited less critically, and sometimes not at all. Thus was born the practice of citing numerous diverse publications to support a point of discussion, instead of citing the one or two most relevant publications with the greatest impact on a field, as if quantity and quality of citations were interchangeable and equally persuasive. Unfortunately, conscientious reviewers of grants and manuscripts gradually found that efforts to verify the pertinence and strength of the evidence cited often revealed that numerous citations were inappropriate, or even incorrect. In this way, reviewers have become increasingly important in critical evaluation of the cited literature, which further burdens an already overburdened peer-review process.

Other consequences of the trend toward less critical evaluation of cited literature include not only a gradual erosion of scholarly rigor, but also a dilution of the value of the impact factor as a measure of journal prominence. Inaccurate citations inappropriately augment citation counts, compromise estimates of a publication’s influence, and introduce error into calculations of journal impact factors. This effect is compounded by the growing practice of citing general review articles in place of ground-breaking original studies.1 Another complication is the trend among many journals to relax prior restrictions on the maximum number of citations allowed, in hopes of increasing total citation counts for the journal. For example, random samples of research articles published in the American Journal of Physiology reveal that the number of papers per bibliography averaged approximately 29 in 1989, 37 in 1999, and 42 in 2009; during the same interval, the journal’s impact factor also increased. While this policy shift may encourage more complete and thorough citation of fundamental work, the downside is that it may also facilitate unnecessary citations and may even incentivize self-citation, another growing problem. Certainly, many of these trends are the result of growing pressure on investigators to publish more papers, coupled with a simultaneous decline in emphasis on manuscript quality. Is it any wonder that some authors promote the concept of the “minimum publishable unit,” defined as the smallest amount of data that can pass peer review?
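To see why inaccurate citations distort the metric, recall how the standard two-year impact factor is computed (a simplified sketch; publishers apply additional rules about which items count as “citable”):

$$\mathrm{IF}_Y = \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{citable items published in years } Y-1 \text{ and } Y-2}$$

Because the numerator counts every citation equally, whether or not the citing author actually read or correctly represented the cited work, each inappropriate citation inflates the ratio directly.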

In the midst of this challenging environment for science publishing, what are the best options? Simply put, students, colleagues and coauthors must critically read each paper cited in its entirety. Cite only the best, strongest, and most original publications. Cite review articles only if they offer unique perspectives, concepts, or synthesis. In the tradition of the great news journalist Walter Cronkite, endeavor to present both sides of each controversy with balance and insight; curtail the practice of citing only work that supports a given position while ignoring work that doesn’t. With these simple practices, the quality and depth typical of excellent science can become more commonplace. Only then will the marvelous modern tools of citation retrieval and management reach their fullest potential.

William J. Pearce is head of the Genito-Urinary & Reproductive Pharmacology section of the Faculty of Pharmacology & Drug Discovery at F1000, and a professor of physiology at Loma Linda University School of Medicine.

1. S. Wiley, “Down with reviews,” The Scientist, 24:31, April 2010.