
Citations: Too Many, or Not Enough?

By William J. Pearce | August 1, 2010

We are citing too many papers inappropriately.

Students, colleagues and coauthors must critically read each paper cited in its entirety.

For more than 100 years, before PubMed was freely accessible via the Internet, the medical literature was commonly accessed via Index Medicus, the first comprehensive index of journal articles, available through the National Library of Medicine. Finding the perfect reference often necessitated hours of paging through the “big red book,” followed by a trip to the stacks, and perhaps in the later years, a trip to the copy machine. The required effort constituted a form of activation energy that naturally restricted the number of articles retrieved to only the most relevant and pertinent to the argument at hand. It also encouraged a careful and thoughtful reading and critique of each paper cited.

The introduction of PubMed in the mid-1990s revolutionized the process of finding and retrieving relevant literature. With much of the drudgery and inconvenience gone, long lists of potentially important publications could be compiled quickly and easily on any computer with an Internet connection. The parallel development of reference database management software further expanded the ability to compile and organize large numbers of abstracts, and ultimately article PDFs. On one hand, these impressive tools greatly facilitated preparation of comprehensive literature reviews with unprecedented breadth. On the other hand, easy access to so many publications reinforced the temptation to read each paper cited less critically, and sometimes not at all. Thus was born the practice of citing numerous diverse publications to support a point of discussion, instead of citing the one or two most relevant publications with the greatest impact on a field, as if quantity and quality of citations were interchangeable and equally persuasive. Unfortunately, conscientious reviewers of grants and manuscripts gradually found that efforts to verify the pertinence and strength of the evidence cited often revealed that numerous citations were inappropriate, or even incorrect. In this way, reviewers have become increasingly important in critical evaluation of the cited literature, which further burdens an already overburdened peer-review process.

Other consequences of the trend toward less critical evaluation of cited literature include not only a gradual erosion of scholarly rigor, but also a dilution of the value of the impact factor as a measure of journal prominence. Inaccurate citations inappropriately augment citation counts, compromise estimates of a publication’s influence, and introduce error into calculations of journal impact factors. This effect is further enhanced by the growing practice of citing general review articles in place of ground-breaking original studies.1 Another complication is the trend for many journals to relax prior restrictions on the maximum number of citations allowed, in hopes of increasing total numbers of citations for the journal. For example, random samples of research articles published in the American Journal of Physiology reveal that the number of papers per bibliography averaged approximately 29 in 1989, 37 in 1999, and 42 in 2009; during the same interval the journal impact factor also increased. Whereas this policy shift may encourage more complete and thorough citation of fundamental work, the downside is that it may also facilitate unnecessary citations and may even incentivize self-citation, another growing problem. Certainly, many of these trends are the result of growing pressure on investigators to publish more papers, and a simultaneously falling emphasis on manuscript quality. Is it any wonder that some authors promote the concept of the “minimum publishable unit,” defined as the smallest amount of data that can pass peer review?
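The two-year impact factor at issue here is a simple ratio, which is why inflated or inaccurate citation counts feed into it directly. A minimal sketch of the calculation, with entirely made-up numbers for a hypothetical journal:

```python
def impact_factor(citations: float, citable_items: float) -> float:
    """Two-year journal impact factor: citations received in year Y
    to items published in years Y-1 and Y-2, divided by the number
    of citable items published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal: 4,200 citations in 2009 to articles from
# 2007-2008, which together comprised 1,400 citable items.
reported = impact_factor(4200, 1400)            # 3.0

# If, say, 5% of those citations turn out to be inappropriate or
# incorrect, the figure the ratio should have reported is lower.
corrected = impact_factor(4200 * 0.95, 1400)    # 2.85
```

The 5% error rate is purely illustrative; the point is only that every unverified citation propagates directly into the numerator of this metric.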

In the midst of this challenging environment for science publishing, what are the best options? Simply put, students, colleagues and coauthors must critically read each paper cited in its entirety. Cite only the best, strongest and most original publications. Cite review articles only if they offer unique perspectives, concepts, or synthesis. In the tradition of the great news journalist Walter Cronkite, endeavor to present both sides of each controversy with balance and insight; curtail the practice of citing only work that supports a given position while ignoring work that doesn’t. With these simple practices, much of the quality and depth that are typical of excellent science can become more commonplace. Only then will the marvelous modern tools of citation retrieval and management reach their fullest potential.

William J. Pearce is head of the Genito-Urinary & Reproductive Pharmacology section of the Faculty of Pharmacology & Drug Discovery at F1000, and a professor of physiology at Loma Linda University School of Medicine.

1. S. Wiley, “Down with reviews,” The Scientist, 24:31, April 2010.

Comments

anonymous poster

August 5, 2010

Excellent and totally true

August 5, 2010

Very little could be added. I wonder if self-discipline alone could save the day. Maybe the enforcement of a policy could help? I remember the pre-Internet days when, at an institute, all papers were to be checked before submission, and one of the checks was to show the first page of every cited item. The idea was, I guess: if you can show you have the real paper, you may as well have read it.
robinp clarke

August 5, 2010

I was just now in the process of adding a citation of a review to my new theory paper ms (it's Geier's Acta Neurobiol Exp review of the conclusive evidence that mercury was involved in the autism increase). The reason is that it saves my paper from having to parrot-like re-do what that reviewer has already done with adequate competence. It enables my paper to be that much shorter than it was in an earlier draft containing my own review there. It enables my reference list to be reduced by rather a lot (along with some other review citations).

Citations have multiple rationales. One, which is the one I most respect, is to point the reader to the evidence or background on which the author is basing his work or his ideas. Of course citations can be used also to assign credit, but they are a pretty useless, grossly distorted means of so doing.
anonymous poster

August 6, 2010

Haven't you ever received a nasty review because you didn't cite the reviewer's paper(s), relevant or not? Self-aggrandizing referees are forcing over-citation or irrelevant citation in many cases. As for citing the original ground-breaking paper vs. a review, that is rarely appropriate after some moderate time period: too much correction, amplification, explanation, etc. will have been published in the meantime, and one can capture this either by citing hundreds of papers or a single review. Sure, there is no excuse or reason for citing irrelevant papers, unless the editor/referees demand it (they who cannot, by definition, be wrong), but this is very subjective. I don't find many referees understand enough outside a narrow area to make such a determination on the total set of papers cited in many articles. As for self-citation, if a paper is relevant, what difference does it make who wrote it?
anonymous poster

August 7, 2010

The author fails to mention the enormous increase in the number of articles over the years. Someone writing a paper today has to contend with a one- to two- (and maybe more) order-of-magnitude increase in the number of references in a field compared to a few decades earlier. Scientists today are not smarter, not faster readers, nor able to absorb more material than 25-50 years ago. Hence, the importance of reviews has increased, as has citation of them. The time required to sift through all pertinent references by careful reading is daunting. While self-control is admirable, limiting the number of citations in papers may lead to authors citing papers of lower importance or relevance because less work is required. The total amount of work is decreased with reviews; rather than many investigators each doing a thorough scan of the literature, one or more brave souls take on the responsibility of writing a review. Perhaps we should require stricter criteria for screening and accepting reviews.
anonymous poster

August 9, 2010

The point is made by some posters that review papers are increasing in importance, apparently because this saves researchers the work of doing an independent review. This is undoubtedly a convenient and efficient approach, but I have certainly read review papers that disproportionately play up the importance of the authors' own work and skim over the work of others - this becomes self-perpetuating as other authors relying on the review will make the same mistake. Reading a review paper is no substitute for going back to the original sources, and seeking out other papers the review may have missed. Indeed, I have found this to often reveal errors in conventional wisdom that yield interesting research ideas.
anonymous poster

August 12, 2010

There seems to be the impression that people haven't actually read these papers. Maybe they have. There is the point that papers are a lot easier to get access to than they ever used to be. You don't need to have a subscription to the journal or even have the journal in your city; you can just read the PDF (however you get it). If you have read even part of the paper, you should be citing it.

In some fields, such as astrophysics, 'papers' are no more than one side of A4, whereas in others they would be at least 10 pages.

Maybe some people are just ahead of the curve?
anonymous poster

August 12, 2010

This is an interesting and compelling article. It does not satisfy all of the arguments but has nicely opened up a forum for debate.

The author states: "Simply put, students, colleagues and coauthors must critically read each paper cited in its entirety. Cite only the best, strongest and most original publications. Cite review articles only if they offer unique perspectives, concepts, or synthesis."

However, this would be a subjective assessment. My own view of what is an important paper will differ markedly from the next person's.

The author (like myself, I might add) is also a member of the Faculty of 1000 community. F1000 was established, in part, to cite really important, ground-breaking work, not necessarily that published in Nature, Science, Cell, Blood or JBC, but work from the vast repertoire of available literature. This too is subjective.

Since (nearly) all work has been peer-reviewed prior to publication, the onus of responsibility for citing supportive and important work rests with the authors of the paper citing that work. Similarly, there is an onus of responsibility on the readers of that work to judge the paper critically while critically examining the cited works used either to support one's argument or to lay the foundations for the published work. The real problem with forcing an opinion of citing the "perceived" best and most important is that it may risk excluding work published in some of the less well-known journals. This would be a massive mistake, as an article in Science alluded to recently, describing infamous papers that failed to make the top 1% tier of journals but which were cited repeatedly in the hundreds and thousands. You could make the case perhaps that these were cited inappropriately? I doubt it.

This article, although compelling, raises more invasive questions than simply putting forth an opinion to cite only the best and most important. The argument is flawed, since human error and subjective reasoning are involved.
anonymous poster

August 12, 2010

What is wrong with citing reviews that encompass what is known in a field with a large number of references? For the sake of practicality, the reader could refer to the review and narrow down his search according to his particular interests. It may not be the publication in the highest-impact journal that the reader is interested in. As for the impact factor, it is all relative: when the green fluorescent protein was discovered, this was not at all considered high-impact research.

Nevertheless, scientists like to have their work cited as a means of recognition.
Douglas Easton

August 12, 2010

I am currently toiling away at a review. I did one on the same subject published ten years ago. I forgot what a task it is. From my point of view, accurate citation is an absolute must. One's job in reviewing is to get it right. This can point researchers to the appropriate papers for citation, and hopefully, for reading. Also, I do hope that they cite me in the introduction sections of their papers, particularly in journals like Science and Nature, which have severe limits on space.
anonymous poster

August 12, 2010

Great comment about "you ignore repercussions for failing to cite". We have added marginal citations to get through a reviewer's comments that likely were citations of that reviewer's work. It should be unethical to mention one's own papers in a review. The H-index is useful, but is being misused by a few (and it does not count patents, which some of us think are more useful than peer-reviewed pubs).

Editors are in a position to minimize this reviewer abuse of authority.
Marcus Muench

August 12, 2010

The number of scientists publishing, the number of journals in existence and, consequently, the number of papers published is increasing. Thus, it makes sense that today a single topic may have more papers associated with it than 10, 20 or 30 years ago. I don't think it is fair to cite just one paper in the best journal when multiple papers representing work done in different labs simultaneously but published in different journals within months of each other could all justifiably be given credit.

Papers have changed noticeably over the last several decades in that the amount of work represented by a single paper has increased. Papers with 6 figures a couple of decades ago generally had 6 individual graphs. Today, it is not uncommon to find a single figure made from 6 graphs. Science is more efficient now in generating reams of data, yet the length of papers has not changed much. In my experience, I've had to rely more and more on citations (including self-citations) to condense the Materials and Methods and Introduction to fit journal length requirements.
anonymous poster

August 12, 2010

In talking to some researchers, you do find they haven't read the whole paper. Sometimes it's plain that they don't understand the big picture. It becomes obvious why, when they say that they simply used the PDF search function to find the relevant phrases.

"It takes too much time to read everything properly...."

Which raises the next question: do peer reviewers read the full texts of everything a researcher quotes, to work out if the paper is cited appropriately?
ERIC J MURPHY

August 16, 2010

This editorial is one of the best I have read for some time in The Scientist. It covers many important topics on the correct use of citations. Along with many of the other individuals commenting here, I am not sure an increase in the number of citations in a research paper is a bad thing. Rather, it represents the growth of any given discipline and the easy accessibility of the literature to everyone worldwide via sites such as PubMed or SCOPUS.

Like the author, I too recall going to the indices looking for a particular topic, digging through the stacks looking for a particular paper, and then using the references from one paper to lead me to 10 more papers, which soon led me to 20-50 more papers on the topic. While old school, this was a slow and methodical process that forced one to actually read the papers, as that was the only reliable means to expand your knowledge regarding the validity and importance of the other papers cited in the bibliography. The important point is that it forced one to READ the papers, not merely read the titles and from the title surmise the entire contents of the paper.

This then moves to another part of this well-written editorial: the failure to actually read the papers cited. There is a growing practice of citing a citation from some other paper in your own manuscript without ever verifying that the original citation was indeed correct. This then propagates a bad citation, as several more people will use it. It all stems from not really reading the literature, but rather spending time reading the titles or abstracts.

More than once I have seen abstracts leave out data that would perhaps force the authors to make another or alternative conclusion. Journal editors have almost universally included a question on the review form about whether the abstract accurately reflects the body of work, and then the same question is asked about the title. Why do editors ask these questions of their reviewers? In part, because these are important components of any given paper, but more pertinently, because we know that many citations will derive from readers only examining those parts of the paper. Hence, we are collectively trying to correct the growing problem by at least making those components reflective of the work contained in the paper.

One very important point the editorialist made was that it is important to cite the original paper in which the experiments were published. Many individuals commenting on this editorial do not respond favorably to this point. However, the editorial is absolutely correct. The proper means of citation is to cite the original papers in which any particular point or outcome was published. Give credit where credit is due, rather than merely citing a review on the topic, more than likely not written by the authors who have earned the right to have their paper cited. In the end, authors should ask themselves whether they would like their own work to go uncited. I think this is the application of the golden rule.

Reviews of course can be cited, but more along the lines of "for recent reviews on this topic, see Smith et al., 2010; Jones et al., 2009". Reviews are important, but every review has a bias, and the authors may not have thoroughly examined the literature, thereby leaving out important points. Right now I am working on a review and have another list of 25 papers or so to go get out of the stacks. I will probably have well over 125 references on a topic that I thought would have about 30-40 references. Again, I am using the old-school approach of reading the literature and finding citations that did not come up in my PubMed search. In the end, as an author, it is important for me to provide the reader of my review with the best bibliography possible, so that the reader can access those papers and read each and every one that catches their interest. However, it is also important for me to give factual information and to speculate about the significance of this information in as complete a manner as possible. This includes citing literature that puts forth alternative hypotheses, no matter how far in left field I personally think those papers might be.

So, reviews are not evil, but they certainly should not be cited in lieu of actually citing the work. I urge our young students beginning their careers in science to remember their obligation to accurately reflect what is in the literature, and that this extends well beyond merely reading the title and abstract. We have a duty to our colleagues as well as a responsibility to the greater community to do so.
bob jo

August 16, 2010

I think the author is confused about the purpose of citation. The purpose of citation is to provide reinforcement for a point illustrated in a manuscript. We do not cite to provide support for the cited article but rather to support our own point.

So, when the author says "Cite only the best, strongest and most original publications", he is making this suggestion based on a false understanding of the purpose of citation. Perhaps his time with F1000 has caused him to forget that we do not read citations to get an idea of what the author thinks is the best of the current literature, but rather to determine if his/her own work is supported by the literature as a whole.

With that in mind, one should cite all of the relevant studies regarding any particular piece of research. Indeed, our reference libraries should be expanding geometrically to keep pace with the expansion of published studies.

I am aware that citation has become a default measure of the importance of a particular piece of work, but that is entirely secondary to the true purpose of citations. If you want to reform the use of citation as a measure of a study's worth, I am behind you all the way. In contrast, if your intent is to stop people from citing peer-reviewed material because it does not meet your capricious definition of what is "best, strongest and most original"... well, try to remember that papers are arguments that should use all of the evidence available.

Oh, and keep your snobbishness out of my science policy, please.
VETURY SITARAMAM

August 17, 2010

The article, though the good purpose behind it cannot be questioned, was rather off the mark. The author wanted only the best-judged papers to be cited, with great care. This is like the district administrator who said that all their students were above average! Objective judgement is an oxymoron.

What is all the clamour about citations? Other than career rewards supported by apparent social consensus, they serve little purpose in enhancing your own work. We are supposed to find a worthwhile question, and evaluation of the product of these efforts, the publication, is post facto. "Do only the best research" is the obvious parallel, and the flaw is in trying to predetermine the outcome. Fostering an individual attitude is most desirable, but working out a mechanism is self-defeating. That is, unless one changes the paradigm.

I see behind all this an increasing pressure to change the paradigm of citations. I think that citation is a two-way street. It confirms the authenticity of the argument's build-up as far as the author is concerned. It also ensures (though rarely) self-policing, such that thought plagiarism does not occur. Improper citations and omitting others' work, rampant as they are, are a matter of thought plagiarism in a publication world that is primarily market-driven. The solution is in automation of citations linked via computer, if we can evolve a system for it. We can. Thus, what is past is automatically linked regardless of the author's choice, and what the author submits is only in defense of his own antecedents, methodology, thought and action.

The gravest danger we face is the increasingly uncritical acceptance of what is good science, simply because overwhelming social forces will make you go with the flow. That cannot be good science.

It is also necessary to tone down our views on what is a great paper. A lot of good science is thoroughly professional. An exclusively reward-based view of science makes us hopelessly myopic.
CAMILO COLACO

August 17, 2010

Would one solution to the citation problem be to give the referees of papers responsibility for picking up citation amnesia? Here is a quote from the author of a review when I contacted him: "I am familiar with your publications and I can only apologise for my omission which also failed to be detected during the peer review process".
Andy Li

August 17, 2010

I absolutely agree that we have to take care when it comes to citing a paper. I do know some people who list a paper in the references of their manuscript after reading only the title. Considering that the good and the bad are intermingled in publications, we have to be careful. Sometimes more is not better.

August 18, 2010

It is one of the most pertinent articles I have come across recently. In fact, an article of this kind was long overdue. The problem described is very important and genuine. During my active research career, I faced rejection of my own paper by a reviewer because his own papers were not cited by me. Such prejudiced and biased decisions are not uncommon in research publishing. I have known Herman Mark, as Editor, to allow me three repeated submissions in his attempt to do me full justice in defending my arguments against a reviewer's objections and comments. In citing papers in any research paper, what is most at stake is the intellectual honesty of the researcher and his upright attitude in not resorting to favouritism or pampering anyone, whether superiors, friends or any established individual, if it is not justified or required to quote their work. He/she should always bear in mind that unjustified citations amount to misguiding and misinforming future research workers. Temptations of this kind should be strictly curbed through resolute mental training and cultivating a judicious attitude.
JOSHUA MILLER

August 24, 2010

I understand the desire of journals to limit the number of citations. However, I find myself making hard choices when preparing my papers. Whose paper do I cite and whose do I leave out? Yes, the most relevant papers and the papers with the most impact should be cited, but does that mean other papers that support the point being made are not worthy of mention? Yes, inappropriate and incorrect references should not be cited, and it is my responsibility as the writer of the paper not to cite such papers. However, often there are appropriate and correct references that, alas, are not the top 2 or 3 references on the topic. Shouldn't the reader of your paper know about these references, too? Shouldn't the authors of these references receive notice for their work? Isn't it possible that there is important information buried in these references that may only be recognized by a reader of your paper who looks up that second-tier citation, reads it, and finds something you and others have missed? Could this lead to novel hypotheses or new interpretations of that particular field? Also, the findings of one study may be persuasive, but aren't we more convinced if 2, 5, or 10 studies confirm the findings of the first study? In other words, both quality and quantity matter.

When you look at the issue this way, you come to the conclusion that limiting citations is short-sighted, potentially snobbish, and perhaps counter-productive.

Nonetheless, I recognize that print space is at a premium. So here's an alternative: publish in the print version a primary citation list, but in the online version have links to supplemental references that include the other papers relevant to the point being made. Journals already do this with tables and figures. Why not references?
