Re: “I Hate Your Paper,”1 the real problem is that publications have lost their purpose. The point of publication is to inform the scientific community of genuinely important findings and to contribute to the growth of knowledge. When I hear—as I typically do when a speaker is being introduced—that some very senior scientist has hundreds of publications, I always wonder: do any of them matter? We have become obsessed with publication for publication’s sake. There are now more journals than ever, and still the “top-tier” journals reject enough papers to fill a library. Here’s a proposal: give each scientist a maximum of 25 papers to publish over an entire career. I bet science would not suffer one bit—and all these issues would go away.
James S. McDonnell Foundation
St. Louis, Mo.
In my opinion, peer review not only works, it is essential. I would say almost all of my papers have become better focused, clearer, and more useful as a result of peer review.
Having said that, there is almost always a comment or two that makes little sense, is biased, is off the mark, or is simply not useful. In my experience, however, editors know this, which is why there is almost always more than one reviewer and why authors can cogently challenge reviewers’ comments.
With regard to prestigious journals, they are very difficult for most scientists to publish in, which you would expect given that they are meant to publish the best of the best. That is largely true, but these journals also appear to favor certain institutions simply because those institutions are themselves prestigious, even when the science in question is not so top notch.
Matthew J. Grossman
I strongly disagree with the point that there are too many papers. The opposite is true.
Some investigators publish trivial or negative results. Although this dilutes the literature somewhat, many of the rest of us learn from the failures of others’ experiments. It is depressing, as well as a waste of public research resources, to spend time pursuing an avenue that has already proved a blind alley for others.
Other investigators hoard their negative and trivial results, partly out of pride in their scientific reputations, and partly out of a desire to see their competitors make the same mistakes and gain no advantage.
Scripps Research Institute
The fact that high-quality papers are not published in particular journals does not reflect defects in the peer-review system. Indeed, the papers identified in “Breakthroughs from the Second Tier”2 were published in journals respected in their fields. The problem lies in rating journals by simplistic impact factors, and in the use of those defective rankings by “authorities” such as granting agencies and promotion and tenure committees.
Pennsylvania State University Hershey
While this analysis of the peer-review system may be appropriate, one other consideration has been missed: in the same way that reviewers for journals (and grant applications) favor work that is fashionable, readers, and those who cite papers in their own publications, probably follow a similar pattern.
True breakthroughs usually cannot be measured over the short term. Ideas and results that truly challenge established dogmas have even greater difficulty getting published, and then in being recognized and accepted.
University of Southern California
Los Angeles, Calif.
The problem is not so much that top journals reject novel ideas as that they reject novel authors. In a top-tier journal, a submission is more likely to be rejected immediately, without review, if the author has no history with the journal, because that is the easiest criterion for editors to apply. This tends to create a sort of “verified authors” list of those who can publish a great deal there, and then everyone else.
Prague, Czech Republic
I agree with most of the paper’s comments about the peer-review process. However, nobody considers the reviewers’ perspective. It is also unreasonable to expect editors to have enough expertise across a journal’s scope to judge whether a particular reviewer is trying to sabotage a competitor’s work. As a reviewer, I strive to give constructive comments (and I think the majority of reviewers try to do the same); however, I sometimes receive sloppy manuscripts that require extensive editing, or simple data dumps assembled without careful thought about presenting the work coherently. If we are discussing the peer-review process, we should also consider this side of the story.
US Naval Research Laboratory
We are all too aware that the peer-review process is imperfect, and publishing innovative work is clearly sometimes difficult. But a handful of papers that, in reality, eventually got their due is a very minor aspect of the problem—or, in light of the number of citations they received, perhaps a non-problem. The number of high-ranking papers, and corresponding careers, boosted by research that is rightfully forgotten overnight clearly swamps the breakthroughs from the second tier.
Karl-Franzens University Graz
Wikipedia offers a model of “corrections” and could conceptually be part of a publication process. With electronic media, references can be hyperlinked to almost any scientific article; indeed, a single idea can carry dozens of “references” and layers of debatable comments. We also have search-and-select tools to find missing links and to “tag” linkages or limitations in references. This keeps many ongoing communications accessible, and corrections can be identified as “delayed references” and “active rebuttals,” all linked together. A “cutoff” time would be helpful from a practical perspective. In addition, we need to add multimedia to the modes of expression and description.3
University of Texas
I understand the desire of journals to limit the number of citations.4 However, I find myself making hard choices when preparing my papers. Whose paper do I cite, and whose do I leave out? Often there are appropriate and correct references that, alas, are not the top 2 or 3 references on the topic. Shouldn’t the reader of your paper know about these references, too? Could they lead to novel hypotheses or new interpretations of that particular field? Also, the findings of one study may be persuasive, but aren’t we more convinced if 2, 5, or 10 studies confirm them? In other words, both quality and quantity matter.
When you look at the issue this way, you come to the conclusion that limiting citations is short-sighted, potentially snobbish, and perhaps counterproductive.
Nonetheless, I recognize that print space is at a premium. So here’s an alternative: publish a primary citation list in the print version, and in the online version provide links to supplemental references covering the other papers relevant to the point being made. Journals already do this with tables and figures. Why not references?
University of California, Davis