Scientific publications are regularly evaluated by post-publication peer review, the number of citations they receive, and the impact factor (IF) of the journal in which they are published. But research evaluating these three methods, published in PLOS Biology last week (October 8), found that they do a poor job of measuring scientific merit. “Scientists are probably the best judges of science, but they are pretty bad at it,” said first author Adam Eyre-Walker of the University of Sussex in the U.K. in a statement.
Eyre-Walker and coauthor Nina Stoletzki of Hannover, Germany, analyzed post-publication peer review databases from Faculty of 1000 (F1000) and the Wellcome Trust, containing 5,811 and 716 papers, respectively. In each database, reviewers assigned papers subjective scores based on merit. Eyre-Walker and Stoletzki expected that papers of similar merit would receive similar scores, but they found that reviewers assessing the same paper gave it the same score only about half the time, which is only slightly more often than expected by chance. The researchers also found a strong correlation between the IF of the journal in which a paper was published and the merit score reviewers assigned to it.
“Overall, it seems that subjective assessments of science are poor; they do not correlate strongly to each other and they appear ...