A novel scheme for rating the relative impact of scientific journals, unveiled this week by post-publication peer review outfit Faculty of 1000 (F1000), is being questioned by scholarly publication experts. The rankings, which place Nature on top of the list for biology journals and the New England Journal of Medicine atop the medicine heap, were built using scores awarded to papers published in 2010 in thousands of journals by F1000's 10,000-strong "faculty" of researchers and clinicians.
While the upper echelons of F1000's rankings, which include Cell, Science, PNAS, and Lancet, more or less correspond with rankings of the same journals based on their impact factors—a metric based on citation frequency, among other factors—the list contains some surprises further down in the pack, and the validity of some of those placements is in question. For example, Nature reported that one lesser-known journal...
F1000 editor Richard Grant (no relation to the author) told Nature that the journal editor should have reported such a conflict, and that the journal in question was pulled from the ratings while F1000 investigated the matter.
Critics are also decrying the system's dependence on scores awarded to papers by working researchers, who may carry their own biases into the process. "The scores may tell us as much about the composition of the F1000 faculty as they do about the relative quality of various journals," University of Washington, Seattle, biologist and F1000 faculty member Carl Bergstrom told Nature. Bergstrom publishes a rival ranking method called the Eigenfactor.
Full Disclosure: F1000 is owned by Science Navigation Group, which also owns The Scientist.