Peer Review and the Age of Aquarius
It’s time to reinvent the system that validates scientific discovery
This month our new column, Thought Experiment, considers whether mathematics can answer the deepest perplexities of science, such as evolution and consciousness. Here’s a corollary: Can metrics point to the great discoveries of science, and reward the discoverers?
Underlying this latter question is the profound truth that scientific careers are largely made or squelched by numbers that measure the import of one’s research, particularly how often one is published in high-impact-factor (IF) journals and how often (and where) one’s papers are cited. That flaws abound in these metrics is framed by neurobiologist Bjoern Brembs: “Without a moment’s hesitation I would fail any undergraduate who comes with a project using statistics only half as bad as the...
Fortunately, scientometricians are tackling the problems. As laid out in a recent issue of Nature, alternatives to IF and the citation index are being developed to quantify research performance, including Google’s PageRank, social bookmarking, the h-index, online access, and mapping techniques. Yet it’s important to remember that popularity metrics can easily be skewed or gamed, and that intellectual merit can be lost in the shuffle.
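For readers curious about what one of these alternative metrics actually computes, here is a minimal, illustrative sketch (not drawn from the column itself) of the h-index: an author’s h-index is the largest number h such that h of their papers have each been cited at least h times. The citation counts below are hypothetical.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have been cited at least h times each."""
    # Rank citation counts from highest to lowest.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Even this simplest of the new metrics illustrates the editorial’s point: the number rewards sustained citation across many papers, yet says nothing about whether any single paper was a great discovery.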
Beneath these computations lies the foundation of the scientific career edifice: peer review, whereby grant proposals and research results are deemed worthy of funding and of bearing the imprimatur of a high-impact journal… or not. On this subject, one need only cede the podium to Richard Smith, former editor of BMJ, who decries the enormous time, money and energy spent with dismal outcomes in detecting fraud and error, and in unearthing the truly great papers: “After 30 years of practicing peer review and 15 years of studying it experimentally, I’m unconvinced of its value.”
More than 300 years since the invention of peer review and 30 years post-Web, it’s time to act. Lest we forget, the Web was originally designed to disrupt scientific publishing, as recently noted by Michael Clarke in the Scholarly Kitchen blog.
The first major disruption has been open access (OA) publishing, a prerequisite for the new metrics, which thrive on increasing numbers of papers and data. And despite its fledgling status, OA has ushered in a second major disruption to the scientific establishment: post-publication peer review (PPPR), in a variety of experiments and formulations, pioneered by BioMed Central and PubMed Central.
Before opining on the glories of PPPR, a word from our sponsor: The Scientist and Faculty of 1000 (a PPPR service) have recently joined forces. Our audience and our goals are the same: to identify and amplify the most interesting developments in the life sciences. Magazine content will thus emphasize research deemed by Faculty Members to be game-changing in 31 disciplines and 305 specialties.
In the basic formulation of PPPR, qualified specialists (peers) evaluate papers after they are published. Rather than being hidden, reviewers’ identities and comments become part of the published record, open to community review and response. Renowned educator Paulo Freire once said, “To impede communication is to reduce men to the status of things.” PPPR at its best facilitates ongoing dialogue among authors, peer reviewers, and readers.
Thus, transparency and ongoing scrutiny by a much wider community can minimize the failures of traditional peer review (depicted on the cover of this issue), and can also bring to light innovations and discoveries that may have been ahead of the curve at the time of publication. Robust involvement by the community is required, and proposed “reputation systems” may be the key to ensuring that commenting and revising are rewarded.
The most wondrous disruption of all: the idea that papers and data have lives beyond their original posting, and that discoveries may emerge from them through evaluation and iteration months or even years later. Let’s work out the details.