You switch on the evening news to hear a headline report of a small new study claiming unforeseen risks to health. The startling nature of the claims is, as so often happens, in inverse proportion to the study's sample size. The news program has already located a group of concerned parents and an apparently off-hand response from health officials. Over the ensuing days, add into the mix other interested parties, a 'maverick' scientist, vitriolic commentary drawing comparisons with thalidomide, and official rebuttals. Before you know it, the attentive population is concerned that the evidence seems to be saying several different things. Now over to you to explain why it doesn't.
Peer review usually produces comments and makes recommendations on some of the following:
Are the findings original? Is the paper suitable for the subject focus of this journal? Is it sufficiently significant? (Or is it a "me too" paper? Is it "salami slicing"?)
Is the paper clear, logical, and understandable?
Does it take into account relevant current and past research on the topic?
Are the methodology, data, and analyses sound? Are the statistical design and analysis appropriate? Are there sufficient data to support the conclusions?
Are the logic, arguments, inferences, and interpretations sound? Are counterarguments or contrary evidence taken into account?
Is the theory sufficiently sound, and supported by the evidence? Is it testable? Is it preferable to competing theories?
Does the article justify its length?
In papers describing work on animals or humans, is the work covered by appropriate licensing or ethical approval? (Many biological and medical journals have their own published guidelines for such research.)
Peer review. It does not sound very compelling in the face of dramatic headlines, but wider public understanding of peer review has become essential in this age of anti-orthodoxy and scientific trial-by-media. Despite the fashion for so-called lay expertise, it is unfair and unrealistic to expect people to acquire the specialist knowledge to sift information meaningfully.
In the recent controversies surrounding mobile phone electromagnetic fields, the measles-mumps-rubella (MMR) vaccine, and genetic modification of foods, the proliferation of scientific information did not resolve dilemmas about evidence. Social surveys indicate that, on the contrary, the public felt very short of guidance about what conclusions to draw. A common question from parents to the UK's Health Protection Agency during the height of the MMR fears was: "What would you do? Vaccinate?" People need a way to make judgments about the claims on their attention and concern.
Despite the extensive use of peer review by scientists to determine which research is plausible and worthy of publication and further consideration (see Box), in the rest of society very little is known about the existence of the process or what it involves. With mounting concern about public trust in science, it seems strange that so little effort has been expended in explaining peer review. For example, it could be explained as a much more reliable guide to research findings than the ways in which the public is often invited to judge scientific claims, which are usually based on whether the person who conducted the work cuts a sympathetic figure, or on how the research was funded.
Sense About Science, a science advocacy trust, convened a working committee to address this problem. It looked at the concerns expressed by scientists about the conduct of peer review and at the content of criticisms of the scientific "establishment" during controversies. The committee's report, "Peer review and the acceptance of new scientific ideas," published in June, argues that scientists are too defensive about the peer-review process and about explaining the importance of expert judgment from people publishing research in a similar field.
Many scientists welcome the public's demand for science news and the curiosity that goes with it. However, it is hard to compete with organized campaigns and the potential for unfounded claims to capture the imagination of policymakers and the public. We need a cultural shift towards asking tough questions about the information that is put before the public: Have these claims been peer reviewed? Has the study been published in a recognized scientific journal? How many other research papers have reached the same conclusions? Throughout the controversies over genetics, transgenics, and cloning, for example, it was surprising that so little was said about how scientific peers had assessed the claims and, in some cases, why the research had not been shown to those peers before the results were made public. That information is crucial for non-experts, and people should know to ask for it.
Of course, peer review is not the last word on a piece of research, nor is it free of problems such as delays and mistakes. However, the problems of peer review should not detract from the importance of the principle and the discipline it imposes. By becoming fixated on the "burden" of peer review, scientists seem to have overlooked a wider opportunity: to put a lot more pressure on the people who bring research claims to the public to explain exactly what the status of the work is.
People are free to judge the plausibility of research claims and their implications, but scientists, commentators, and educators who are committed to seeing public discussion informed by higher-quality research should use every opportunity to explain peer review and to make the dividing line between evidence and opinion as clear as possible.
Tracey Brown is director of Sense About Science, whose report on peer review is available online at