
Making Sense of Science


By Tracey Brown | November 21, 2005


Lemon juice may help beat AIDS; genetically modified crops will create superweeds; measles vaccine may be responsible for autism; and mobile phones can cut male fertility by a third. Such questionable science claims are part of a familiar litany that outrages scientists and prompts despairing comments about the sensationalist press and the outlandish world of science and medicine on the Internet.

Unfortunately for scientists, these kinds of claims reach beyond the calm rectitude of the scientific literature. Every day, medical helpline operators, pharmacists, and doctors meet with the consequences of bad science in the public domain: worried parents, patients frightened about their treatment, and people taking ineffective remedies. Take a recent call to the Meningitis Foundation: Does gargling with lemon juice kill meningococcal bacteria?

Government representatives also report a rising caseload from misinformation and hype about alleged scientific findings. The anxiety and energy expended on campaigns such as those against mobile phone towers and incinerators are putting a new kind of pressure on representatives to delve into scientific issues and form judgments. Good science is essential for directing resources to good effect and for protecting public health. But how is good science defined, and how are all these people on the front line supposed to help the public distinguish which claims are scientifically grounded and which are not?

We discovered this substantial and growing pressure on frontline information providers and services during our work this year with medical charities, parliamentarians, local pharmacists, schools, and medical practitioners to find the language for explaining the importance of scientific peer review. It was pressure from these groups that pushed us to turn our "Short Guide to Peer Review" into "I don't know what to believe..." – an eight-page explanation of how scientists publish their research results and why that matters.

The aim of the guide, launched this month and available at http://www.senseaboutscience.org, is to popularize the quality checking and rigor that begin to separate scientific work first from conjecture and then from flawed work. It suggests that the first question to be asked is "Is it published?" The guide covers the kinds of things that scientific reviewers look for – validity, significance, and originality – and describes the process of scientific publishing. It also tells people how to dig a little deeper for evidence of whether scientific findings have been published in a peer-reviewed journal.

The need to popularize peer review is a drum that we have been beating for some time in scientific circles, but scientists have been disinclined to explain it. Why? Well, there's certainly a fear of being seen as naive or of whitewashing a system that is not foolproof. What's more, there's an enticing vogue abroad in professional spheres for demonstrating an awareness of your limits, perhaps in response to criticism of overconfidence or a fear of being seen as complacent. More commonly, I suspect, aspects of the system just seem boring and difficult to explain.

But the scientific community's preoccupation with the horror stories of peer review has created a blind spot to the very big need out there. "Boring" is just wrong – you might find getting published the more tedious part of your research program, but for everyone else, information about the status of research findings is as important as the findings themselves. That's the bit of information that is missing as far as many members of the public are concerned. In all the workshops, interviews, and discussions that have gone into producing the guide, there has been a wholly positive response to beginning to make sense of science stories through the prism of quality checking. The interested public is much more animated about peer review than most scientists are.

The reaction to the guide has already been overwhelming. The most common response from people at the interface of society with science has been that they "didn't quite get it before." Perhaps it is not so surprising that the public is responsive on this issue. Not only are they sensitive to their own vulnerability and the vulnerability of public life to scare stories and hype, but they are also keen for anything that gives them an ability to sift what they read. One workshop participant described knowing about peer review as "empowering," a phrase I'm generally inclined to avoid at a time when every bit of official paper is designed to empower. But it's truly the case that once a non-scientist gets a flavor of the process, they start to look at things with a freshly critical eye. Now that is surely something that scientists should get excited about.

Tracey Brown (TBrown@senseaboutscience.org) is director of the UK charity Sense About Science, which promotes evidence in public debates about science.
