[Figure: Wrong first impressions. Strength of association is shown as an estimate of the odds ratio without confidence intervals. At top are eight topics in which the results of the first study differed beyond chance (P < 0.05) from the results of subsequent studies. At bottom are eight topics in which the first study did not claim formal statistical significance for the genetic association, but formal significance was reached by the end of the meta-analysis. (Adapted from J.P. Ioannidis et al.,]
The first published study linking a gene to a disease is often far from the last word on the subject. Marc-Antoine Crocq, a psychiatrist with the Centre Hospitalier de Rouffach in France, learned this firsthand after leading a 1992 study on a mutation in the dopamine D3 receptor in the brain.1 The study found that people with two copies of the mutation have...
Trikalinos and other researchers are working to understand why so many studies can't be replicated, and how to change this. The problem is pressing because current trends could exacerbate it, says Sholom Wacholder, senior investigator at the National Institutes of Health biostatistics branch in Bethesda, Md. New high-throughput analysis techniques, he explains, let researchers study many gene-disease associations quickly and cheaply, but also lead to more studies on associations that don't look especially likely at a study's outset. This tends to increase the likelihood of finding spurious links through chance occurrences. By contrast, he says, "In the old days, it was a big investment to study a hypothesis, and only the best candidates had a shot."
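Wacholder's point can be put in rough numbers: the lower the prior plausibility of an association going into a study, the larger the share of "significant" results that turn out to be false. A minimal sketch, using illustrative figures of my own (a P < 0.05 threshold and 80% statistical power; these specific values are not from the article):

```python
# Back-of-the-envelope sketch: why screening many implausible gene-disease
# candidates inflates the fraction of significant findings that are spurious.
# The alpha, power, and prior values below are illustrative assumptions.

def false_positive_share(prior, alpha=0.05, power=0.8):
    """Among results reaching significance, the fraction expected to be false.

    prior: probability, before the study, that the association is real.
    """
    true_hits = prior * power            # real associations detected
    false_hits = (1 - prior) * alpha     # null associations crossing P < 0.05
    return false_hits / (true_hits + false_hits)

# A well-motivated candidate gene vs. one of thousands of arbitrary markers:
for prior in (0.10, 0.001):
    share = false_positive_share(prior)
    print(f"prior = {prior}: {share:.0%} of significant findings are false")
```

With a 10% prior, roughly a third of significant findings are false; with a one-in-a-thousand prior, nearly all of them are, even though each individual study looks equally "significant" on paper.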
Wacholder suggests researchers revise their statistical methods to account for "prior probability," which is a subjective but reasonable measure of how plausible the gene-disease association in question looked before the study.6 Others propose different solutions. Kirk Lohmueller, a Georgetown University undergraduate student and first author of a letter in
"We found that studies with family-based controls and larger sample sizes are more likely to be replicated," Lohmueller says. Trikalinos disagrees that there is any clear way to predict which studies will be replicated. He suggests that researchers should treat any finding cautiously until it's replicated, preferably more than once.
No effort to address the problem is complete, researchers say, without a renewed call to publish more negative findings showing no gene-disease association. Such findings often go unpublished, leaving spurious gene-disease associations looking better supported than they are. "Every study provides a piece of evidence," says Wacholder, "and it needs to be made available somehow to people who are interested."