The man who'd prove all studies wrong


Stuart Blackman
Sep 11, 2005

On the phone, John Ioannidis comes across much more cheerfully than you might expect from a man who has made a career out of pointing out the more questionable aspects of others' research endeavors. But perhaps he has good reasons to be cheerful.

First, his is a career that has been shaped to a large extent by a long and rewarding romance. Second, if refuting erroneous hypotheses and data is a yardstick of scientific success, Ioannidis is arguably a particularly successful scientist – though that might depend on whether his recent claim that most research findings are false ends up being proved wrong itself.

Ioannidis has previously identified statistical and experimental design problems in studies based on high-throughput techniques such as microarrays, problems that can leave gene-disease predictions no better than chance (see the Dec. 20, 2004, issue of The Scientist). He has also followed the fate of research findings to quantify their falsification rate, demonstrating recently, for example, that five of the six most cited epidemiological studies since 1990 have already been refuted (JAMA, 294:218–28, 2005).

His latest effort, which appeared last month in PLoS Medicine (2[8]:e272, 2005), draws on such empirical evidence to make more sweeping claims. It is a simulation model of how poor experimental and statistical design combine with researchers' vested interests and biases – which can lead to "conscious, subconscious, or unconscious" manipulation, omission, or selection of data for presentation – to undermine a finding's validity. This approach has enabled him to put a figure on the probability that a randomly selected, statistically significant research finding is actually true. For most fields, he says, that figure is less than 50%.
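To see the kind of arithmetic behind that sub-50% figure, consider the core relationship the paper builds on: the post-study probability that a significant result is true rises with the pre-study odds that the hypothesis is real and with the study's power, and falls as the significance threshold lets in more false positives. The short Python sketch below is only an illustration under those standard definitions, not a reproduction of Ioannidis' model; the function name and the example values are invented here, and the paper's full treatment adds further terms for bias and for multiple teams testing the same question.

    def positive_predictive_value(R, alpha=0.05, power=0.80):
        """Post-study probability that a statistically significant finding is true.

        R     -- pre-study odds that the tested relationship is real
        alpha -- significance threshold (rate of false-positive findings)
        power -- 1 - beta, the chance of detecting a real relationship
        """
        true_positives = power * R   # real relationships that reach significance
        false_positives = alpha      # null relationships that reach significance
        return true_positives / (true_positives + false_positives)

    # A well-powered trial of a plausible hypothesis (1:1 pre-study odds).
    print(positive_predictive_value(R=1.0, alpha=0.05, power=0.80))   # ~0.94

    # An exploratory study: 1-in-100 pre-study odds and only 20% power.
    print(positive_predictive_value(R=0.01, alpha=0.05, power=0.20))  # ~0.04

On those hypothetical numbers, a well-powered test of an even-odds hypothesis yields significant results that are true more than 90% of the time, while a low-powered, long-shot exploratory study yields ones that are true only about 4% of the time.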

Ioannidis' long "love affair with mathematics" – that's the romance he's referring to, not his marriage of thirteen years and counting – first reaped rewards in 1984 when he won first prize in an annual Greek Mathematical Society competition that attracted 10,000 entrants.

His formal training, however, is in medicine. After graduating from the University of Athens in 1990 with an MD and a doctorate in biopathology, Ioannidis left his native Greece for the United States, where he undertook further studies in internal medicine at Harvard University and in infectious diseases and clinical epidemiology at Tufts University. Having held faculty positions at the National Institutes of Health (in the HIV research branch), Johns Hopkins University, and Tufts, he returned to Greece, where he is now a departmental head at the University of Ioannina School of Medicine, while maintaining a joint appointment at Tufts.

At Tufts, he combined his skills in math and medicine to work on the design of clinical trials. It was also there that he was introduced to meta-analysis, a tool he has since put to use in his studies of the robustness of research findings, even while accepting that it is itself a common source of statistical errors.

Ioannidis doesn't intend to stop at showing up the faults in others' work. "There is no need for cynicism," he says. Rather, he wants to improve the odds. Funding bodies could put more emphasis on replication rather than discovery, he says, and the establishment of consortia of labs working within fields might maintain healthy competition while promoting common adherence to agreed experimental and analytical standards.

But what are the chances that his own findings are true? This is, after all, a simulation. And as a PLoS Medicine editorial makes clear, Ioannidis's "calculations are based on assumptions about complex scenarios that we do not fully understand." He maintains that his estimates are supported across a wide range of input values and that the outputs compare well with the empirical data that do exist. But it takes a lot of encouragement before he'll offer a probability. "In the range of 70%," he says, reluctantly. But, he adds, "it might be better for someone else to do that."

Best of the blogs

Have you been reading The Scientist's blogs? If not, here's what you've been missing:

• The stocks of companies whose compounds are being discussed at ASCO – the American Society of Clinical Oncology – begin rising more than two weeks prior to the annual meeting, which suggests that the embargoed materials that journalists receive are making their way into the hands of traders. Journalists might not want to cast the first stone at this glass house.... More at http://media.the-scientist.com/blog/display/8/86/

• Today, well over 300,000 paternity tests are carried out annually in the US – many with the help of home test kits. As a recent paper points out, we're unprepared for the ensuing family disruptions and medical records messes.... More at http://media.the-scientist.com/blog/display/13/87/

• It's nice to see a Frank Zappa fan working in the Nature press office. A recent release extolling the current issue's report on space dust from meteorites was titled: "Who you jivin' with that cosmik debris?".... More at http://media.the-scientist.com/blog/display/4/90/

• So did President Bush really advocate teaching "intelligent design" in his interview with Texas reporters the other day? Or were his musings about exposing students to different ideas simply a better-than-average example of political weasel-speak?.... More at http://media.the-scientist.com/blog/display/13/85/

Keep up with all the blogs at http://media.the-scientist.com/blog/browse/