
John Ioannidis is a Stanford University epidemiologist known for his blunt critiques of the scientific enterprise, such as the 2005 paper “Why Most Published Research Findings Are False.” In a new report out today (November 20) in PLOS Biology, he and colleagues follow up on an earlier study in which they found that authors scored poorly on measures of transparency and reproducibility, such as sharing their data and protocols and disclosing funders.

The new analysis, which scrutinizes 149 randomly selected biomedical research papers published between 2015 and 2017, finds some improvements. We spoke with Ioannidis to get his take on the results.


The Scientist: What did you set out to do in this latest study, and what, to you, are its main takeaways?

John Ioannidis: We had previously assessed the same indicators in...

The major improvement was seen in things like data sharing, where . . . we had seen hardly any data sharing in the 2000 to 2014 assessment, and now we have close to one out of five papers having statements about sharing information.

TS: Your assessment found that only one study linked to a full protocol. What is a full protocol?

JI: This is the disappointing component, the assessment of protocol-sharing—this still doesn’t seem to happen. And that could have multiple explanations. I think one could be that very often there’s no protocol to start with. . . . Or that there are still difficulties, still barriers, and still not enough demand from journals asking for the full protocol to be available. 

A full protocol is a fleshed-out document that includes the rationale of the study, detailed objectives, the methods, and how these methods are going to be implemented. Ideally, you should also have the analysis plan, along with what analyses are being contemplated and how these analyses are going to be run. If you have exploratory research, kind of blue-sky science, you cannot really have a full protocol to describe something prospectively. . . . However, if you have specific hypotheses, specific objectives, specific deliverables in what you do, then it’s not only possible—it makes perfect sense and it is indispensable that you should be able to specify those.

TS: Is that related to preregistering studies before they’re conducted?

JI: Many of the registration records do not have sufficient information to know exactly how the study is going to be done, so we know at least that the study is going to be done, but we don’t really know exactly how the analysis is going to be done. Sometimes even the major outcomes, and how and why different outcomes were selected, may be missing [from preregistration records]. So preregistration in registries like clinicaltrials.gov has some interface with protocol availability, but it is not the same thing. 

TS: Do you have a sense of what factors might be driving the improvements you saw?

JI: I think that there are multiple initiatives at the moment, and lots of people, lots of stakeholders—funding agencies, journals, scientists, and their societies—are pushing for more transparency. For example . . . multiple stakeholders are pushing in the direction of making raw data, individual-level data, more massively available, and we do see movement in that regard. We also see that . . . journals are becoming more demanding, more commonly asking for . . . the sources of funding for the research that is being presented. 

TS: What further steps could be taken to improve these indicators?

JI: I think that journals and funders are particularly well-suited to move these indicators through more complete availability of protocols, raw data, and disclosures. And I think that since many journals are already doing that, and several funders are already asking for these things to be in place, there’s no reason that others cannot follow suit. 

Of course, institutions, like universities and research institutions, could also play a role . . . if they, for example, judge and appraise and evaluate scientists for hiring or for promotion based on their performance on these indicators. Currently, we probably pay a lot of attention to productivity, but I and others have argued that we also need to pay a lot of attention to how transparent and sharing and open a scientist is in their work.

TS: Is there anything else from the paper that you’d like to highlight?

JI: We do see some movement for replication, but [replications are] still a minority of the literature. I think that we probably need more than that, and we need some wider understanding on why replication is important, especially with zillions of discoveries being claimed, but many of those not really getting very far.

J.D. Wallach et al., “Reproducible research practices, transparency, and open access data in the biomedical literature, 2015–2017,” PLOS Biol, 16:e2006930, 2018.

Editor’s note: This interview was edited for brevity.
