In the middle of a pandemic, relevant scientific data reliable enough to support subsequent research and action are a precious commodity. Therefore, we are dismayed to note how easily flawed studies about coronavirus patients, derived from an opaque database provided by a company called Surgisphere, reached the pages of The Lancet and the New England Journal of Medicine (NEJM)—and went on to shape global policy about COVID-19 treatments. Ultimately, concerns about the legitimacy of the database led to the papers’ retractions and the reversal of some policy decisions based on them.
Were the publication of such flimsy data in international journals an infrequent or freak event, we could regret this incident and move on. However, this is clearly not the situation. Over the last few years, we have seen a dreary procession of cases in which authors under pressure have admitted to poorly controlled studies or outright misconduct, forcing retractions.
The publication of these studies is not altogether the fault of international journals, for their quality control relies on a peer-review process conducted voluntarily by other scientists. Peer review focuses essentially on the scientific arguments put forward in an article, not on the authenticity of the data. Journals do not normally have access to the raw data, which stay with the main author and his or her research institute. We are in complete agreement with The Guardian's article of June 5, 2020, in which James Heathers convincingly points out the inadequacy of most peer-review processes.
However, the vast amount of data claimed to be available from Surgisphere (from 96,000 patients around the globe) and the short timeline for defining, collating, consolidating, and analyzing the data (from the World Health Organization's declaration of a pandemic in March 2020 to publication in May 2020) should perhaps have alerted The Lancet and NEJM to possible anomalies.
The real problem lies squarely with the research centers generating the data, because they rarely have procedures for institutional data quality management. The solution to assuring data integrity and reliability lies upstream of an article's submission: organizing appropriate internal control processes and routine audits of all the data generated by a given institution. In the Surgisphere case, an audit would no doubt have helped to sort out the mess; had proper control procedures already been in place, there would have been no mess at all. However, without access to the data, which we understand Surgisphere has refused to provide, this study will remain forever discredited. In an ideal situation, a company like Surgisphere would bring on board an independent quality assurance (QA) consultant to audit the data contents of the article against the real raw data of the study.
International journals and the general public must realize that the reliability and credibility of studies rely on two relatively separate factors:
First, the quality of the scientific study design, naturally followed by cogent scientific discussion and conclusions: in a word, the soundness of the science. But however sound the scientific intention and arguments, it all counts for nothing if based on study procedures that prove inadequate, or founded on unreliable, frail, or, at worst, fraudulent data.
Second, the soundness of the data. If QA procedures are implemented, they ensure that the scientific studies generate data of the highest standard. This is achieved by assuring that research work is well planned, well executed and recorded, and that the information reaching study reports, or articles for publication, accurately represents what actually happened during the experimental phase.
Meta-analyses, that is, studies based on aggregated and sometimes anonymized data, do not escape the requirements for transparency and traceability of data. Ultimately, despite sophisticated IT techniques or contractual confidentiality obligations, the published data must be traceable to the original raw data, and, finally, to actual events in the physical world.
As early as 2001, in discussion with the World Health Organization (WHO), we argued that accelerating health challenges during the coming decades—among others, emerging diseases, changing patterns of agriculture, and climate change—would lead to an increased need for new drugs and new principles for treatment. As funding is not infinite, it is essential that basic scientific and biomedical research be conducted in a proper fashion, thus economizing resources and minimizing the production of unreliable and misleading results. We underlined the possible dangers to health of the collection and eventual publication of dubious data, arguing that there is an urgent need for institutional action to implement procedures to ensure data quality and integrity. A handbook, Quality Practices in Basic Biomedical Research (QPBR), which the WHO commissioned us to write, was published in 2006 (it is available for free and we receive no royalties from it).
The intention was that the WHO should set the standard worldwide by imposing compliance to QPBR on institutions receiving WHO research funding. Clearly, this intention has not been carried through; training in QPBR, though planned and with training material available, was never deployed extensively and research institutions have not been required to implement QPBR prior to receiving funds. We appeal to all funding bodies (WHO, National Institutes of Health, European Commission, Medical Research Council, Gates Foundation, government agencies, etc.) to tie their funding to the implementation of QPBR (or equivalent guidelines) as a first step to assuring that the studies they support generate quality data, that is, accurate, reliable, reproducible, and auditable.
An opportunity to start setting standards for data quality management within research institutions was missed. Plainly, an international initiative focused on improving the quality, integrity, and reliability of research data is now both essential and overdue. The QPBR handbook would be an excellent starting point for any international organization wishing to improve the quality of data being supplied by research centers.
In addition, the international journals should insist on receiving, along with any article submitted, a certified document from the research institute stating their compliance with QPBR (or equivalent) and that the data submitted have been audited by a specialist internal QA group or an external QA consultant. Many journals already require similar statements about the ethical standards applied during studies.
This approach would mirror to some extent the Good Laboratory Practice (GLP) regulations already in place for non-clinical studies testing the safety of experimental medicines. Under the auspices of the Organisation for Economic Co-operation and Development (OECD), these GLP regulations, introduced in response to incidents of fraud and data manipulation reported in the 1970s, have practically eliminated the study malpractice that was so prevalent before their enforcement.
Similarly, the International Council for Harmonisation (ICH) guidelines on Good Clinical Practice (GCP), employed to ensure the reliability of data coming from clinical trials, have established a common framework for clinical data generation, management, and reporting, rendering clinical data accessible and available for audit. The observance of GCP has become commonplace within the pharmaceutical industry and should nowadays be the gold standard for all clinical research.
Poor data collection, selective or inaccurate data reporting, data manipulation, sometimes even deliberate fraud, and ethical lapses were the reasons for implementing GLP and GCP. The extraordinary affair of this hydroxychloroquine study and its deleterious knock-on effects should now provide the incentive for implementing QPBR in the areas that existing regulations do not yet cover.
The present fiasco and ensuing crisis should surely provoke a strong reaction from the agencies and journals and research centers that are now on the front line. If nothing is done now, we will see an even longer dreary series of cases of poorly controlled studies or of scientific misconduct to further shake public confidence in the reputation of science itself.
Nadya Gawadi Heywood is the retired director of a consultancy company. Email her at nadyaheywood@gmail.com. David Long is the director of Long and Associates International Consultancy Ltd. Email him at davidlong75@gmail.com.