High Risk of Bias in Early COVID-19 Studies: Meta-Analysis

Few peer-reviewed clinical papers on the pandemic contained original data, and many of those that did had poor experimental design.

Max Kozlov
Jan 14, 2021

ABOVE: © ISTOCK.COM, BRIANAJACKSON

As scientists led initial investigations into the novel coronavirus last winter and spring, journal publishers saw an enormous surge in COVID-19 publications. A study published January 4 in BMC Medical Research Methodology reports that the majority of early clinical studies on the pandemic lacked original data, and that those that did were rushed and did not include appropriate measures to reduce bias.

See: “WHO Leads in Using Solid Science to Draft COVID-19 Policy: Study”

The researchers evaluated more than 10,000 COVID-19–related medical papers published in English or Chinese before May 2020. Among peer-reviewed papers, the researchers found that 56.1 percent were opinions that did not contain any new data. Original articles that included patient data represented only 9.6 percent of the peer-reviewed studies. The researchers then evaluated the quality of research of the original articles using validated tools to assess study design and concluded that 80 percent were at risk of bias, mainly because of few participants, short follow-up duration, unrepresentative patient selection, or poor data interpretation. 

The Scientist conducted an interview over email with Paris Transplant Group nephrologist and epidemiologist Alexandre Loupy and Paris Transplant Group statistician Marc Raynaud, coauthors of the new study.

Marc Raynaud (left) and Alexandre Loupy (right)
PARIS TRANSPLANT GROUP

The Scientist: What was the motivation for undertaking this meta-analysis?

Alexandre Loupy: Since the COVID-19 pandemic started, we all have seen the spectacular increase in the number of publications that were COVID-19 related. As the virus was spreading around the world, it generated a global sense of urgency which had consequences for medical research. We saw that most of the first publications were mainly people giving advice and their personal reaction to the pandemic. What I feared was seeing the standards of medical science slip and the peer-review system lose quality. . . . We were saddened to observe that the studies published during the first wave of the pandemic were far from meeting conventional standards of scientific quality.

This risk of expedited science has been voiced by many other researchers, as many key public health decisions were made based on these early studies. Besides, as you know, there are well-known specific examples of publications that were retracted—with direct consequences on public health, as was the case with the ‘Lancet Gate.’ In that context, we felt confident that a thorough review of scientific trends and robust, honest criticism was warranted. This was our principal motivation: reviewing, categorizing, and critically appraising all COVID-19 research. Such [an] investigation had not been carried out by other teams, given the vast amount of work needed to critically appraise all the medical literature.

See “The Surgisphere Scandal: What Went Wrong?”

TS: How did you decide which studies to include in your analysis?

Marc Raynaud: This was actually quite straightforward: since we wanted to take a broad view of all the COVID-19 research, we needed to identify and review all publications that were related to COVID-19, regardless of the design or the type. Therefore, there were no restrictions on the publications identified in our study. I do think this approach is one of our big strengths, in that our review is the only one encompassing all study types and designs—both preprints and peer-reviewed articles. 

We primarily focused on studies published during the first six months of the pandemic. We believed that this was a key period to evaluate the quality of scientific production, as many extremely important decisions by governments and agencies, including the World Health Organization (WHO), were made based [on] this early scientific literature.

The first step was thus to create an international consortium with experts from France, the US and China, including experts on methodology, systematic review, epidemiology, public health, internal medicine, surgery and pathology. Then we could start the very long task of reviewing all COVID-19 publications.

TS: Were you surprised by how few studies with original data you found?

AL: This confirmed the impression we all had: ‘without-data’ papers drove much of the huge increase in COVID-19 publications. To give you an idea, in peer-reviewed research, almost 60% of COVID-19 publications were without-data papers. This is unprecedented! It shows the readiness and haste of many scientists who preferred to immediately comment on the pandemic rather than design appropriate studies. But I want to underline that the study is not meant to blame anyone, but instead is a way for the field to reflect and perhaps better prepare for similarly difficult future situations. There was a race to publish, and the journals were faced with a flood of papers. It was an epidemic of scientific papers within a global pandemic. It was important to highlight this phenomenon with actual data and put numbers and statistics on it.

TS: You make a distinction between studies with original data as compared to case studies or opinion pieces. What role do those other pieces play in the scientific publishing landscape during a pandemic?

AL: We think case studies may have several roles. Sometimes, the readership of a medical specialty—such as cardiology—asked for case studies to improve and adapt their clinical practice because they were facing a huge increase in the number of COVID-19 patients. But many case studies had contradictory results, and we feared they may have drawn attention away from more valid and better designed studies. We also think that many physicians and researchers wanted to contribute, which prompted many to publish early findings without taking time to correctly design [their studies] and analyze their data.

The problem was the same with the numerous opinion pieces we identified; many wanted to comment on the pandemic. Even though some valuable advice and recommendations have been published, I think this has brought confusion overall. I know many physicians who did not know who to listen to, which study to trust. They got lost in the maze of contradictory medical research.

TS: Preprints played a big role in the dissemination of scientific knowledge during the pandemic. What were your findings about preprints?

MR: Overall, there was an important increase in the number of preprints; we found that almost a third of COVID-19 research was preprints. Our findings show that most of them had original data. But they should be considered cautiously; a preprint [is] a study not reviewed by experts. One preprint might eventually wind up in the New England Journal of Medicine, while another might never be published anywhere because of extensive flaws. The peer review process has value, after all. Other studies have highlighted that most preprints remain in their preprint state and do not move to the peer-reviewed arena. I thus agree that preprints had a substantial influence, but we should consider those studies with adequate skepticism.

TS: In your analysis, you talk about the fine balance of velocity versus quality of publishing during a public health emergency. Did the scientific community prioritize one over the other during this pandemic?

MR: As we have underscored in our study, the assessment of research quality is a fundamental step in the advancement of scientific knowledge. However, in a pandemic, with time pressing, converging toward scientific thoroughness appears to be particularly difficult. The balance between the velocity and the rigor of science is hence very hard to strike. What our study tells us is that 80% of original studies are at medium to high risk of bias, according to robust and widely adopted evaluation tools and metrics. Therefore, I’d say that our study reveals a great deal of haste in conducting and accepting studies during the first six months of the pandemic. Even though we see that the quality of research is finally (and slowly) improving now, many researchers have rushed to publish poorly designed stud[ies] with scarce data, and regrettably, some still do. I think our study offers an important cautionary tale about the risks of rushing science to publication.

TS: Looking forward, what advice would you have for scientists hoping to publish a paper during a public health emergency?

AL: First, I’d advise taking time to read what has already been published on the topic. Second, I suggest being deliberate in correctly formulating the research hypothesis and study design. Third, scientists must make sure they have enough relevant data. Fourth, it is important to apply appropriate statistical analyses. I believe that the pandemic has shown that too many scientists lack skills in methodology, so in my opinion these four points are a good way to start.

Last, as a physician, I would like to make this final diagnosis: even though we have now developed vaccines for COVID-19, some scientists are still carrying a harmful virus—the ‘rush to publication’ virus, whose only vaccine is criticism and then a commitment to good research hypotheses and design, appropriate data, and robust analyses.

M. Raynaud et al., “COVID-19-related medical research: a meta-research and critical appraisal,” BMC Med Res Methodol, doi:10.1186/s12874-020-01190-w, 2021.

Editor’s note: The interview was edited for brevity.