Retraction, a process in which a journal withdraws research after publication, is an essential tool for pruning flawed or fraudulent studies from the scientific literature. But a preprint posted to medRxiv on June 30 shows that retraction may not be functioning as intended: Retracted papers on clinical COVID-19 research have been cited more than 1,000 times, largely uncritically, indicating that conclusions drawn from untrustworthy research may continue to affect the literature and scientists’ understanding of the disease.
Furthermore, many of these citing papers were submitted for publication after the original papers were retracted, raising concerns about authors’ and journals’ standards for citations. Research published in PNAS on June 14 similarly found that most papers that are later retracted have already been widely disseminated on news sites and social media before they’re removed from the record, further fueling worries that the tool is ill-equipped to limit the spread of bad information.
The Scientist spoke with preprint coauthor Gideon Meyerowitz-Katz, an epidemiologist at the University of Wollongong in Australia, about the implications of citations of retracted papers and potential solutions, as well as broader issues around research production and publication.
The Scientist: In this preprint, you reported that retracted COVID-19 papers are still getting cited. How frequently is this happening? Were you surprised by the results?
Gideon Meyerowitz-Katz: What we found is that the modal number—the most common number—of citations that a retracted paper got was zero. But the median was between four and seven, depending on how you slice it up. Most retracted papers were cited at least once. The citation of retracted papers is very common and rarely critical.
While scientists are seekers of truth to some extent, we are also human and can fall prey to confirmation bias.
This [preprint] was specific to COVID, so it’s hard to draw inferences from this to every other kind of retracted paper out there. It’s plausible that there is a difference and that this is specific to the pandemic that we find ourselves in. But I would say from my experience, this seems to be quite common for non-COVID research as well, so I wasn’t surprised. I think citation tends to be cavalier, in a way. And I think it’s extraordinarily rare for people to actually look through their citations and check to see if there are any issues with the papers or if they’ve been retracted before they submit [the manuscript].
TS: In the preprint, you mentioned that even before these studies are retracted, some of them are obviously low quality or even include impossible data values—why, do you think, are they still getting cited?
GM-K: I perhaps have a somewhat cynical viewpoint, which is that people aren’t very careful about citations in scientific literature . . . and it’s very rare that people actually check your reference list closely during the peer review process. And I just don’t think people notice if papers are bad, so long as they agree with their opinions. While scientists are seekers of truth to some extent, we are also human and can fall prey to confirmation bias.
To me, the biggest issue is not that people are citing retracted research—because I can very much see how you read a research paper and you put it into your citation manager or you download the PDF and you never go and check whether it has been retracted in the future. I mean, why would you? You just assume that it’s not going to be retracted. However, the fact that people are citing research that is so bad that it’s obviously going to be retracted very shortly—and people are doing that commonly, with absolutely no quality control in that process—that’s pretty worrying, especially because the majority of papers that should be retracted probably aren’t.
TS: What are some of the potential implications of this, especially during COVID, in terms of both scientific knowledge and clinical practice?
GM-K: There has certainly been a real impact of all this during COVID. . . . Some of these papers were clinical research that was cited in systematic reviews or meta-analyses that informed clinical practice. There were a number of retracted papers on repurposed medications such as favipiravir, hydroxychloroquine, and ivermectin, and those certainly got people to actually prescribe medications.
Even if we completely fix the citation of retracted papers, it’s not going to fix the underlying issue, which is that there are a lot of very low-quality papers.
TS: What are some strategies that either researchers or journals could use to make sure that these retracted papers aren’t getting cited? And what are some potential barriers to implementing those strategies?
GM-K: In theory, if researchers check their references carefully before they submit [the paper], this wouldn’t happen; you could prevent it very easily. But in practice, it’s a systemic problem, and I think it would require systemic solutions. [One solution] that is actually offered already by some journals is an automatic note. Whenever someone submits [a] manuscript . . . this automatic mechanism simply sends an email saying, “You have cited retracted papers, are you sure you would like to cite these papers?” The scientific publication process already requires a lot of effort from scientists—[unpaid] work with little to no benefit. So having an automated tool that doesn’t require [more work] is vital if we want to stop this from happening.
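The automated check Meyerowitz-Katz describes is simple to sketch. Below is a minimal, hypothetical Python illustration: it assumes the journal already has a set of retracted DOIs (in practice this would come from a source such as the Retraction Watch Database) and compares it against a submitted manuscript’s reference list. The DOIs shown are invented for illustration, not real retractions.

```python
def flag_retracted_citations(reference_dois, retracted_dois):
    """Return the subset of a manuscript's cited DOIs that appear in a
    set of known-retracted DOIs (DOI matching is case-insensitive)."""
    retracted = {doi.lower() for doi in retracted_dois}
    return [doi for doi in reference_dois if doi.lower() in retracted]

# Hypothetical data: these DOIs are made up for the example.
KNOWN_RETRACTED = {"10.1000/example.retracted.1", "10.1000/example.retracted.2"}
manuscript_refs = ["10.1000/example.clean.7", "10.1000/EXAMPLE.RETRACTED.1"]

flagged = flag_retracted_citations(manuscript_refs, KNOWN_RETRACTED)
if flagged:
    # In a real submission system this would trigger the automatic
    # "are you sure you would like to cite these papers?" email.
    print(f"Warning: {len(flagged)} cited paper(s) have been retracted: {flagged}")
```

The point of the design is that the check runs automatically at submission and costs the author nothing unless a flagged citation turns up.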
TS: There’s also the perhaps trickier problem of these low-quality but not necessarily retracted papers that are getting cited. How can we address that problem?
GM-K: I think that is a much trickier problem because the citation of retracted research, ultimately, demonstrates that there is a lot of low-quality work in the scientific literature now. But even if we completely fix the citation of retracted papers, it’s not going to fix the underlying issue, which is that there are a lot of very low-quality papers. There are papers that are still published that are provably false, that demonstrably never happened, or at least where the datasets have mathematical impossibilities within them . . . that editors have stopped replying to emails about and that will probably never be retracted. It’s not even that uncommon.
Our paper shines a light on one specific issue, but the broader problems with the scientific literature, I think, are much harder to solve.
TS: Are there ways to bring large-scale awareness to this without further undermining trust in science?
GM-K: I think that there’s quite good awareness of this because it’s a meta research question that’s been studied for a long time. It’s just that journals don’t actually have a responsibility for ensuring that the content they publish is true and real. They have a responsibility to ensure that it’s peer reviewed. But that can mean lots of different things. And for some journals, that’s not a very high bar to meet. Peer review rarely checks for fabrication. . . . Some of the bigger journals like Science and Nature do employ statistical reviewers [who] are expected to check through numbers to make sure that they make sense. But if you go to the average journal, it’s unlikely anyone will ever add up the columns in a table to make sure that they add up, which is something that I’ve found many papers don’t do correctly.
It ties into the issue in our paper, which is that there’s no individual or group whose responsibility it is to make sure that retracted research isn’t being cited. Editors are there to ensure that research is peer-reviewed and sufficiently interesting to go in the journal, but peer review simply isn’t designed to catch issues of this kind.
Editor’s note: This interview has been edited for brevity.