It sounds absurd that an obscure US company with a hastily constructed website could have driven international health policy and brought major clinical trials to a halt within the span of a few weeks. Yet that’s what happened earlier this year, when Illinois-based Surgisphere Corporation began a publishing spree that would trigger one of the largest scientific scandals of the COVID-19 pandemic to date.
At the heart of the deception was a paper published in The Lancet on May 22 that suggested hydroxychloroquine, an antimalarial drug promoted by US President Donald Trump and others as a therapy for COVID-19, was associated with an increased risk of death in patients hospitalized with the disease. The study wasn’t a randomized controlled trial—the gold standard for determining a drug’s safety and efficacy—but it did purportedly draw from an enormous registry of observational data that Surgisphere claimed to have collected from the electronic medical records of nearly 100,000 COVID-19 patients across 671 hospitals on six continents.
The study was a medical and political bombshell. News outlets analyzed the implications for what they referred to as the “drug touted by Trump.” Within days, public health bodies including the World Health Organization (WHO) and the UK Medicines and Healthcare products Regulatory Agency (MHRA) instructed organizers of clinical trials of hydroxychloroquine as a COVID-19 treatment or prophylaxis to suspend recruitment, while the French government reversed an earlier decree allowing the drug to be prescribed to patients hospitalized with the virus.
Before long, however, cracks started appearing in the study—and in Surgisphere itself. Scientists and journalists noted that the Lancet paper’s data included impossibly high numbers of cases—exceeding official case or death counts for some continents and coming implausibly close for others. Similar data discrepancies were also identified in two previous studies that had relied on the company’s database. Inquiries by The Scientist and The Guardian, meanwhile, failed to identify any hospital that had contributed to the registry.
It also emerged that, for a company claiming to have created one of the world’s largest and most sophisticated patient databases, Surgisphere had little in the way of medical research to show for it. Founded by vascular surgeon Sapan Desai in 2008 and employing only a handful of people at a time, the company initially produced textbooks aimed at medical students. It later dabbled in various projects, including a short-lived medical journal, before shooting to fame this year with its high-profile publications on health outcomes in COVID-19 patients.
The provenance of Surgisphere’s database—if it even exists, which many clinicians, journal editors, and researchers have questioned—has yet to become clear. Most of Desai’s coauthors admitted to having only seen summary data, and independent auditors tasked with verifying the database’s validity were never granted access, leading to the June 4 retractions of the Lancet study and a previous paper based on the database in The New England Journal of Medicine. Over the following days, The Scientist and other media outlets pointed out inaccurate claims made on Surgisphere’s website, which it had launched in February and gradually erased as accusations of fraud mounted. Desai, who spoke to The Scientist at the end of May, is no longer responding to requests for comment.
Despite the brevity of Surgisphere’s moment in the limelight, the repercussions of the company’s actions have been far-reaching. While the WHO quickly resumed hydroxychloroquine testing following criticisms of the Lancet paper, at least one international trial was delayed more than a month. A now-removed preprint of one of the company’s earlier studies, which linked the antiparasitic medicine ivermectin to better survival in COVID-19 patients, was used by national and regional governments in Latin America to help justify including the drug in clinical guidelines for disease treatment and prevention—decisions that have not been reversed since the paper disappeared. A nonprofit organization in Africa that had partnered with Surgisphere to develop diagnostic tools for COVID-19 watched months of work disintegrate after the company and its database fell into disrepute.
As the initial shock faded, the medical and scientific communities sought to make sense of how something so damaging could have happened so quickly—and whether it could be prevented from happening again. While a heightened sense of urgency during the pandemic undoubtedly contributed to the problem, there were many people and institutions that theoretically could have prevented Surgisphere’s effects on science and public health, notes Rachel Cooper, the director of the Health Initiative at the nonprofit organization Transparency International.
Desai’s astonishing influence on COVID-19 policy was dependent on multiple parties, Cooper notes, from the institutions that employed him to the coauthors on his research studies, the journals that published the work, and the organizations that issued public health decisions based on his research. Seen that way, the scandal represents “a perfect storm of issues that have always been there,” she says.
An investigation by The Scientist points to a series of missed opportunities to halt Surgisphere’s progress—in some cases stemming from people’s failure to check implausible claims made by Desai or from a pattern of ignoring warnings of problematic data or behavior. While a few parties have since accepted some responsibility and outlined plans to avoid similar situations in the future, the majority have not.
Desai’s medical career went largely unchecked
From the time he founded Surgisphere in 2008 as a surgical resident at Duke University, Desai spent 12 years working as a vascular surgeon in various US states. The Scientist learned of serious concerns about Desai’s integrity and his conduct as a physician spanning that time.
After he left Duke in 2012, Desai trained or worked at the University of Texas Health Science Center, Southern Illinois University (SIU), and Northwest Community Hospital (NCH) in suburban Chicago. At the latter two institutions, he held senior positions: director of a new surgical skills lab and vice chair of research for surgery at SIU, and director of performance improvement at NCH, which he joined in 2016.
While vascular surgery was not the focus of Surgisphere’s work during the pandemic, Desai emphasized his background as a doctor in press materials and interviews, calling the company “physician-led.” The Scientist has since spoken to five of Desai’s former colleagues—ranging from medical trainees to supervisors—spanning his medical career.
The colleagues, who asked to remain anonymous for fear of repercussions, recount similar concerns about Desai during the time they had worked with him. All five describe experiences of Desai making exaggerated claims about his personal achievements, and three people at two separate institutions say they had firsthand experiences of Desai providing inaccurate information about patients. They accuse him of describing patient data that didn’t match patient charts, for example, or saying he’d attended to a patient when nurses and other staff confirmed he hadn’t. Those three also indicate that Desai’s unreliability was an open secret, with staff members regularly checking the veracity of his claims with other members of the institution.
Asked why concerns about Desai’s conduct hadn’t been followed up on, some former colleagues say they’d felt too junior or hadn’t had enough proof to make a formal complaint. Others reference the risk of Desai pursuing legal action, or of retaliation from their institutions, which might have suffered reputational damage should those concerns be made public. Some say complaints were made internally at their institutions, but weren’t acted upon.
By the time Desai left NCH this February, he’d been accused of medical malpractice in at least three lawsuits—two of which involved permanent damage following surgery and one that involved a patient death. Those cases are ongoing and Desai told The Scientist earlier this year that he deemed any lawsuit naming him to be unfounded.
Contacted by The Scientist, most people in charge of departments where Desai worked declined to comment. NCH, which continued to list Desai in its online physician directory until June, tells The Scientist in a statement that Desai had left voluntarily for “personal reasons.” Desai, who is still registered with the American Board of Surgery and has an active medical license in Illinois, told The Scientist in May that he would consider returning to clinical practice in the future, although his former colleagues say that after what’s happened he’d be unlikely to find a job in vascular surgery at a major institution.
Coauthors helped Surgisphere publish dodgy COVID-19 research
Surgisphere took on many guises and was repeatedly reregistered in different states as Desai moved from institution to institution during his medical career. Only in the last couple of years did the company begin redefining itself as a data analytics firm. It was in this capacity that Surgisphere would soon claim it had amassed a medical database of almost unprecedented scale and complexity, one that could offer crucial insights during this year’s pandemic.
By the time of the Lancet publication, Surgisphere had provided data for two other studies of COVID-19 patients. The first, posted as a preprint on SSRN in early April, linked ivermectin to improved outcomes in hospitalized COVID-19 patients. The second, published on May 1 in NEJM, reported an association between cardiovascular disease and COVID-19 patient mortality, but no elevated risk associated with certain heart drugs feared to be harmful in patients hospitalized with the virus. For the three studies, Desai had collaborated with various combinations of six other people—five who would later say they had not seen the raw data on which the studies were based, and three who received (but didn’t act upon) warnings from other researchers about possible problems with Surgisphere’s data.
Weeks before the Lancet study was published, data scientist Joe Brew and medical researcher Carlos Chaccour—both of whom are involved in a clinical trial at ISGlobal in Barcelona testing ivermectin’s use to reduce COVID-19 transmission—wrote to Desai and his preprint coauthors, Mandeep Mehra of Brigham and Women’s Hospital and Harvard Medical School, David Grainger of the University of Utah, and Amit Patel, who formerly held a teaching post at Utah, about discrepancies in the ivermectin data. These discrepancies were similar to those that would later be raised for the Lancet paper—specifically, there appeared to be more cases in the Surgisphere dataset than official records captured, suspiciously high numbers of hospitalizations on continents where electronic medical records are rarely used, and surprisingly large effect sizes given what was known of the drugs in question.
Mehra—who, along with Patel, would coauthor all three Surgisphere studies—responded to Brew and Chaccour that he shared doubts about the “implausibly high” effect size and forwarded their concerns to Desai and Patel. Desai also replied but did not assuage the researchers’ concerns, Brew and Chaccour tell The Scientist.
Asked about the exchange, Grainger tells The Scientist in a statement that Mehra handled all the correspondence about data sourcing and that he’d never been in contact with Surgisphere. Mehra tells The Scientist in a statement that he wasn’t aware of potential discrepancies in the dataset before the Lancet paper was published, and that all correspondence on the preprint should be directed to first author Patel. Patel, whom the Lancet study stated had “full access to all the data in the study” and who revealed on Twitter that he is related to Desai “by marriage,” did not reply to specific questions from The Scientist on this issue.
Desai’s remaining three collaborators, like Grainger, each worked on only one paper. Frank Ruschitzka of University Hospital Zurich, a coauthor on the Lancet study, says in a statement that Mehra had recruited him at the “manuscript stage in this Harvard-led registry analysis” and that he had no role in data acquisition.
The Scientist also contacted authors of the NEJM paper, which included the statement that “all the authors reviewed the manuscript and vouch for the accuracy and completeness of the data provided.” SreyRam Kuy did not respond, and her institution, Baylor College of Medicine, tells The Scientist that she is unavailable for comment. Timothy Henry of Christ Hospital in Cincinnati acknowledges he hadn’t seen Surgisphere’s data when the team submitted the NEJM paper, but tells The Scientist in an interview that it’s common practice for coauthors on clinical research to review only summary data, and that there was nothing suspicious about Surgisphere’s dataset at the time. He says he doesn’t believe the data were fabricated and that he thinks NEJM retracted the paper too quickly, adding that the paper’s conclusions have since been “proven to be correct,” suggesting that the problems lie with the data source rather than “data accuracy.”
The scientific community is often unclear on how to treat the coauthors of researchers accused of fraud or other misconduct, says Stefan Eriksson, who directs the Centre for Research Ethics and Bioethics at Uppsala University in Sweden. But the situation is more black-and-white for authors who formally vouch for a published study, as all five NEJM authors did and as Patel did on the Lancet paper. “You can’t escape your responsibility” in this case, Eriksson says. By assuring journals of the veracity of the dataset without having taken the necessary steps to confirm it, a researcher has effectively “betrayed the publishing culture, and science in a sense, as much as if you were part in the making up of data.”
The University of Utah terminated Patel’s employment as an unpaid adjunct member of faculty in early June. Asked whether Mehra was under investigation, Harvard Medical School tells The Scientist that it is “fully committed to upholding the highest standards of ethics and to rigorously maintaining the integrity of our research. Any concerns brought to our attention are reviewed thoroughly in accordance with our institutional policies and applicable regulations.”
The Timeline of a Scandal
Surgisphere Corporation had a busy few months in 2020, overseeing the launch of a new website, the publication of high-impact papers on treatments and outcomes in COVID-19 patients, and eventually a collapse of its reputation.
Peer review missed major problems with Surgisphere’s dataset
In the 13 days between publication and retraction of the hydroxychloroquine study, The Lancet faced intense criticism—initially for allowing the paper to remain online after flaws were uncovered in the database, and later for having allowed the paper to be published at all.
Many scientists, including Brew and Chaccour, wrote to the journal within days of the paper’s publication highlighting concerns about the patient numbers and the formidable challenges of collecting high-quality electronic medical data from hundreds of hospitals in such a short time. Those concerns were also discussed by researchers in blog posts and on PubPeer. On May 28, statistician James Watson of the Bangkok-based Mahidol Oxford Tropical Medicine Research Unit (MORU), which was involved in one of the hydroxychloroquine trials suspended following the paper’s publication, posted an open letter to the journal and the study authors on behalf of more than 100 signatories, listing 10 major concerns about the study’s methods and data. (He later organized a second letter listing concerns about the NEJM study.)
On May 30, The Lancet issued a brief correction, in which the authors revised data from Australia—where the paper’s recorded deaths had exceeded official counts—and modified one of the paper’s supplementary data tables. The Lancet told The Scientist in an emailed statement that the study’s conclusions were unchanged. But on June 2, both The Lancet and NEJM published expressions of concern, and just two days later, after independent auditors failed to obtain access to Surgisphere’s dataset, both of the studies were retracted.
Some scientists expressed frustration that the journals didn’t act sooner; Watson wrote in an email to The Scientist at the end of May that “by allowing the authors to post [a] correction and not address any of the other concerns, The Lancet appear to [be] stating that so far they are not worried about the reliability of the study.”
Editor-in-chief Richard Horton has repeatedly defended the journal’s actions, telling The Scientist that editors followed proper editorial processes and that the journal acted swiftly to evaluate and then retract the paper. As for whether The Lancet should have prevented Surgisphere’s work from being published at all, he notes that “peer review is not an effective system for detecting fraud,” because editors and reviewers typically trust that they’re reviewing genuine research. He denied that hype around hydroxychloroquine had unduly influenced the editorial process for this study.
The Lancet and NEJM have both said that they’ll aim to improve paper acceptance procedures. “We strive not to repeat mistakes,” NEJM tells The Scientist in a statement. “As occurs with every retraction, we make changes to our system, test them, then reevaluate to see if they’re having an impact.” For example, the journal plans to include reviewers with better expertise in “big data” for similar studies in the future, a NEJM spokesperson tells The Guardian.
At The Lancet, part of the response will entail including questions about possible breaches of research integrity as part of peer review, Horton says, as well as asking authors more-specific questions about the data’s accuracy and reliability. “You’re trying to tack between learning the lessons but also not overresponding, because you don’t want to impose another layer of bureaucracy on science that actually makes it more difficult either to do science or to publish science,” Horton says. “You’re trying to minimize harm and maximize the efficiency of the system—that’s a very difficult balance.” [Update: After the print version of this article went to press, The Lancet announced these changes and more in a published comment.]
Regulatory agencies reacted swiftly to observational research
The Lancet study had rapid, widespread effects, partly due to the dramatic responses of organizations overseeing hydroxychloroquine research. Within hours of its publication on May 22, the head of MHRA’s clinical trials unit, Martin O’Kane, wrote to organizers of COPCOV—a large international trial investigating hydroxychloroquine and the related molecule chloroquine as a preventive therapy for health workers exposed to COVID-19—saying that trial organizers were expected to “immediately cease recruitment.” Three days later, the WHO publicly announced it was suspending the hydroxychloroquine arm of its Solidarity Trial, which was testing several potential treatments for hospitalized COVID-19 patients, in light of the safety concerns.
Although both organizations responded quickly to the Lancet study’s publication, the WHO was swifter to react as concerns about the paper were raised. On June 3, after The Lancet issued an expression of concern but before the paper’s retraction, the agency reinstated recruitment to the hydroxychloroquine arm after finding “no reasons to modify the trial protocol,” the WHO’s director-general Tedros Adhanom Ghebreyesus told reporters at the time. The study would subsequently conclude that the drug was ineffective in hospitalized COVID-19 patients; citing interim data, the WHO dropped the hydroxychloroquine arm for good on June 17. By then, another large study, the UK RECOVERY trial—which had continued testing hydroxychloroquine in late May and early June—had reported similar findings. The WHO did not respond to repeated requests for comment.
The COPCOV trial was plagued by greater delays, despite pleas from the research community. Scientists running COPCOV had responded to MHRA’s May 22 communication with pages of documents explaining why the trial shouldn’t be suspended, why the Lancet study was flawed, and how organizers could implement additional safety precautions. Nevertheless, MHRA proceeded with a formal suspension of the trial on June 8, days after the Lancet paper had been retracted. It wasn’t until June 26, more than a week after MHRA received the trial organizers’ formal response to the June 8 suspension, that the agency allowed COPCOV to resume. By then, other studies had been published on the issue, including a small trial from researchers at the University of Minnesota that reported on June 3 that hydroxychloroquine was ineffective as postexposure prophylaxis, although it hadn’t detected any safety issues.
COPCOV organizers note that the five-week delay—not to mention the decrease in active cases and the negative public opinion about hydroxychloroquine that developed in the interim—may have permanently hobbled the trial. MORU’s Nick White, COPCOV’s co-principal investigator, says he was surprised at MHRA’s lightning-fast action to suspend research on hydroxychloroquine as a preventive therapy based on observational findings in hospitalized patients. The agency “didn’t follow their normal principles,” White says. Noting similarly dramatic responses to the Lancet study by regulators in other countries, he adds, “I think they all bent under the intense political pressure and the natural media hype.”
MHRA defended its decisions in a statement to The Scientist. “When presented with a body of evidence—even after retraction of the Lancet paper—that represented a fine balance between the potential risks and potential benefits of hydroxychloroquine, MHRA rightly made regulatory decisions [with participant protection] as our prime concern.” It added that “the situation surrounding the publication and subsequent retraction of the Lancet study . . . is unfortunate, and there will be lessons learnt across the research system.” It did not provide further detail about what those lessons were or whether the agency would be implementing any new measures.
A legacy for science and public health?
Many questions about Surgisphere have yet to be answered, such as how the company’s data were assembled and what motivated Desai, who has not admitted to wrongdoing, to produce the studies in the first place. Now that public interest in the company has subsided, with debates on other scientific and political issues taking center stage, those questions may never be resolved. Some scientists have since argued that some amount of flawed research is an inevitable, even acceptable, price to pay for the accelerated pace of science during the pandemic. Others have argued the opposite, saying that what’s needed right now is more-rigorous science and science-based decision making—an opinion echoed in a July editorial in The Lancet Global Health.
For the people directly affected by Surgisphere’s actions, the discussion is far from academic. In July, Lee Wallis of the African Federation for Emergency Medicine, the nonprofit that partnered with Surgisphere to develop COVID-19 diagnostic aids for clinicians in low-resource settings, told The Scientist the organization’s work had been delayed by at least six months, a cost that would be felt by African patients. Meanwhile, Patricia García, a Solidarity Trial investigator and the former health minister of Peru—one of the countries in which the ivermectin preprint has been widely cited in recommendations for COVID-19 treatment—expressed anger that Surgisphere had been able to damage public trust in scientists at a time when scientific expertise is needed most.
“Now people are so confused about what science can give you—whether hydroxychloroquine works, it doesn’t work, it’s fake, it’s not fake—that it’s going to be very difficult for us scientists then to use any type of article or publication,” says García. “Now that they know scientists can lie, who will believe us again?”