MIT Press announced today (June 29) that it will launch a new journal, Rapid Reviews: COVID-19, with the explicit purpose of reviewing preprint articles about the pandemic.
Servers such as bioRxiv and medRxiv offer researchers an opportunity to quickly publish results and get feedback before their reports undergo peer review.
“Preprints have been a tremendous boon for scientific communication, but they come with some dangers, as we’ve seen with some that have been based on faulty methods,” Nick Lindsay, the director of journals at the MIT Press, tells STAT. “We want to debunk research that’s poor and elevate research that’s good.”
For instance, STAT reported in February on the withdrawal of a paper published on bioRxiv that suggested SARS-CoV-2 had been engineered from HIV, while an influential preprint on the use of ivermectin for COVID-19 was removed amid a scandal involving the company that provided the underlying data.
See “How (Not) to Do an Antibody Survey for SARS-CoV-2”
Rapid Reviews (RR:C19) will use both artificial intelligence (AI) and an army of volunteer reviewers to identify the most "important" preprint studies in need of review. Stefano Bertozzi, a professor of health policy and management at the University of California, Berkeley, and the journal’s editor-in-chief, tells STAT the AI will help prioritize where to direct the human effort.
“There is such a huge volume of material every day, our goal is to do rapid reviews on preprints that are most interesting . . . as well as those that need to be validated or debunked, especially if they’re getting a lot of attention in the media or social media,” Bertozzi says. As of today, medRxiv and bioRxiv host some 5,900 papers.
Artificial intelligence designed by the Lawrence Berkeley National Laboratory will first categorize studies by discipline and rank them by novelty and by their importance to health officials, clinicians, and the public. These decisions will be checked by volunteer graduate students, who will in turn pass their recommendations on to a pool of 1,600 reviewers. Each article will be independently evaluated by up to three experts, and reviews will be published without requiring the study authors’ permission (although the authors will be notified).
The RR:C19 team will publish its first reviews by mid-July, aiming to have commentary out within a week or so of a paper first appearing online, although Bertozzi acknowledges to Inside Higher Ed that the goal is an ambitious one. “When you’re rushing, mistakes happen,” Bertozzi says. “I’m sure we’ll have egg on our face, too, before too long.”
Bertozzi also says it will be interesting to see whether this new initiative prompts changes to how peer review is carried out at traditional journals, which are not themselves immune to error. According to the watchdog organization Retraction Watch, more than 20 peer-reviewed papers have been pulled from journals during the pandemic, including two high-profile retractions from The New England Journal of Medicine and The Lancet.
See “Lancet, NEJM Retract Surgisphere Studies on COVID-19 Patients”
“We are confident the RR:C19 journal will quickly become an invaluable resource for researchers, public health officials, and healthcare providers on the frontline of this pandemic,” Vilas Dhar, a trustee of the Patrick J. McGovern Foundation, which is funding the new publication, says in a news release. “We’re also excited about the potential for a long-term transformation in how we evaluate and share research across all scientific disciplines.”