Shortly after, the tweets arrived. “Preprints without methods are ads not scientific manuscripts and should be treated as such,” Michael Eisen, a biologist at the University of California, Berkeley, tweeted.
“The potential use of preprints to establish priority without full disclosure of methods or else is worrying and counterproductive,” tweeted Timothée Lionnet, a biophysicist at New York University.
The paper in question, which described a new RNA-sequencing technique, was published by a group of researchers at MIT’s Broad Institute. One of the study’s coauthors, Aviv Regev, a computational biologist, quickly responded to Preprint Now’s comment, indicating that this was an accidental omission and posting an updated version of the manuscript. “We appreciate your comment and completely agree with it. Methods sections should be included,” Regev wrote on bioRxiv.
Although the authors quickly corrected their mistake, the event has sparked a discussion about what to do about preprints that fail to meet scientists’ standards.
“I think the big question that we face is, what is the best way to deal with that?” Eisen says. “Should bioRxiv be checking that papers are complete in all the ways we think they should be complete, or is the world moving into [a system] where we expect the community of readers to judge work?”
Eisen adds that he, personally, is “leery of posing as gatekeeper because . . . there are a lot of different ways to do and communicate science, and one of the nice things about this world is putting back into the hands of the authors the means to describe their work in a way that they think is the most appropriate.”
In this situation with Regev’s paper, those involved agree that it was an example of community-policing working well. It’s a “great demonstration of the system working as intended,” says Richard Sever, a cofounder of bioRxiv. “There was an omission, this was identified, and within 24 hours the author had posted a revision that dealt with the problem.”
“The whole thing worked much better than I ever imagined,” one of the two postdoctoral researchers behind Preprint Now tells The Scientist. (The commenters asked to remain anonymous.) “I thought that [the exchange] was very good and productive—[however], I don’t think it’s representative,” the postdoc adds, referencing another recently published bioRxiv manuscript without methods by a different group reporting a similar technique.
The authors of that work had not responded to a request on the site asking for the methods after two weeks. However, they uploaded a new version of the paper last Friday, as soon as they became aware of the comment after being contacted by The Scientist.
Hao Wu, a geneticist at the University of Pennsylvania and coauthor of the second paper, tells The Scientist in an email that the group planned to post the complete paper after it was sent out for peer review. However, it was rejected without review by several journals. “We agree that the complete manuscript should be uploaded to the preprint server,” he adds. “This is our first preprint from my new lab, so I did not really know what the standard practice is in the field.”
One of the main worries scientists have about this type of behavior, according to Preprint Now, is that some researchers may take advantage of the preprint system by trying to stake claims for incomplete work.
Sever says the fact that all submissions to bioRxiv are date-stamped and all versions publicly available makes it difficult for researchers to “scoop” another’s work or try to claim an incomplete discovery. “It will not serve the researchers who behave like that, because the community will identify what they’re doing,” he adds, pointing out that Paul Ginsparg, the physicist who launched arXiv, raised these same arguments in a 2013 commentary addressing concerns about preprint servers.
A coarse filter
Preprints are becoming increasingly popular. BioRxiv has seen steadily increasing submission rates since its launch in 2013. This past June alone, the site received 1,096 new manuscripts, according to Sever. That’s up from 128 in June 2015. Even funding agencies are warming to preprints—earlier this year, the Wellcome Trust, the UK Medical Research Council, and the National Institutes of Health announced that these pre-peer-reviewed articles were welcome in grant proposals.
“Now that we’re at the beginning stages of starting to use preprints in biology, we should try to tackle this issue,” Eisen says. “Not because it’s a raging problem right now, but because it’s a potential issue [and] we would strengthen the preprint world if we dealt with it.”
In their exchange, both Preprint Now and Regev suggested ways that bioRxiv could better screen for standards: for example, refining submission guidelines or providing a checklist of key components to prevent authors from uploading substandard preprints.
BioRxiv already has a basic screening process in place. When a paper is submitted, one of around 70 affiliates—all principal investigators—will scan the article for issues such as non-scientific content, plagiarism, and material that might pose health or biosafety risks (for example, an article suggesting that vaccines cause autism).
“It’s a coarse filter,” Sever says. “What it’s not doing is providing any assurance about the scientific quality or completeness.” Adding further criteria, even one as simple as requiring a methods section, is not trivial, Sever says, because some manuscripts, such as theoretical papers, may not need one.
Lenny Teytelman, the cofounder of protocols.io, an open-access repository of scientific methods that also provides a preprint service, shares a similar view. “When we ask [for] a group of people that will read through the paper and see if it’s missing the data . . . at this point you’re asking for peer review,” he says. “My thought is, that’s what journals do, and the whole point of a preprint is for rapid dissemination—they’re not an editorial service and they’re not a peer-reviewed platform.”
Teytelman also thinks that adding better guidelines may have minimal impact. In his experience with protocols.io, authors often ignore instructions. “[Standards] can’t hurt, but I also don’t expect there to be a radical [change],” he says.
For now, many agree that the recent exchange was an example of the ideal situation where an issue is called out then immediately addressed. “I am glad this raised an important discussion,” Regev says in an email to The Scientist.
“I hope that our comment and the response from Aviv has given at least some indication that if people step forward and do something, that can change things, even on a very small level,” Preprint Now says. “Hopefully that will happen next time when another issue arises.”
Clarification (August 4): We have updated the article to note that Wu updated the preprint upon becoming aware of the comment on bioRxiv when The Scientist reached out.