Sometimes even the best-known stories have hidden subplots. This January, Nature published two papers describing an astonishing new way to make stem cells: simply subject blood cells from young mice to a brief bath in acidic solution.1,2 The researchers behind the work—a team from the RIKEN Center for Developmental Biology in Japan and Harvard Medical School—called it stimulus-triggered acquisition of pluripotency, or STAP. These stress-induced stem cells were even more malleable than induced pluripotent stem cells (iPSCs), and, better still, they could be produced without the addition of transcription factors. Naturally, the press was abuzz with the promise of STAP to accelerate stem cell research. But in the less well-lit corners of the Web, some were already raising doubts.

Leading the way was Paul Knoepfler, a stem cell researcher at the University of California, Davis. “I quickly had the feeling this might be entirely wrong,” he recalls.

Kenneth Lee of the Chinese University of Hong Kong, who had tried and failed to replicate the STAP protocol, and Knoepfler weren’t the only ones with concerns. Knoepfler ran an online poll to gauge opinion, revealing rapidly dwindling confidence in STAP, and created a Web page where people could post results from their replication attempts. Lee penned a review detailing his results for ResearchGate’s brand-new Open Review site, and a Japanese blogger discussed specific problems with the papers’ figures. Commenters on PubPeer, a postpublication peer review website, raised further concerns. On February 14, RIKEN initiated its own investigation and, six weeks later, announced that Haruko Obokata, first author on both studies, was guilty of scientific misconduct. On July 2, Nature retracted the papers.

Knoepfler says social media played an influential role in righting the literature. “The momentum started on blogs and Twitter, and it took off from there,” he says. “I believe that without social media, right now the STAP papers wouldn’t have been retracted.”

The STAP saga—which took a tragic twist when Obokata’s supervisor at RIKEN, Yoshiki Sasai, committed suicide in August—is just the freshest example of scientists turning to blogs and social media to question and refute published findings. Back in 2011, University of British Columbia microbiologist Rosie Redfield made a splash when she live-blogged her attempts to replicate a study reporting the discovery of bacteria that could incorporate arsenic in place of phosphorus into their DNA. Beyond such high-profile cases, a small but growing band of scientist bloggers are hoping to accelerate research evaluation and make science more transparent. Ultimately, they argue, rapid-fire open critiques will enrich the scientific process.

“[Social media is] introducing a robust culture of community-driven postpublication peer review, and that’s hugely valuable,” says Chris Chambers, a neuroscientist at Cardiff University in the U.K. and a blogger for The Guardian. “It chips away at this idea that something must be true because it’s in a peer-reviewed journal and replaces it with the idea that your work is out there to be poked and prodded.”

Not all scientists are so enthusiastic. Many are apathetic about social media, and some are wary of new pitfalls, not least the potential for undeserved reputational damage. But as the scientific generations turn over, social media is on track to become a central part of research evaluation.

“Whether you like it or not, this is an unstoppable trend,” says Knoepfler. “It’s the new reality for today’s researchers. Your papers, particularly high-impact ones, are going to be subject to continuous feedback in real time.”

Real-time replication

DON'T BELIEVE THE HYPE

In addition to scrutinizing each other’s work, researchers who are active online are also taking aim at exaggerated or inaccurate science reporting. “This is an area where social media can be incredibly powerful if scientists do it right,” says Jonathan Eisen of the University of California, Davis, who regularly takes down dodgy science reporting on his Tree of Life blog. Many scientist bloggers first engaged in the activity specifically to counter misleading information being peddled by the media.

Over the past couple of years, Eisen has trained his sights on coverage of microbiome research. In August 2010, incensed by reports claiming that each new study would lead to a cure for this disease or that, he started dishing out the “Overselling the Microbiome” award to offending journalists and PR departments. It’s kept him busy. Eisen has blogged about 23 of these awards and has given out even more on Twitter, without going into as much detail. (He also doles out awards for overselling genomics.)

In May this year, Eisen launched a forensic dissection of the reporting from Science and The New York Times on a paper that characterized the microbial community living in the placentas of 320 pregnant women. His main gripe was the claim that the study suggests a causal link between oral health, the placental microbiome, and premature birth. “I see no evidence presented anywhere of the importance of oral health or any causal connection between oral health and the placental microbiome or risks to pregnancies,” wrote Eisen. “The claims made about this here in this news story are irresponsible.” It’s speculation, he said, and that must be made clear.

Although the vast majority of the people who read the original stories will not have seen the corrective, Eisen insists it’s a worthwhile exercise. “Reporters have told me they’re more careful because they don’t want one of my awards,” he says. And it’s not just reporters, he adds. PR departments at Cedars-Sinai Medical Center in Los Angeles and the University of Bern have received Eisen’s microbiome-hype awards.

But Eisen says he wants to do more than make sure the public isn’t misled; he also strives to protect the reputations of scientists, who can suffer the consequences when a field inevitably doesn’t deliver on trumped-up promises. “I remember seeing [President Bill] Clinton talking about how sequencing the human genome is going to cure cancer, and it was just completely overselling it,” says Eisen. “Now, in the last few years when science programs have asked for more money, the response is, ‘Oh yeah, well, you said the genome was going to cure cancer, and it hasn’t done shit.’ I don’t want that to happen with microbiome research.”

According to Paul Knoepfler, a stem cell researcher also at UC Davis and a dedicated blogger, scientists have a responsibility to skewer hype. “It’s particularly important in a field like stem cells, where the potential clinical applications are huge and the public is very engaged,” he says. “There is a lot of misinformation out there, but we can offer a dose of reality on our blogs. Who else is going to do it, if not us?”

Rapid-fire feedback is not new to science. In the 17th and 18th centuries, gentleman scientists shared their results at scientific societies and faced criticisms on the spot. That still happens at conferences today to some extent, but the modern scientific process came to be dominated by private and anonymous peer review. Once published, data and conclusions were rarely questioned or discussed outside of the formal confines of academic journals, says Chambers. “If you saw a paper you thought was bullshit, you would probably discuss it with colleagues and leave it at that,” he says. “You could try to send a letter to the journal, but that’s very slow, and there’s no guarantee they’ll publish it anyway.”

Online publishing and social media have changed all that. Now anyone can share their opinions with the world. For scientists, that provides an unprecedented opportunity to accelerate discussions once mediated by journal editors. It’s early days, but some researchers have embraced digital discourse with open arms—and quick-typing fingers.

Chambers, who uses brain-imaging techniques and transcranial magnetic stimulation (TMS) to study cognitive control in the human brain, started blogging in 2011. He reviews new papers, offers thoughts on how to improve research practice, and occasionally shares sharp criticisms of other people’s work. In March 2012, for example, Chambers posted a detailed critique of a study from the University of Sydney’s Richard Chi and Alan Snyder, who concluded that a form of electrical brain stimulation helps people solve tricky puzzles. “I’ve read their paper several times now, back to front, even sideways a couple of times,” he wrote. “And I still can’t find any evidence to support this claim. Instead all I found was a long list of flaws.”

A dozen or so other researchers have joined Chambers among the ranks of dedicated scientist bloggers, and on several occasions their posts have made news. Redfield rocketed to relative fame in 2011, when she publicly refuted the NASA-funded study apparently demonstrating that bacteria from California’s Mono Lake could survive without phosphorus, instead incorporating arsenic into their DNA.3 The finding, published online in Science in December 2010, would have profound consequences for astrobiology, suggesting that environments lacking phosphorus, an element thought to be essential to all organisms, might support life after all. NASA teased the paper for a few days before it was released, touting “an astrobiology finding that will impact the search for evidence of extraterrestrial life,” and the press was all over it. Redfield, on the other hand, was not impressed.

“I thought it was garbage,” she recalls. She posted a critique on her blog RRResearch detailing potential flaws. The authors had not ruled out the possibility that phosphorus had contaminated the medium on which the bacterium, called GFAJ-1, was grown, argued Redfield, and the arsenic they detected may have come from something other than DNA. “Basically, it doesn’t present ANY convincing evidence that arsenic has been incorporated into DNA,” she wrote. “Lots of flim-flam, but very little reliable information.”

The post went viral, kick-starting an online orgy of criticism and counter-criticism. Six months later, Science published eight “technical comments” about the paper, including one from Redfield, and a response to the comments from first author Felisa Wolfe-Simon, now at Lawrence Berkeley National Laboratory in California. By that time, Redfield had begun live-blogging about her attempts to replicate the results in her lab. “It was a great chance to do open science while people were actually watching,” she says.

In the end, Redfield and colleagues from Princeton University, who had reached out to her via her blog, failed to replicate Wolfe-Simon’s results. In February 2012, Redfield posted data demonstrating that there was no arsenic present in the DNA of GFAJ-1 bacteria taken from Mono Lake and grown in a low-phosphate medium. They uploaded their report to the preprint server arXiv immediately and waited for a response from Science.

Despite the new results, Wolfe-Simon and her colleagues stood by their conclusions and even denounced Redfield’s approach. “We do not fully understand the key details of the website experiments and conditions,” Wolfe-Simon told Nature News at the time. “So we hope to see this work published in a peer-reviewed journal, as this is how science best proceeds.” They got their wish when Science published Redfield’s paper in July 2012.4 A few months later, it was followed up with a Nature paper from another group demonstrating that GFAJ-1 has a high preference for phosphorus over arsenic.5 Scientists in the field have all but dismissed the original results, though the paper has not been retracted.

The episode serves as a dramatic example of how social media can speed up science’s oft-boasted ability to self-correct. It also shows how scientist bloggers can set the record straight in a highly visible way. “If you Google ‘arsenic DNA,’ most of the top hits are about the refutation rather than the original result,” says Redfield. “If we’ve got Google serving up the truth, then I think that represents some level of success for the approach.”

Rethinking review

Open-science enthusiasts point to the rising number of retractions, cases of misconduct, and problems with reproducibility as evidence that research must be critically examined even after publication. “There is the wrong impression that [peer review is] infallible,” says Jonathan Eisen, an evolutionary biologist at the University of California, Davis, who writes the Tree of Life blog. “That’s not how science should work. We should be evaluating things continuously, and I believe dynamic online discussion is the best way.”

But while online communication has the potential to accelerate postpublication review and open a public window on the scientific process, the speed and reach of social media also harbor dangers for scientists. In an editorial published in June 2013, Current Biology editor Geoff North pointed out that “[online] critics are less accountable than in the more ‘traditional’ system of peer-reviewed journals,” and that hastily posted criticisms, often penned in a fit of pique, can cause unwarranted reputational damage. “And once a scientific reputation has been tainted, it can be hard to restore confidence,” he wrote.

Aware of such risks, scientist bloggers emphasize the importance of self-control. “Things that feel cathartic to write often don’t feel good to read,” says Chambers. “You have to be very careful with tone.” In the discussions of Mono Lake’s arsenic-DNA bacteria, for example, refutation by blog spilled over into personal attacks on Wolfe-Simon. On that occasion, Eisen took to his own blog to call for commentators to focus on the data. He should know; in 2009 Eisen himself posted a missive against another scientist for not citing a nearly identical study from his own group. “On reflection, I was way too aggressive, and I retracted the post,” he says. “The lesson was to stick to commenting on the work, rather than speculating about motivations.”

If kept professional, though, open conversation can be a shortcut to clarification. Following public questioning of the new STAP method, Harvard’s Charles Vacanti, a senior researcher on one of the papers who has since announced that he will take a one-year sabbatical, published a new protocol stipulating that the cells should be squeezed through tiny pipettes before being dunked in acid. Having been nudged on Twitter by Knoepfler, Chinese University of Hong Kong’s Lee again took up the challenge. This time he live-blogged his efforts, posting daily updates and photos on ResearchGate.

Again, he could not replicate the results, but his efforts did yield some interesting findings—and highlighted another possible pitfall of the online approach. Lee noticed that the negative control cells he’d squeezed through his narrow pipettes, but had not dunked in acid, showed some expression of the genes associated with pluripotency. Lee was cautious, of course; he knew he’d need to repeat the experiment to validate what he’d seen, and he wrote as much. But several journalists saw the post and, without speaking to Lee, reported that he’d validated the STAP technique. “These things can quickly take on a life of their own,” he says.

So how is the scientific community as a whole handling the double-edged sword of instantaneous research evaluation? Many are ignoring it altogether, according to several scientist bloggers, and among researchers who do acknowledge it, the majority holds firmly to the belief that all scientific debate should take place in the pages of scientific journals. “There is this very vocal community online, but most of the iceberg is still under water,” says Eisen. “Most people are either skeptical of social media or just conservative in that they don’t want to change what they’re doing and adapt to the new landscape.”

Going mainstream

Although open online discussion of peer-reviewed work remains the exception rather than the rule, websites dedicated to postpublication peer review are beginning to sprout, typically tended by younger scientists who have grown up with the Web.

Among the most popular is PubPeer, launched in October 2012 by anonymous researchers who describe themselves as “early-stage scientists.” The site allows people to comment on any scientific article that has a DOI, as well as on preprints posted to arXiv. “Many people write blogs, but even in the Google age it is quite difficult to search for comments in any systematic way,” a spokesperson for PubPeer told The Scientist in an e-mail. Indeed, while some journals allow comments on online articles, researchers have to post and view them on a journal-by-journal basis. PubPeer aims to change that by providing a centralized repository for comments, which are kept anonymous to assuage researchers concerned that critical reviews could damage their careers.

Others are taking a more open approach. PubMed Commons, launched by the National Center for Biotechnology Information (NCBI) in December 2013, invites scientists with publications indexed in PubMed to post comments, along with their name and institution, at the bottom of the 23 million (and counting) papers in the literature repository. And in March 2014, in the midst of the STAP debacle, ResearchGate joined the postpublication peer review movement by launching Open Review, which asks reviewers to provide slightly more formal evaluations of published studies—and to put their names to their comments. That’s important, says Ijad Madisch, a Berlin-based physician with a PhD in virology who cofounded ResearchGate in 2008. “The main benefit of postpublication peer review using social media is that researchers can engage in discussions about their work and get feedback on it in real-time,” he says. “But this only works if the process is open and transparent and researchers use their real names.”

The approach certainly seems to be popular: ResearchGate now boasts 5 million members, and researchers have posted more than 12,000 reviews on Open Review. What’s more, in a recent Nature survey, 88 percent of 3,500 scientists and engineers polled said they were aware of ResearchGate, and 1,589 of those researchers said they visited the site regularly. Still, the survey suggests the number of researchers who actively discuss research remains low, with just 14 percent of regular visitors saying they have posted comments to the site.

To encourage more researchers to post critiques of each other’s work, Eisen suggests attaching DOIs to constructive comments, so that each comment can itself be cited. “We probably need to make it more formal and offer rewards if we’re to get scientists to really embrace postpublication review,” he says.

Most agree that traditional peer review, for all its problems, will retain a central role in science in the 21st century. But at this point it seems almost inevitable that social media will have a big impact on what happens after publication. “Transformative is a strong word,” says Knoepfler, “but I think it applies here.” 

Daniel Cossins, a former associate editor of The Scientist, is a freelance writer living in London.

References

  1. H. Obokata et al., “Stimulus-triggered fate conversion of somatic cells into pluripotency,” Nature, 505:641-47, 2014.
  2. H. Obokata et al., “Bidirectional developmental potential in reprogrammed cells with acquired pluripotency,” Nature, 505:676-80, 2014.
  3. F. Wolfe-Simon et al., “A bacterium that can grow by using arsenic instead of phosphorus,” Science, 332:1163-66, 2011.
  4. M.L. Reaves et al., “Absence of detectable arsenate in DNA from arsenate-grown GFAJ-1 cells,” Science, 337:470-73, 2012.
  5. M. Elias et al., “The molecular basis of phosphate discrimination in arsenate-rich environments,” Nature, 491:134-37, 2012.
