Coming to Grips with Coauthor Responsibility

The scientific community struggles to define the duties of collaborators in assuring the integrity of published research.

May 1, 2017


When cancer researcher Ben Bonavida accepted a visiting graduate student from Japan into his lab at the University of California, Los Angeles (UCLA) just over a decade ago, he treated Eriko Suzuki like every other student he had supervised for the past 30 years. “I met with her regularly,” Bonavida recalls. “We went over her data, she showed me all the Westerns, all the experiments.” After months spent working on the cancer therapeutic rituximab’s mechanism of action, “she presented her findings to me and the other collaborators in the lab, and based on that we published a paper in Oncogene.”

Appearing in 2007, the paper accrued nearly 40 citations over the next seven years. But in April 2014, the study gained a less favorable mention on PubPeer, a website where users anonymously discuss research articles, often raising possible causes for concern. One user noted that some of the Western blots used to support the paper’s conclusions looked suspicious. In particular, one figure appeared to contain a duplicated and slightly modified part of another image.

PubPeer’s readers didn’t have to wait long to find out if their suspicions were grounded. Within the week, Bonavida’s visiting student—by then an assistant professor at Tokyo University of Agriculture and Technology—had confessed to image manipulation, and the paper was eventually retracted in 2016, with a brief statement citing “data irregularities.” In UCLA’s ensuing investigation, Bonavida was cleared of wrongdoing; nevertheless, he says, he was left in shock. “It affected me very deeply,” he says. “I have trained over a hundred students through my career. Nobody has done something like that with my work before.”

These days, Bonavida’s experience is becoming all too familiar. Scientific retractions are on the rise—more than 650 papers were pulled last year alone—and, more often than not, they’re the result of misconduct, whether image duplication, plagiarism, or plain old fraud. The pressure is now on the scientific community to address the issue of research integrity—and the role of coauthors like Bonavida in maintaining the veracity of research that they contribute to and ultimately support for publication. Even when coauthors have no involvement in the misconduct itself, is there something they should have done differently to avoid publication of the flawed research in the first place?

The answer depends on who you ask, says Hanne Andersen, a philosopher of science at Aarhus University in Denmark. While some papers containing misconduct are the work of serial fraudsters who have deliberately duped their coauthors, many cases are not so clear-cut, and there’s a whole spectrum of opinions as to the level of the collaborators’ responsibility to verify the authenticity of all elements of the research project, not just their own contributions. In short, Andersen says, “the scientific community doesn’t have a uniform view.”

Risky business

Over the past century, the average number of coauthors on a paper has climbed from essentially zero to between two and seven—with one of the most rapid increases seen in the biomedical sciences (PLOS ONE, doi:10.1371/journal.pone.0149504, 2016). “Multiauthored papers, often with more than 10 authors, are becoming commonplace,” wrote David Goltzman, a professor of medicine and physiology at McGill University, in an email to The Scientist. “In many cases, it is a major advantage to bring the expertise of scientists who have different research focuses together. [It] facilitates tackling scientific problems which could otherwise not be addressed.”

But this rise in coauthorship also exposes a vulnerability inherent to scientific research—that collaborations are fundamentally based on trust. “Trust is needed in science,” says Andersen. “If we didn’t trust each other, we would need to check everything everyone else did. And if we needed to check everything everyone else did, why collaborate in the first place?”

Carlos Moraes, a neuroscientist and cell biologist at the University of Miami who found himself in a similar position to Bonavida when a colleague’s misconduct led to the retraction of multiple coauthored papers, agrees. “If you are the main author of a ‘several pieces’ type of work, you can do your best to understand the raw data and the analyses,” he wrote in an email to The Scientist. “Still, trust is a must when the technique or analysis is beyond your expertise.”

If we didn’t trust each other, we would need to check everything everyone else did. And if we needed to check everything everyone else did, why collaborate in the first place?—Hanne Andersen, Aarhus University

But trust between collaborators can be violated, and when papers turn out to contain errors or falsified data, the damage is not limited to the guilty party. While scientists who issue corrections quickly and transparently may emerge unscathed or even be rewarded for doing the right thing (see “Self Correction,” The Scientist, December 2015), recent research suggests that a coauthor’s career can take a hit after retractions—particularly if misconduct is involved—even if they are cleared of wrongdoing (J Assoc Inf Sci Technol, doi:10.1002/asi.23421, 2015). In cases where one or a few researchers commit fraud, “other authors are in effect ‘victims’ of the scientific misconduct,” says Goltzman, who has had his own experience of retraction fallout after a colleague was found to have falsified large amounts of data.

Some see the issue as more nuanced, however. “It’s quite odd that you would consider authors of a fraudulent paper to have no responsibility,” says Daniele Fanelli, a Stanford University researcher who studies scientific misconduct. “But that’s because we’re in a system in which those authors would be getting undue credit for that paper if the problems hadn’t been discovered.” In Fanelli’s view, the issue boils down to ambiguity about what coauthorship entails, particularly when it comes to ensuring that a manuscript is accurate and complete. It’s a subject that has “almost willfully been ignored,” he says.

Defining responsibility

WEB OF RETRACTIONS: One author’s misconduct can have profound effects on the research community. The eight researchers with the highest individual retraction counts in the scientific literature—many of them for misconduct—have together coauthored problematic papers with more than 320 other researchers (circles, sized by retraction count and colored by continent of primary affiliation). The number of retraction-producing collaborations (black lines) between any two researchers varies, but in several cases, researchers produce multiple problematic papers with the same individuals or groups, leading to highly interconnected clusters of scientists linked by their retraction history. INFOGRAPHIC: ROMANO FOTI

Indeed, despite the growing abundance of collaborations in the global scientific community, the duties of individual researchers and their role in upholding a study’s integrity are rarely defined. During the UCLA investigation, for example, Bonavida says he and his colleagues realized that, even though Bonavida was not only a coauthor but the lab head, the university had no protocol outlining his responsibility for verifying the paper’s results. “They didn’t have any rules for the faculty that you need to keep documents and original data for so many years, and so forth,” he says. “They never made any such guidelines.”
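The structure behind that infographic—a network of researchers linked by the retractions they share—is simple enough to sketch in code. The example below is purely illustrative: the names and record pairs are invented placeholders, and the use of the networkx library is an assumption on our part, not anything described in the article or used to produce the graphic.

```python
# Hypothetical sketch of a coauthor retraction network like the one in the
# infographic above. The author names and pairs are invented placeholders,
# not real retraction data.
import networkx as nx

# Each tuple is a pair of researchers who share one retracted paper.
retraction_pairs = [
    ("Researcher A", "Researcher X"),
    ("Researcher A", "Researcher Y"),
    ("Researcher B", "Researcher X"),
    ("Researcher A", "Researcher X"),  # a second shared retraction
]

G = nx.Graph()
for a, b in retraction_pairs:
    # Edge weight counts how many retracted papers two researchers share,
    # analogous to the black lines in the infographic.
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1
    else:
        G.add_edge(a, b, weight=1)

# Node size in the infographic corresponds to each researcher's retraction
# count; here it is approximated by summing the weights of incident edges.
retraction_count = {
    node: sum(data["weight"] for _, _, data in G.edges(node, data=True))
    for node in G.nodes
}
print(retraction_count)
```

In a representation like this, the “highly interconnected clusters” mentioned in the caption would simply show up as densely connected components of the graph.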

A similar lack of procedure is also true of the journals that publish the research. Although some journals now require authors to itemize their contributions, there are no hard-and-fast standards about what coauthorship entails. “It’s dicey,” says Geri Pearson, co-vice chair of the Committee on Publication Ethics (COPE), a nonprofit organization that provides guidelines to journal editors on how to handle disputes in scientific publishing. “There’s a lot of fuzziness about authorship.”

Some journals have maintained that authors should accept equal responsibility for a paper—meaning both credit for its successes and blame for its flaws. In 2007, an editorial in Nature suggested an alternative—journals should require at least one author to sign a statement vouching for the paper and claiming responsibility for any consequences should the study be found to contain “major problems.”

But such “solutions” are generally criticized as unrealistic. Nature’s proposal attracted dozens of responses on its site, almost all of them negative. “What does it even mean?” Ferric Fang, a microbiologist at the University of Washington who also studies scientific misconduct, tells The Scientist. “That there should be an individual who flies around to each person’s lab and does an inspection? Even then, how could you be sure that someone wasn’t doing something unethical? . . . To act as if we can declare that [one person is] fully responsible and that makes it so, I think it’s kind of ridiculous.”

In cases where one or a few researchers commit fraud, other authors are in effect “victims” of the scientific misconduct.—David Goltzman, McGill University

Rather than settling on a single, broad definition of coauthor responsibility, some researchers instead argue for complete transparency when a paper is found to contain flaws. Retracted papers are notoriously persistent in the literature, continuing to accumulate citations long after their findings have been debunked. (See “The Zombie Literature,” The Scientist, May 2016.) The UCLA group’s Oncogene paper, for example, was cited at least 15 times between being flagged on PubPeer in 2014 and being retracted two years later. Moreover, retraction notices themselves are often opaque, making it unclear what exactly led to a paper’s retraction, or how authors behaved during the process.

To address this problem, some researchers have proposed standardized retraction forms (see “Explaining Retractions,” The Scientist, December 2015), and in 2015 the Center for Open Science and the Center for Scientific Integrity, the parent organization of Retraction Watch, announced their joint effort to create a retractions database, searchable by various classifiers, including all coauthors, journal of publication, and the reason for the retraction. The tool, a preliminary version of which went live at retractiondatabase.org in December 2016, could aid the monitoring of published research itself, as well as help identify labs or individuals who are continually linked to misconduct, notes Andersen. “If you’re associated with it once, it would be a pity if you are punished for what someone else did,” she says. “But if you’re repeatedly associated with it, maybe that’s not a great lab for training young scholars.”
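To make the idea of a database searchable “by various classifiers” a little more concrete, here is a minimal sketch of the kind of filtering such a tool enables. The record fields and sample entries are invented for illustration; they are not the actual schema or contents of retractiondatabase.org.

```python
# Illustrative sketch only: hypothetical retraction records and filters,
# not the real retractiondatabase.org schema or data.
from dataclasses import dataclass

@dataclass
class Retraction:
    title: str
    journal: str
    authors: list
    reason: str

records = [
    Retraction("Paper One", "Journal A", ["Researcher A", "Researcher X"], "image manipulation"),
    Retraction("Paper Two", "Journal B", ["Researcher B"], "plagiarism"),
    Retraction("Paper Three", "Journal A", ["Researcher A"], "data fabrication"),
]

def by_author(recs, name):
    """All retractions listing the given researcher as a coauthor."""
    return [r for r in recs if name in r.authors]

def by_reason(recs, keyword):
    """All retractions whose stated reason mentions the keyword."""
    return [r for r in recs if keyword.lower() in r.reason.lower()]

# A researcher repeatedly associated with retractions stands out immediately.
print([r.title for r in by_author(records, "Researcher A")])  # two hits
print([r.title for r in by_reason(records, "image")])         # one hit
```

Queries like these are what would let readers distinguish a one-off association with misconduct from a repeated pattern, the distinction Andersen draws above.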

Addressing the cause

Even without a solid definition of coauthor responsibility, most researchers agree that scientists themselves can help combat misconduct with a more prudent attitude toward collaboration. “You see reports afterwards where people say, ‘Well, this looked almost too good to be true,’” says Andersen. “But nobody intervened.” Individual researchers could be more vigilant, she adds, particularly in the supervision of junior researchers. Bonavida says he now makes more of an effort to explain to graduate students how to correctly present their data. And Moraes says he has become “a lot more careful when scrutinizing the raw data.” His advice: get all the data, “including the so-called ‘unimportant controls,’ and not only the final bar graph.”

Researchers can also help combat misconduct by making adjustments to the way they organize their collaborations. Goltzman wrote that his group, part of a multicenter study on osteoporosis that uses considerable volumes of medical data, has now adopted procedures that encourage greater transparency. For example, “we previously allowed each investigator to mine all the data they deemed necessary for their study by access to a central database,” he explained. “We are now asking each investigator to request the data they need from a statistician . . . so that we know exactly what data is required and provided.”

Of course, while these measures may make getting away with misconduct more difficult, there’s only so much collaborators can do. Preventing misconduct altogether is a challenge that many argue requires a long hard look at the scientific community in general, including the pressures it places on researchers. Misconduct and retractions “are just symptoms of a process that’s not working at optimal efficiency,” says Fang. “What’s really needed is a more wholesale rethinking about how scientists are supported.” Solutions that don’t address the related problems of too little funding for too many researchers and the publish-or-perish mentality that still pervades the academic community are mere tweaks to a flawed system, he adds.

In the meantime, though, there’s a growing appreciation that research integrity is not black or white. “It’s not ‘Everything is well and good,’ or, ‘We’re moving into misconduct,’” says Andersen. In recognition of the gray areas of research conduct, there are now initiatives aimed at getting wayward scientists back on track. A National Institutes of Health-funded researcher rehab, for example, is currently working with scientists whose misconduct, or oversight of misconduct, has led to the publication of problematic papers. The organizers of the program, which includes a three-day workshop and follow-up contact over three months, claim that participants show tangible improvements in the way they manage their labs and conduct research.  

Such efforts mark a move in the right direction, notes Andersen. “If we can catch [questionable conduct] early on, and train people, and make sure they realize that this is questionable, we can make them better scientists,” she says. “That would be far better than catching it late—so late that we end their careers.” 

Correction (May 1): This story has been updated from its original version to correctly state that Aarhus University is in Denmark, not in the Netherlands. The Scientist regrets the error.

Comments

JonRichfield

May 1, 2017

"Nuanced approach" of one sort or another certainly is necessary. Where teams from disciplines cooperate in research it is perfectly possible for some members, fully competent in their own disciplines, to be veritable laity in each other's fields, which might well be why they cooperate in the first place.

Suppose that, say, a biologist works on material that requires the application of a new kind of scanning tunnelling particle microscope, still in its developmental stages, for the required visualisation; if the biologist cheats, can we blame the physicist for not being familiar with the latest molecular-biological research? Or if the microscope did not (yet?) deliver the promised results, do we blame the biologist with no functional understanding of QM or information theory if the physicist quietly enhances the images, with disastrous results?

Undeniably, many a senior author has no business attaching his name to an article to which he contributed so little that he could not be aware of cheating, but that does not imply that every team member must necessarily stand bail for every part of a paper.

There are cases where it might be desirable for the participating members to publish their own parts in separate articles, but for obvious reasons that often is easier said than done.

The whole sad mess is not going to go away in a hurry, though the current efforts and trends are promising, if distasteful; and cheating in research is only one dimension of unethical conduct. There are examples of collusion and withholding of data, as we have seen in the climategate scandal and in the subsequent evaluations and cover-ups. There are abuses of the peer review system, whether through cronyism or competition, and improper pressures brought to bear on junior staff to influence or suppress unwelcome results. There are difficulties in the very training of students, who begin their careers by cooking the books in undergraduate labs and tutorials long before they start serious postgraduate work.

We have a long way to go, and little in the line of GPS to guide us. It will take courage, competence, technology, and dedication.

Watch this space.

dumbdumb

May 1, 2017

Based on my experience, researchers faking data or experiments can be quite easily spotted. The reality is that coauthors, and in particular lab heads, just prefer not to see the signals indicating fraudulent behavior. After all, as long as the cheating is not unmasked, institutions, departments, authors, and coauthors all benefit from such publications, especially if they are in high-IF journals. And even the journals themselves prefer to ignore the matter as long as they can, as testified by their completely ignoring my email reporting one such case. And PubPeer as well, so far.

dmarciani

May 3, 2017

While coauthors should play an active role in making sure that the data are accurate and real, since their names are on the publication, we should not forget that manuscripts must also pass the scrutiny of the reviewers. Thus, reviewers should pay more attention to possible deficiencies or questionable issues in the proposed paper; indeed, as reviewers they can recommend to the editors the need for clarification on certain issues and/or additional studies to make the submitted article acceptable for publication. Also, it would be helpful if editors refrained from playing an active role in “protecting” articles from criticism because they were published in their journal. Open debate will help to clarify the issues, rather than perpetuating honest or dishonest erroneous information.

JonRichfield

Replied to a comment from dmarciani made on May 3, 2017

September 4, 2017

Agreed.

This is just another reason, among many, for avoiding anonymous review. Reviewers should be named and get credit for reviews, and should stand to lose for failing to point out shortcomings or suspicious material. The naming need not happen during the review process, but it definitely should be included at publication time.

And if reviewers were to regard that as too much of a threat to their reputations to permit their participation, why then, horrors: we would have the dreadful prospect of publications and editors having to stand on their own feet and take responsibility for maintaining their own stated standards. Editors would no longer be permitted to hide behind the excuse of the opinions of unanswerable, mysterious "anonymous referees."
