In 1992, oncologist Werner Bezwoda wowed an audience at a conference in San Diego by describing how 90 percent of women with advanced breast cancer whom he had treated in his South African clinic with high-dose chemotherapy and bone marrow transplantation had achieved complete remission of their cancer. Seven years later he described more good results, but three independent trials of the treatment found no benefit. People became suspicious, and investigators eventually found that the hospital ethics committee had no record of his studies, patients reported as alive had been discharged for terminal care, and many of them had not given consent. Bezwoda eventually confessed to misconduct and disappeared from science. Shortly thereafter, his studies were retracted.

Research misconduct is most often discussed in the context of developed countries, but as this high-profile case illustrates, wherever there is human activity, whether in politics, sports, religion, or science, there will be misconduct.

In a recent PLOS Medicine article (10:e1001315, 2013), we addressed the “big three” of research misconduct: data fabrication, data falsification, and plagiarism. Increasingly, however, we recognize that a multitude of “questionable research practices,” including selective reporting, redundant publication, hiding conflicts of interest, listing authors on papers who have done little or nothing, and much more, probably does more damage to science than the “big three.”

Unsurprisingly, there are few data on misconduct from the developing world, but studies of article retractions and problems with authorship confirm that misconduct occurs in low- and middle-income countries, and some work suggests that it might even be more common there than in developed regions. A recent systematic review of studies conducted in high-income countries shows frighteningly high levels of misconduct: nearly 2 percent of scientists had themselves fabricated or falsified data, and one-third admitted to questionable research practices, including selective reporting (e.g., “dropping data points based on a gut feeling”) and altering an experiment or its results “in response to pressures from a funding source” (PLOS ONE, 4:e5738, 2009). When asked about other researchers, those surveyed said that they believed as many as 14 percent of their colleagues had fabricated or falsified data and nearly three-quarters were guilty of questionable research practices.

Among low- and middle-income countries, the volume of research is increasing most dramatically in China, and a 2006 article in Science described the country as a “scientific Wild West [where] an unprecedented number of researchers stand accused of cheating—from fudging resumes to fabricating data—to gain fame or plum positions” (312:1464-66). But the article contained no data, and when the National Natural Science Foundation of China investigated 542 allegations of misconduct, it found positive evidence in 60 cases—a level of misconduct comparable to that seen in developed countries with similar amounts of research. The main problems were data falsification (40 percent), plagiarism (34 percent), and data fabrication or theft (34 percent).

Unfortunately, it can be difficult to prove or disprove misconduct. In some cases, such as that of R.B. Singh from India, no proper oversight body exists. Singh has published dozens of trials that many suspect are fraudulent, but because he is a private practitioner, there is no institution to investigate the allegations. He insists he is innocent, but the British Medical Journal and The Lancet have published “expressions of concern” about his research.

Indeed, most low- and middle-income countries have no system for responding to research misconduct—China is one exception, with its Office of Scientific Research Integrity Construction founded in January 2007 to investigate allegations of misconduct—and in many countries, research misconduct is simply not discussed. Even in the developed world, misconduct is often not taken seriously. A paper recently published in The Lancet showed that only two countries in Europe—Norway and Denmark—have misconduct response systems enshrined in law, comparable to the Office of Research Integrity in the United States (381:1097-98, 2013). Most European countries, including many where research has been conducted for centuries, have no national system for responding to misconduct.

We believe that research misconduct is common, and that, although it might be unpleasant to discuss, every country that conducts research needs a national system to provide leadership on preventing, recognizing, investigating, correcting, and punishing wrongdoing in science. At the moment very few countries, rich or poor, have adequate systems.

Richard Smith is director of the United Health Chronic Disease Initiative and former editor of BMJ. Tracey Koehlmoos is the special assistant to the assistant commandant of the Marine Corps and senior program liaison for community health integration for the US Marine Corps. The opinions expressed in this article are her own and in no way reflect the opinions of the US Marine Corps, the Department of Defense, or any other agency.
