Opinion: Unethical Reporting

Two publications on the same topic are compromised by the decision to separate the data.

Apr 15, 2013
Dariusz Leszczynski

Arbitrary data segregation can be at best scientific folly and at worst unethical.
The International Commission on Non-Ionizing Radiation Protection (ICNIRP) is currently reviewing the scientific evidence concerning whether the radiation emitted by wireless communication devices is harmful. However, some of the evidence published in peer-reviewed journals may have to be discarded due to flaws. In a recent opinion for The Scientist, I presented the flaws of the Danish Cohort, the largest recent epidemiological study to investigate the possible health effects of cell-phone use. Other deeply flawed publications on this topic came out of the Interphone project.

Like the Danish Cohort, the strength of Interphone, an EU-funded epidemiological study examining a possible causal link between exposure to cell phone radiation and brain cancer, was its large size. In total, 13 countries participated in the study, and researchers examined more than 1,500 cancer cases. But in 2011 Interphone published two studies, neither of which took full advantage of the massive dataset. Instead, each purposefully focused on only part of the project's data.

The first study, published in the American Journal of Epidemiology in May 2011, found no causal link between the location of brain areas most exposed to the radio frequency electromagnetic fields (RF-EMF) emitted by cell phones and the location of gliomas in people in Denmark, Finland, Germany, Italy, Norway, Sweden, and England. The second study, published in Occupational and Environmental Medicine in June 2011, focused on populations in Australia, Canada, France, Israel, and New Zealand and found a “weak” causal link between the locations of sites most exposed to RF-EMF and the location of gliomas.

The power of the project was completely lost by this arbitrary split of the data. The authors of both publications admitted as much in the articles' discussion sections, but provided no scientific justification for the decision.

When I personally reached out to the senior authors on each paper, they pointed the finger at logistics. “The reason for the different approaches was mainly circumstantial,” said Elisabeth Cardis, a researcher at the Centre for Research in Environmental Epidemiology (CREAL) in Barcelona, Spain, and senior author of the OEM paper. “I left IARC [International Agency for Research on Cancer] for Barcelona in 2008, and there was no one left at IARC to coordinate international analyses of Interphone. For reasons of data protection and logistics, only 5 of the countries got permission for the database to be transferred to CREAL for further analyses; those are the countries included in the OEM paper.”  

The senior author of the AJE paper, Anssi Auvinen of the University of Tampere in Finland, said: “Quite simply, a large ship turns slowly. With the experience of slow and arduous process of reaching consensus in the large group, and in contrast much faster progress with the North European group, we wanted to take the next step and move from interview data alone to more in-depth analysis sooner rather than later. I learned only later that another publication was also being prepared in parallel by Elisabeth [Elisabeth Cardis, Coordinator of Interphone project], who was fully aware of our analysis, as I had circulated our manuscript to her before submission.”

Both published studies were unable to determine the existence of a causal link between cell phone radiation and brain cancer because, as the authors admitted in the discussion sections, the sample sizes used in each study were too small. And, as the statements of the senior authors clearly indicate, there was no scientific reason for splitting the data.

Interestingly, many of the authors of the AJE article had in the past expressed the view that there is no causal link between brain cancer and mobile phone radiation—and they found no causal link in their portion of the Interphone data. The OEM authors, on the other hand, did think that there might be a causal link, and they found a weak link. The circumstances suggest that perhaps these differing viewpoints motivated the split—that the scientists known for their “no causality” opinion got together to publish the AJE study, while the scientists who believe that there might be a causal link worked together on the OEM paper.

Regardless of the reason, the arbitrary splitting of the data for non-scientific reasons is unethical. The British Medical Journal states that the falsification of data “ranges from fabrication to deceptive selective reporting of findings and omission of conflicting data, or willful suppression and/or distortion of data. . .” And the US Office of Research Integrity defines falsification as “manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record . . .” If a scientist were to perform 13 in vitro experiments in the laboratory, and then pick seven and five of the data sets to publish as separate articles, such a scientist would be justly accused of data manipulation. Why aren't epidemiologists held to the same standard?

In the context of the plenary discussions at a meeting in Lyon, France, in May 2011, where IARC classified RF-EMF as a possible carcinogen, there is an urgent need for good-quality scientific evidence. Some of the published scientific evidence, like the two articles from Interphone, is of very poor quality and should be excluded from the ongoing ICNIRP evaluation.

Interphone did not provide reliable answers: whether or not mobile-phone use poses a risk is still unclear—in part because the Interphone scientists decided, for whatever reason, not to analyze the entire dataset together. Instead, the scientific literature now holds two competing and selectively reported Interphone publications. The authors of both articles should be required to publish a combined analysis of the full dataset and, possibly, to retract the articles based on partial datasets, as those articles have too few cases for statistical evaluation and their conclusions are clearly scientifically misleading.

Dariusz Leszczynski is a research professor at the Radiation and Nuclear Safety Authority in Finland.