Opinion: Unethical Reporting

Two publications on the same topic are compromised by the decision to separate the data.

By Dariusz Leszczynski | April 15, 2013

Arbitrary data segregation is at best scientific folly and at worst unethical.
The International Commission on Non-Ionizing Radiation Protection (ICNIRP) is currently reviewing the scientific evidence on whether the radiation emitted by wireless communication devices is harmful. However, some of the evidence published in peer-reviewed journals may have to be discarded because of flaws. In a recent opinion for The Scientist, I presented the flaws of the Danish Cohort, the largest recent epidemiological study to investigate the possible health effects of cell-phone use. Other deeply flawed publications on this topic came out of the Interphone project.

Like the Danish Cohort, the strength of Interphone, an EU-funded epidemiological study examining a possible causal link between exposure to cell phone radiation and brain cancer, was its large size. In total, 13 countries participated in the study, and researchers examined more than 1,500 cancer cases. But in 2011 Interphone published two studies, neither of which took full advantage of the massive dataset. Instead, each purposefully focused on only part of the project's data.

The first study, published in the American Journal of Epidemiology in May 2011, found no causal link between the locations of the brain areas most exposed to the radio frequency electromagnetic fields (RF-EMF) emitted by cell phones and the locations of gliomas in people from Denmark, Finland, Germany, Italy, Norway, Sweden, and England. The second study, published in Occupational and Environmental Medicine in June 2011, focused on populations in Australia, Canada, France, Israel, and New Zealand and found a "weak" causal link between the brain sites most exposed to RF-EMF and the locations of gliomas.

The power of the project was completely lost through this arbitrary split of the data. The authors of both publications admitted as much in the articles' discussion sections, but provided no scientific justification for the decision.

When I personally reached out to the senior authors on each paper, they pointed the finger at logistics. “The reason for the different approaches was mainly circumstantial,” said Elisabeth Cardis, a researcher at the Centre for Research in Environmental Epidemiology (CREAL) in Barcelona, Spain, and senior author of the OEM paper. “I left IARC [International Agency for Research on Cancer] for Barcelona in 2008, and there was no one left at IARC to coordinate international analyses of Interphone. For reasons of data protection and logistics, only 5 of the countries got permission for the database to be transferred to CREAL for further analyses; those are the countries included in the OEM paper.”  

The senior author of the AJE paper, Anssi Auvinen of the University of Tampere in Finland, said: "Quite simply, a large ship turns slowly. With the experience of slow and arduous process of reaching consensus in the large group, and in contrast much faster progress with the North European group, we wanted to take the next step and move from interview data alone to more in-depth analysis sooner rather than later. I learned only later that another publication was also being prepared in parallel by Elisabeth [Elisabeth Cardis, Coordinator of Interphone project], who was fully aware of our analysis, as I had circulated our manuscript to her before submission."

Both published studies were unable to determine the existence of a causal link between cell phone radiation and brain cancer because, as the authors admitted in their discussion sections, the sample sizes used in each study were too small. And, as the senior authors' statements clearly indicate, there was no scientific reason for splitting the data.
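The loss of statistical power from splitting a dataset can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative, not a reproduction of the Interphone analysis: only the figure of roughly 1,500 total cases comes from the article, while the exposure prevalence, odds ratio, and control counts are hypothetical assumptions chosen to show the effect.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_case_control(n_cases, n_controls, p_exposed_controls,
                       odds_ratio, z_crit=1.959964):
    """Approximate power to detect a given odds ratio in a 2x2
    case-control table, using the normal approximation to log(OR).
    z_crit = 1.96 corresponds to a two-sided alpha of 0.05."""
    p0 = p_exposed_controls
    # Exposure probability among cases implied by the odds ratio.
    p1 = odds_ratio * p0 / (1.0 + p0 * (odds_ratio - 1.0))
    # Expected counts: exposed/unexposed cases (a, b) and controls (c, d).
    a, b = n_cases * p1, n_cases * (1.0 - p1)
    c, d = n_controls * p0, n_controls * (1.0 - p0)
    # Woolf variance estimate for log(OR): 1/a + 1/b + 1/c + 1/d.
    se_log_or = math.sqrt(1.0/a + 1.0/b + 1.0/c + 1.0/d)
    return normal_cdf(abs(math.log(odds_ratio)) / se_log_or - z_crit)

# Hypothetical scenario: 1,500 cases analysed together versus split
# into two halves of 750, with 10% exposure prevalence among controls
# and a true odds ratio of 1.3.
combined = power_case_control(1500, 1500, 0.10, 1.3)
split = power_case_control(750, 750, 0.10, 1.3)
print(f"combined: {combined:.2f}, each half: {split:.2f}")
```

Under these assumed numbers, the pooled analysis has roughly 60 percent power, while each half on its own falls to roughly a third, which is the qualitative point the authors conceded: two underpowered papers are not a substitute for one adequately powered analysis.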

Interestingly, many of the authors of the AJE article had in the past expressed the view that there is no causal link between brain cancer and mobile phone radiation, and they found no causal link in their portion of the Interphone data. The OEM authors, on the other hand, did think that there might be a causal link, and they found a weak link. The circumstances suggest that these differing viewpoints may have motivated the split: the scientists known for their "no causality" opinion got together to publish the AJE study, while the scientists who believed there might be a causal link worked together on the OEM paper.

Regardless of the reason, arbitrarily splitting data for non-scientific reasons is unethical. The British Medical Journal states that the falsification of data "ranges from fabrication to deceptive selective reporting of findings and omission of conflicting data, or willful suppression and/or distortion of data. . ." And the US Office of Research Integrity defines falsification as "manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record . . ." If a scientist were to perform 13 in vitro experiments in the laboratory, and then pick seven and five of the data sets to publish as separate articles, such a scientist would be justly accused of data manipulation. Why aren't epidemiologists held to the same standard?

In the context of the plenary discussions at a meeting in Lyon, France, in May 2011, where IARC classified RF-EMF as a possible carcinogen, there is an urgent need for good-quality scientific evidence. Some of the published scientific evidence, like the two articles from Interphone, is of very poor quality and should be excluded from the ongoing ICNIRP evaluation.

Interphone did not provide reliable answers: whether or not mobile-phone use poses a risk is still unclear, in part because the Interphone scientists decided, for whatever reason, not to analyze the entire dataset together. Instead, the scientific literature now holds two competing and selectively reported Interphone publications. The authors of both articles should be required to publish a combined analysis of the full data set and, possibly, to retract the articles based on partial data sets, as these do not contain enough cases for statistical evaluation and their conclusions are clearly scientifically misleading.

Dariusz Leszczynski is a research professor at the Radiation and Nuclear Safety Authority in Finland.




Brian Hanley | April 17, 2013

But how can it be unethical in either case if the larger dataset was unavailable to the first group due to privacy concerns? It sounds like the coordinator moved to Barcelona, either for professional reasons or because the study ran out of money. 

Similarly, in the second case, how can it be unethical if the larger dataset did not contain the enhanced data necessary for his study? 

That isn't logical. 

The primary culprit here appears to be funding, not scientific missteps - unless it can be shown that in either case the scientists are lying. But I doubt they are. 

Dariusz L. | April 17, 2013


I hope you are just teasing or joking when you "buy" the explanations of these scientists.

In the first place, the teams were located in 13 different countries, so transfer of information was possible, and the physical location of the coordinator, whether in Lyon or in Barcelona, did not matter at all.

The data were already collected and they had money to make separate analyses, so why not do it together?

Because with the first Interphone article they argued for over 4 years about how to write the manuscript and how to present the findings. So, to avoid arguments and the inability to reach scientific consensus, they decided to split the data and publish two articles. They knew that each study did not have enough cases to reach statistical significance. But still they split the data and published two unreliable, statistically insignificant papers.

This is unethical.


Arisaig | April 17, 2013

Yes, it does seem really stupid not to have combined both datasets (especially if there was data that could have been combined) to get a more informed set of results. I think it was selective and unethical when the combined data might have shed greater light (one way or another) on the possible health effects under scrutiny in this case.

I would probably also agree to a certain extent with BHP that the real culprit in this case was most likely funding issues.

At least it was not as unethical as this scientist (who has now been jailed for 3 months), who selected and presented the data that he wanted to see!


anupama_ifp | April 18, 2013

What I don't understand is this: why don't they do it now? Combine the available data from the two studies and see what results they get. It may upset both earlier papers, but that's science, and in this case it has an important application to society.

Dariusz L. | April 18, 2013


Using "funding" as an excuse does not withstand scrutiny. The data were already collected, and the scientists had funding to do two separate analyses. Why not a single one?

Lack of funding is not a reason for two separate articles.

Dariusz L. | April 18, 2013


It is a "million dollar question" why they do not want to perform a single analysis.

Importantly, neither the funding agencies (the EU and industry) nor the supervising agencies (WHO and IARC) seem to care.

But this is also an unethical attitude and a clear waste of taxpayers' money (EU funds).

So, in the end, not only have the scientists done something unethical; the EU, industry, WHO, and IARC have also behaved unethically.

Arisaig | April 19, 2013


My reference to "funding" being part of the problem here was not in relation to the data that was already there, but more to the fact that there was no one available, or perhaps willing, to take on the job and responsibility of collating and combining the relevant datasets from both studies into one.

I'm sure, like me, you have seen this happen before: a lead researcher gets a new job and off they go, leaving their past work behind for others to continue or complete (of course some will take it with them, but that also depends on many factors and the level of the researcher). Researchers are often on contracts (2-5 years, etc.), so if there is no "funding" available to continue to pay their salaries, then the work sometimes comes to a halt. Everyone has bills to pay.

With something like this, I suspect there's more than just politics and egos involved - it's a very "sensitive" area and it would take a strong, committed and determined individual to seek to do this, especially if there was no money available to pay for their "time".

Dariusz L. | April 19, 2013


I must disagree with your "explanations". If there is a will, there is a way to do things. Nowadays, in the internet era, it is possible to do things if scientists wish to do them. Here there was clearly no wish to do things together. The teams simply did not even want to talk to each other.

This is a common situation in cell phone research. There are two sides of the debate, and they distrust each other and do not want to engage in any debate.

To understand more, see my blog sites:




hkvora | April 21, 2013

I fully agree with the writer of this article. The reporting of split data for no scientific reason is completely unethical. What makes this act more troublesome is the fact that RF-EMF has been classified as a potential carcinogen.

The implications of such flawed data are enormous, even economically: imagine cell phone companies and electronics giants having to change radio frequencies for cell usage across the world, litigation by insurance companies over claims for cancer... the list is enormous.

I also think that, along with a demand for either the complete retraction of the data or republication as a total study, the funding agencies for the two groups should also pressure them to come up with a study of the entire data set, or refund the money!

One more aspect begs attention: why were the two Interphone articles published in different journals? Were these articles reviewed or rejected by other journals, and were the reviewers at any of these journals 'supported' by funding for their labs from any cell phone companies or electronics giants? Everyone in the scientific community is privy to the fact that, though peer review is the best review process we have, it is not foolproof against plagiarism, bias, or contacts with the 'powers that be'. It seems appropriate to raise these questions, since the groups of authors were themselves biased toward particular results.

-Dr. Hardeep Vora.

Dariusz L. | April 21, 2013

Dr. Hardeep Vora,

Very good points, in addition to those I made in the article. Thanks!

What is unfortunate is that the funding agencies seem indifferent. Over twenty million euros of taxpayers' money were used and no answers were provided, mainly because of the insufficient numbers of cases and controls. The design of the studies was not optimal for producing statistically reliable data, and it was worsened further by the flaws in reporting...
