
Opinion: Scientific Peer Review in Crisis

The case of the Danish Cohort

By Dariusz Leszczynski | February 25, 2013

Large studies have failed to find links between cell phone use and disease. Is peer review to blame? (Image: Wikimedia, Ildar Sagdejev)

The publication of a scientific study in a peer-reviewed journal is commonly seen as a kind of "ennoblement" of the study, a confirmation of its worth. The peer-review process was designed to assure the validity and quality of science that seeks publication. But this is not always achieved: if and when peer review fails, sloppy science gets published.

According to a recent analysis published in Proceedings of the National Academy of Sciences, about 67 percent of the 2,047 studies retracted from biomedical and life-science journals (as of May 3, 2012) resulted from scientific misconduct. The same PNAS study indicated that about 21 percent of the retractions were attributable to scientific error. This indicates that failures in peer review led to the publication of studies that should never have passed muster. This relatively low number of studies published in error (ca. 436) may be the tip of a larger iceberg, concealed by the unwillingness of editors to take action.

Peer review is clearly an imperfect process, to say the least. Shoddy reviewing, or shoddy reviewers, have allowed subpar science into the literature. We hear about some of these oversights when studies are retracted due to "scientific error." Really, the error in these cases lies with the reviewers, who should have caught such mistakes or deceptions in their initial review of the research. But journal editors are also to blame for not sufficiently using their powers to retract scientifically erroneous studies.

Case in point: In May 2011, the International Agency for Research on Cancer (IARC) classified cell phone radiation as a possible human carcinogen based predominantly on epidemiological evidence. In December 2011, the update of the largest recent epidemiological study, the so-called Danish Cohort, failed to find any causal link between brain cancer and cell phone radiation. It was published in the British Medical Journal.

However, as pointed out by a number of scientists, including myself, peer-review of the Danish Cohort study failed to recognize a number of flaws, which invalidate the study’s conclusions.

The only information collected pertaining to a person’s exposure to cell phone radiation was the length of their cell phone subscription. Hence, two persons using cell phones—one many hours and another only a few minutes per week—were classified and analyzed in the same exposure group if their subscriptions were of equal length. This meant that in the Danish Cohort study highly exposed people and nearly unexposed people were actually mixed up in the same exposure groups.

Of the initial cohort of 723,421 cell phone subscribers, more than 420,000 private subscribers were included in the study, but more than 200,000 corporate subscribers were excluded. The exclusion of the corporate cell phone users meant that, most probably, the heaviest users were excluded (unless they also had a private subscription). In addition to being excluded from user categories in the study, corporate users were also classified as unexposed. This means that the control group was contaminated. As the BMJ study admitted: "Because we excluded corporate subscriptions, mobile phone users who do not have a subscription in their own name will have been misclassified as unexposed…"

Another flaw of the study was a 12-year gap between the data collected on cell phone subscriptions and the information culled from a cancer registry. The study considered people with cell phone subscriptions as of 1995, while cancer registry data from 2007 were used in the follow-up. That means that any person who started a cell phone subscription after 1995 was classified as unexposed. So the study's authors considered a person who was diagnosed with brain cancer in 2007, but who had started a cell phone plan in 1996, as unexposed. In reality, that person with brain cancer had been exposed to cell phone radiation for 11 years.
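The dilution caused by this kind of misclassification can be illustrated with a small back-of-the-envelope calculation. This is a hypothetical sketch, not an analysis of the study's data: the 50/50 exposure split, the 30% misclassification rate, and the function name are illustrative assumptions.

```python
def observed_risk_ratio(true_rr, misclassified_fraction):
    """Expected risk ratio measured by a cohort study in which a fraction
    of the truly exposed subjects is analysed as unexposed.

    Assumes (hypothetically) a cohort that is half exposed and half
    unexposed, with a baseline disease risk that cancels out of the ratio.
    """
    baseline = 1.0                      # arbitrary units; cancels in the ratio
    exposed_risk = true_rr * baseline

    # Analysed "exposed" group: only correctly filed exposed subjects,
    # so its disease rate equals the true exposed risk.
    analysed_exposed_rate = exposed_risk

    # Analysed "unexposed" group: all truly unexposed subjects plus the
    # misfiled exposed ones (cf. the corporate subscribers, or anyone who
    # subscribed after 1995).
    unexposed_n = 0.5 + 0.5 * misclassified_fraction
    unexposed_cases = 0.5 * baseline + 0.5 * misclassified_fraction * exposed_risk
    analysed_unexposed_rate = unexposed_cases / unexposed_n

    return analysed_exposed_rate / analysed_unexposed_rate

# With no misclassification the true effect is recovered ...
print(observed_risk_ratio(2.0, 0.0))   # → 2.0
# ... but misfiling 30% of exposed subjects as unexposed dilutes it:
print(observed_risk_ratio(2.0, 0.3))   # ≈ 1.625
```

Non-differential misclassification of this kind biases the estimate toward the null (a risk ratio of 1), which is why critics argue that a "no effect" result from such a design is uninformative rather than reassuring.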

It is clear to me that these flaws invalidate the conclusions of the Danish Cohort study. Peer review failed, and a study that should never have been published, given its unfounded conclusions, remains a valid peer-reviewed article in the British Medical Journal. As long as the flawed study is not withdrawn, it will be used by scientists and by decision makers to justify their actions; for example, the US Government Accountability Office recently cited the Danish Cohort study as supporting evidence for the absence of a causal link between cell phone radiation and brain cancer.

How is it possible that the British Medical Journal allowed such a poor quality peer review? Were the peer reviewers incompetent or did they have conflicts of interest? What was the involvement of the BMJ’s editors? Why, once alerted to serious design flaws by readers, have BMJ editors not taken any action?

In my opinion the Danish Cohort study should be retracted because no revision or rewriting can rescue it. The study is missing crucial data on exposure to cell phone radiation. Furthermore, an investigation should be launched to determine why such a flawed study was published. Was it peer reviewer and BMJ editor incompetence alone or was a conflict of interest among reviewers involved? (The authors of the study declared no conflicts of interest, but the original cohort was reportedly established with funding from a Danish phone company.) Answering these questions is important because it might help to avoid similar mistakes in the future.

DISCLAIMER: All opinions presented are the author's own and should not be considered the opinions of any of his employers.

Dariusz Leszczynski is a research professor at the Radiation and Nuclear Safety Authority in Finland and a visiting professor at Swinburne University of Technology in Australia.


Comments

Henrik Eiriksson

February 26, 2013

The Danish Cohort study is being upheld by the Danish National Health Board as the reason not to invoke the Precautionary Principle regarding wireless technology, something that should have been done after the IARC/WHO 2B carcinogenic classification of radiofrequency radiation in 2011.

But the Danish Cohort study is a scientific travesty, and its conclusions of "no risk" don't hold up, as Prof. Leszczynski and others have highlighted.

Turns out that the only consultant on non-ionizing radiation matters for the Danish National Health Board is one of the authors of the Danish Cohort study: Christoffer Johansen.

Recently, a 40% increase in malignant brain tumours has been recorded in Denmark.

February 26, 2013

Nicely written and articulated and highly informative and important.  Thanks.   

February 26, 2013

The Swedish Socialstyrelsen (the Swedish National Board of Health and Welfare, www.sos.se) shows no change in brain tumour frequency between 1970 and 2011. There was a small variation between 10 and 15 cases per 100,000 persons per year; the maximum was 15.5 in 1984. There has been no increase since 1988, and in 2011 there were 12.5 cases of brain tumours per 100,000 persons.

Curculio

February 26, 2013

Some of the blame should be put squarely on the editors when they disregard, ignore, and otherwise minimize explicit criticisms made by reviewers.  I once made an observation where it was obvious the numbers in a manuscript's table were wrong, though you would have needed to understand the data transformation to see it.  Logs are easy to read but they don't scream at you unless you know them.  I told the editor that there had been a mistake somewhere in the analyses.  There did not have to be misconduct; I've made similar mistakes and therefore have experience spotting errors.  Nothing was done about it and now the table is published.  Fortunately, this is in ecology, so nobody will die or lose money because of it.  Follow the motivations for publishing and you will find the culprits.  It is no different from students telling faculty how to grade, otherwise they'll give less than stellar teaching reviews.  Hence, grade inflation.  The same with publications.

jeenious

February 26, 2013

Well said, Henrik.

But money talks more loudly than mere evidence.

Where once we believed there was a divine law giver, we now have only one law: "He who profits most in money is the fittest in every situation and thence the most in possession of Darwinian 'truth.'"

It's all about winning. And he who has the most money to pay in exchange for being correct, is correct.

May we call it, "$urvival of the fi$cally fitte$t?"

Gotta keep those sales quotas up!

(:>)

kenw

February 26, 2013

I looked at the BMJ article. The second column of page 4 mentions several "limitations of the study". Possible misclassification of corporate users was one of them! So there was no "failure to recognize" an error/limitation in this case. Perhaps you should calm down a bit.

Dariusz

February 26, 2013

Response to kenw:

The fact that the authors recognized the problems of misclassification and contamination of the controls makes it all even worse. Knowing of this flaw, the authors went ahead and published their conclusions. This is unethical behaviour. Not a reason to calm down; it is a scientific scandal.

raymond ffoulkes

February 26, 2013

Is there a decent causal story re. mobile phones & cancer?

Or are the studies purely statistical, or, perhaps, even just correlational?

anupama_ifp

February 26, 2013

Very well said, and it is timely: the danger lurks in all publishing nowadays, in the single-minded quest to be "cited" (for the researcher) and "sold" (for the publisher/editor). In areas like ecology, studies from regions with few reports get away with anything (rather, with no substance at all), simply because they are among the few things we come to know about the ecology/environment of xyz that has such interesting flora and fauna, as if it doesn't matter that we learn wrong things...

Biron

February 27, 2013

Professor:

"It is scientific scandal."

The flaws and limitations of the study are clearly indicated.

Also, I read your letter in which you stated:

"The study, with a cohort of over 723,000 people, concludes that there is no causal link between brain cancer and cell phone radiation."

It does not make this very broad claim.  It states that the study provides little evidence of a link.  That is very different from what you say.

You have exaggerated the study's claims in a way unbefitting your usual cautious evaluation.

Finally, you throw out the conflict-of-interest claim.  Let's not forget that you are looking for research money, which would flow more freely if there were a greater perception of risk.  With your significant influence in this field, one could make similar claims about you.

Get off the conspiracy bandwagon and get back to science as you implore the rest of us to do.

Regards

 

Doremifa

February 27, 2013

The Danish Cohort paper considered phone users to be those who used cell phones in 1982-1995. How common was cell phone usage in Europe during those years?

Henrik Eiriksson

February 28, 2013

@Biron,

Did you bother to read past the abstract? If you dig down into the study it says:

"In conclusion, in this update of a nationwide study of mobile phone subscribers in Denmark we found no indication of an increased risk of tumours of the central nervous system. The extended follow-up allowed us to investigate effects in people who had used mobile phones for 10 years or more, and this long term use was not associated with higher risks of cancer. Furthermore, we found no increased risk in temporal glioma, which would be the most plausible tumour location if mobile phone use was a risk."

The original Danish Cohort study was funded by TeleDanmark Mobil and Sonofon - both Danish telecoms.

The original study says this about funding:

"Supported by grants from the two Danish operating companies (TeleDanmarkMobil and Sonofon); by the International Epidemiology Institute, Rockville, MD; and by the Danish Cancer Society."

All of the above are private companies. The International Epidemiology Institute is run by the two American co-authors of the original Danish Cohort (Boice & McLaughlin). They have been linked to Motorola and also, according to Dr. George Carlo, head of the Wireless Technology Research, they tried to pitch a study design similar to the Danish Cohort's to the WTR, with emphasis on the probability that the study would not find risk increases. They were turned down by the WTR and surfaced later as co-authors of the Danish Cohort study, based on a similar design.

The Danish Strategic Research Council took over the funding for the follow-up studies but the data set was collected on private and corporate funding.

DeWolfe Miller

February 28, 2013

 

I am a Fellow of the American College of Epidemiology and have been a professor of epidemiology for almost 20 years. I personally tend not to critique studies published in radiation and nuclear safety; I am not a physicist. The bane of my professional existence is that everybody, and I do mean everybody, thinks they are an epidemiologist.

Several of the comments above pointed out that the publication was clear about its limitations. I think it was. As far as any comment or knowledge indicates otherwise, there was no scientific misconduct (maybe I am wrong about this). This paper was published as a research paper and peer reviewed. It is one piece of evidence, but as such it would not be the only study on which to base public policy. You read the paper and have issues. Now you have the opportunity to do your own study; by all means proceed. Please do not address inquiries for resources, funding, or collaboration to my email.

I think your concerns otherwise seem misdirected. Redirect them at the peer-review process and how it can be made better, and I, and I think many others, will join you.

Henrik Eiriksson

March 1, 2013

@Dewolfe Miller,

Sir, I warmly suggest you read the paper. It's totally fascinating, e.g. how an epidemiology study can be done without any actual exposure metric. But don't worry, it doesn't stop there. You'll be up all night :-)

Biron

March 1, 2013

@Henrik:

How would you compare this study to something like those of Horst Eger?

T. Andersen

March 2, 2013

Thank you for raising this very important subject, Dariusz Leszczynski.

I wonder why you are not supported by a large number of genuine scientists who also question the publication of this scientifically, totally corrupted study from the Danish Cancer Society.

The study is so flawed (scientifically completely worthless) that it must have been designed to mislead the public and health authorities.

At the same time it is a disgrace that anybody defends this so-called "study".

Real scientists: Act now!

 

Dariusz L.

March 3, 2013

@ Björn Hammarskjöld

The statistics that you refer to are not sufficient to prove a lack of effect.

1. The latency time for brain cancer is well over 10 years. That means that if there is an effect, it has not had enough time to show in the statistics.

2. Epidemiology is likely not sensitive enough to ever prove an effect for such a rare disease.

Dariusz L.

March 3, 2013

@kenw

You said:

"Possible misclassification of corporate users was one of them! So there was no "failure to recognize" an error/limitation in this case. Perhaps you should calm down a bit."

I think that this is the reason NOT to calm down.

When the authors know about the limitation of their study but still publish their conclusions as if this limitation did not exist, that, to me, borders on misconduct.

One could compare it to the following situation, overblown to make my point: it is as if a criminal admitted to committing a crime and the court said he was free to go because he did not hide the crime and admitted it.

Dariusz L.

March 3, 2013

@Biron

Clear indication of a study's limitations, followed by conclusions stated as if these limitations did not exist, is a scientific scandal.

Interestingly, whenever I criticize the so-called "positive" studies, you support me and I get praise in your comments. When I criticize the so-called "negative" studies, you attack me.

Do you see a "pattern"?

Are you biased, Biron? It seems to me that you are...

Dariusz L.

March 3, 2013

@DeWolfe Miller

Yet again, I repeat that presenting a study's limitations and then drawing conclusions as if these limitations did not exist borders on misconduct. The authors either knew that their data do not support the conclusions or were simply incompetent.

Peer review is to blame, and the editors of the BMJ too. The authors submitted a bad paper, but the reviewers and editors accepted it. That means that all of them share the responsibility.

Dariusz L.

March 3, 2013

@T.Andersen

Thanks. Scientists should act now. Why do they not act? I have not the slightest idea. In fact, I am being "punished" by some fellow scientists for bringing this issue up in The Scientist. It is as if I were doing something wrong, not the authors of the Danish Cohort. But this is the traditional treatment of "whistle-blowers"...

Biron

March 3, 2013

Professor:

I am concerned that you have lost objectivity.  Here is one example:

"Statistics that you refer to are not sufficient to prove lack of effect"

As a scientist you know that it is impossible to "prove" a lack of "effect."  The demand for negative proof has heretofore been limited to junk-science websites; now you bring it into this "science" blog.

Furthermore, I have not complained about your analysis of the Danish Cohort study.  Your claims of scandal, however, are out of line.  A scandal might be when someone claims they performed a blinded study when in fact the lab technician's notebook contains the blinding codes.

You also stated that 12 scientists posted complaints.  Several are anti-wireless advocates.  Dr. Devra Davis posted a complaint about the lack of a declaration of conflict of interest.  Note that she is promoting a book full of conspiracy theories, and yet claims no conflict!

Dariusz L.

March 3, 2013

@Biron

You are stonewalling and spinning.

The question in the Danish Cohort is not whether it can prove a negative.

The question is that the design of the study and the so-called "exposures" are pure RUBBISH and could not prove anything, whether positive or negative.

In the Danish Cohort there are contaminated controls, and exposure groups consisting of a mixture of highly and little-exposed persons. This is BAD SCIENCE.

This design is a travesty of science. And you will not convince me otherwise.

 

Kamal Mahawar

March 3, 2013

Congratulations, Dariusz, on this really well written article. I would just like to add a couple of minor points to this beautifully written piece. First of all, why would any scientist want to publish fraudulent science and risk their reputation on it? Surely nobody else would ever be able to replicate "false" science, and either their paper will go into oblivion or bring them a bad reputation.

I feel this happens because there is too much pressure on scientists to publish: too much emphasis on publications and not enough on simply doing good science. Publications used to be a means to an end; they have perhaps now become the end. I feel this will continue for as long as the scientific community continues to regard "peer reviewed" science as gospel truth and all the rest as utter "rubbish". Neither of the two is true. Peer review cannot verify the authenticity of manuscripts, as reviewers rarely have access to any raw data. All the reviewers can do is make sure that a manuscript is not self-contradictory and is written in "proper" English. And then it brings all sorts of publication biases into the peer-reviewed literature. Good scientists do not need the seal of approval of their peers. On the other hand, one could argue, greatness will have no peer, and if peers knew about the discovery they would have made it themselves. If publication were simply an exercise in communication, why would any good scientist ever publish junk in their name? The problem is, it is not. Publication is more than that. It is a surrogate marker of the academic proficiency of individuals and of the institutes they work in. An elaborate network of journals of varying hierarchy and impact factors has created a system which is neither efficient, nor fair, nor cost-effective, but it is so deeply entrenched in our academic institutions that it would be difficult to get rid of.

If publication were simply an exercise in scientific communication, journals like WebmedCentral have the solution for you: publish everything with all the raw data first, peer review it later, and decide an individual article's merit using a series of article-level metrics. This will, however, mean getting rid of a mindset that we have had for decades now.

Kind regards,

Kamal Mahawar

T. Andersen

March 3, 2013

@ Kamal wrote:

" why would any scientist like to publish fraudulent science and risk their reputation on it?"

The fact is that it is not a rare phenomenon for junk science (to put it politely) to be published, not least by the Danish Cancer Society.

But they don't stop there. They categorically reject recognized research showing effects caused by RF radiation, and they actively undermine the WHO's categorization of RF radiation as a possible carcinogen. Again and again, they have announced that the probability of this is very small. And they have no sound arguments whatsoever for this claim.

Strange, isn't it?

Biron

March 3, 2013

@ T. Anderson

How is something fraudulent that discloses its shortcomings?

The junk science resides overwhelmingly on the anti-wireless activist sites, which generally contain nothing but junk.

The consensus among scientists is that the risk of radiation from cell phones is negligible.  Scientists and physicians use them enthusiastically.

Given the prominent role of people like Sage, Johansson, Davis, Carlo and the exceedingly bad studies we see on sites like Mast-Victims and EMFacts it is entirely appropriate that the movement has been marginalized.

My hope remains that the professor will provide the neutral oversight to move this issue forward but recent columns favor those who distort science.

T. Andersen

March 4, 2013

Concerning @ Biron

Don't feed the troll

Henrik Eiriksson

March 4, 2013

@Biron

I'm all out of Troll-mix, sorry, but if you think Mast-Victims.org features exclusively bad science then why don't you pop over to our open forum and put us all right? You don't even have to register, just put in your secret codename "Biron", complete the "prove you're human" test (c'mon, you can do it) and post.

Kraig

March 18, 2013

This opinion piece is naive. The writer seems to want a fool-proof substitute for his own critical thinking ability. Perhaps some folks need assistance, but most first-rate scientists can think for themselves. My advice is to stop whining and use your brain. If you don't agree with the conclusions in a paper, tell us. But don't criticize the work of reviewers and editors for failing to meet your standards of flawlessness. Shame on you.

JSnyder54

March 18, 2013

 

The rational-discussion-by-blog process is in crisis, though I question whether this process has ever functioned. Bloggers and internet optimists have proposed that this process could crowd-source insights, increasing social media's ability to cast a wide net to capture human creativity.

My point here is not to critique the opinion expressed by Leszczynski about "peer review in crisis", though I think he makes several valid and telling points about the failings of peer review to assure publication quality.

However, the comment string indicates that blog-and-response as a productive process generates substantial heat but no real light. Most of the responses are focused on the obviously scientifically flawed Danish Cohort study. This study was offered as an example of flaws in the peer-review process. I would guess, given the author's professional affiliation, that he used this study as an example because he was intimately familiar with it. He could have chosen any of thousands of retracted publications to illustrate this point, but he chose this one. Refuting the specifics of the Danish Cohort doesn't even respond to the initial opinion piece.

Then there are responses along the lines of "how could you be so dumb as to need a peer review to validate/invalidate a paper, can't you think on your own?" This misses the point by an even wider margin. Peer review is meant to spot errors in the data or logic of a paper bound for publication. Malfeasance or misrepresentation of lab data will not generally be caught by this process, though experts familiar with a topic might notice a pattern of a particular lab generating "surprising" data. I don't think any of us reads every paper with all of our BS detectors on full time. I at least expect published articles to have cleared a fairly exhaustive set of hurdles.

Scientific papers are simply hyper-formalized discussions, communications between in-group members with the intention of telling a story to a slightly larger circle of listeners. Given this, the peer-review process should work, since it is meant to be an evaluation of the "likelihood" of a story surviving independent evaluation. The same reason that exceptional writers still use editors, since they are not always able to spot flaws in their own writing, should encourage scientists to seek out expert review. Unlike writers, scientists cannot employ simple editorial expertise, since such an individual has a vested interest in improving the original story. They need to go to independent review, though in the ecology of limited research dollars few similarly specialized researchers will be truly "independent".

Lest this comment be as fully off-content as most of the other comments, I would finish with an encouragement to Leszczynski to follow up with a deeper analysis of the statement "there is a crisis in peer review". Simply pointing to a flawed system is of little use without suggestions for improvement. (I don't imagine he thinks that peer review should be abandoned altogether.) Perhaps peer reviewers could be compensated for their time, but simultaneously be held accountable for missing content or logic flaws that lead to a paper's retraction. If a paper is retracted because of fraud there is little that peer review could do to catch it, but poor reasoning or disconnected data should be as much the peer reviewer's fault for passing it on as the authors' for posing it in the first place.

Merritt Clifton

March 18, 2013

The larger issue here is the frequent failure of peer review to incorporate basic fact-checking, including by tracing footnotes. As an investigative reporter, I have within the past 12 months discovered multiple instances of peer-reviewed published scientific papers citing as "recent" an estimate of global human rabies mortality originally derived from research done in India by William Harvey in 1911. The Harvey findings have been restated many times since in a variety of ways which obscured the reality that the Harvey study was the point of origin of the data, and that Harvey did the study before the introduction to India of easily accessible post-exposure vaccination. More recently, tracing the origin of a much-publicized claim published in Nature Communications that there are 80 million feral cats in the U.S. led back to an opinion paper published in Nature in 1934, which was soon rebutted. The total cat population in the U.S. at the time, pet and feral, was found by National Family Opinion Survey founders Howard & Clara Trumbull to be about 35 million. (The total cat population in the U.S. today is about 74 million pet cats and nine million ferals.) Earlier in my career, I memorably discovered an instance of a study which claimed that asbestos was not getting into the water supply of the city of St. Jean, Quebec, because someone had misidentified the direction of flow of the Richelieu River as north to south. In truth, the Richelieu runs north from Lake Champlain to the St. Lawrence River. Journalists of course goof too, but reputable news media constantly publish corrections to try to make sure the public record is as accurate as possible. In all three of the cases described above, egregious mistakes remain "in the literature," continuing to be repeated and restated.
