Reading Into the Future

Will traditional scientific journals follow newspapers into oblivion?

By Richard Smith | April 1, 2012

“Newspapers are going to hell in a handbasket” after 20 quarters of declining ad revenue in the U.S., agree both Dean Starkman, a media critic who defends the best of traditional newspapers, and Clay Shirky, a guru of the digital age who looks forward to a future beyond newspapers. Otherwise, they disagree about most things, and as I read their fierce debate (Starkman, in the Columbia Journalism Review; Shirky, on his blog), I wonder what it might mean for the future of scientific journals.

Unlike newspapers, scientific journals are not facing the economic collapse that forces change. It’s hard to get financial data on individual journals, but Elsevier, the world’s largest publisher of scientific journals, has seen broadly stable revenues (€2,236 million in 2006, €2,370 million in 2010) but growing profits (€683 million in 2006, €847 million in 2010). Scientific journals remain very profitable. Few industries manage a profit margin of 35.7% (that for Elsevier in 2010), but then few industries are given their raw material—in this case, scientific studies—not only for free, but also in a form that needs minimal processing.

Are the economics likely to change? Probably not in the short term. Despite calls for boycotts, scientists continue to send their studies to familiar journals; and, although open-access publishing is flourishing, with more new journals and articles, it isn’t denting the incomes of traditional publishers. Indeed, some traditional publishers—like Springer—may well be boosting their income and profits by adding some open-access publishing to their own stable. And scientific libraries, the prime customers of scientific publishers, despite grumbling at price increases, continue to be obliged to buy traditional journals.

Although scientific publishers are not facing the same economic pressures as newspaper publishers, other pressures may oblige them to change. Shirky argues that newspapers limit choices: they are the few telling the many what is news and what they should think. Now, with the arrival of the Internet and social media, anybody can be a journalist and a publisher.

What Starkman cares about is not so much the technology of newspapers, but rather, strong stories—thorough, well-researched, accurate reports that bring down governments and change how we think. He is unclear where such stories will come from when newspapers are bankrupted.

There are similar worries within science, which is as elitist an institution as acting, painting, or creative writing. We have our stars, and they publish in the top journals—like Science, Nature, Cell, and The Lancet. Journals are our sorting mechanism. Millions of studies are published every year, and we cope with this torrent of information by hoping that the most important papers are published in the top journals. If we read the top journals we will know what’s important.

Sadly, this is an illusion. Although science has its stars, the proletarians are important, often publishing studies that show that the studies from the stars have misled us. Indeed, because the top journals skim off the sexy, exciting, and new, we are systematically misled if we read only those journals. Economists call this the “winner’s curse,” whereby the companies that win contracts have often overbid. John Ioannidis of Stanford and others have proposed that the curse operates within scientific publishing, and we now have data to support their proposition.

A study of the 49 most highly cited papers on medical interventions published in high-profile journals from 1990 to 2004 found that by 2005, a quarter of the randomized trials and five of six nonrandomized studies had been contradicted or found to be exaggerated. A second study, published in 2011, looked at original studies of biomarkers with 400 or more citations from 24 highly cited journals. These biomarker studies were compared with subsequent meta-analyses that evaluated the same biomarkers, and of the 35 highly cited original studies, 29 showed an effect size larger than that in the meta-analyses.

Shirky argues that in the case of news it is time to move from “filter then publish” to “publish then filter.” Such a world would also be healthier for science, and less deceiving. Instead of trying to understand the world by reading top journals, we should concentrate on systematic reviews and meta-analyses that combine each new study with what already exists and show us clearly how the evidence is changing. At the moment it is hard to conduct these systematic reviews because studies are scattered in a close-to-random way through thousands of journals, many of them inaccessible.

With the appearance of open-access megajournals or databases like PLoS ONE, scientific publishing has begun to move in the direction favored by Shirky. These “journals” have peer-review systems that don’t attempt to judge what is new and important, but simply whether the conclusions are supported by the data. If the conclusions are appropriately tentative, as they should be, then almost anything can be published. The success of PLoS ONE, which is now publishing around a thousand studies a month, is attested to by how quickly and widely it has been copied. These megajournals could be game changers that will eventually bring down the empire of traditional journals.

The other reason that journals exist is for quality assurance, and the system for achieving this is prepublication peer review, a classic mechanism of “filter then publish.” I’ve been arguing for years, probably to the point of tedium, that prepublication peer review is ineffective, hopeless at spotting error and fraud, expensive, slow, largely a lottery, and prone to bias and abuse. The “real peer review” is the process that takes place after publication when through “the market of ideas”—conversations, e-mails, reviews—a study is awarded the status it deserves and slotted into existing knowledge. Much can be gained and little lost by abandoning prepublication peer review, and we are seeing more and more experiments along these lines.

Shirky and others have pointed out that newspapers are a comparatively recent invention: the first mass-circulation newspaper, The Sun, appeared in New York in 1833. Those who have spent their lives working in newspapers understandably see them as more permanent than they will probably prove to be. As The Economist has argued, “The mass media era now looks like a relatively brief and anomalous period. . . . The internet has disrupted this model and enabled the social aspect of media to reassert itself. . . . Blogs, Facebook and Twitter may seem entirely new, but they echo the ways in which people used to collect, share and exchange information in the past.”

Although they began in the 17th century, scientific journals may also prove to be transitory. Before journals appeared, scientists presented their studies at meetings, and the audience would discuss their value. Because of the World Wide Web, such meetings can now be held on a global scale, and offer a return to the original peer review: immediate and open.

Scientific journals are not yet going “to hell in a handbasket,” but, as Shirky observes of newspapers, “Some of the experiments going on today, small and tentative as they are, will eventually harden into institutional form, and that development will be as surprising as the penny press subsidizing journalism for seven generations.” We don’t lack for experiments in scientific publishing, and some have already “hardened into an institutional form.” But the old journals are still there—and in a form not so dissimilar to how they were 50 years ago. Can this continue for another 50 years? I doubt it.

Richard Smith is a former member of the board of the Public Library of Science and a former editor of the BMJ and chief executive of the BMJ Publishing Group. 

Comments

olmstedhomested | April 11, 2012

Please! With all the need for scientific inquiry, is this really a question that requires the waste of scientific study?

David Hill | April 11, 2012

I think that this is a well-written and thoughtful article. Yes, the question of scientific publishing is important, as we need to communicate well. Many scientists like to communicate frequently with their peers (their real peers who understand the work, not the 'peers' who review for journals). As stated in this article, the real peer review begins when you let your peers read your work, not when you submit it to a journal. In the future, I expect to see a meaningful blog of 'mini-reviews' attached to any published scientific paper, as well as subsequent author revisions of that paper as appropriate. We now have the technology to make the whole process of communication work much better. But this will not be fostered by 'for profit, closed access' publications that make their money by limiting access, thus drawing down the audience. The author is also right about the fact that the 'big' publications look for headline-type publications that are (very) often misleading. Many of these publications start with a hypothesis in the introduction, then claim this to be a 'known fact' in the conclusions. Bad science, but the 'aftermarket' that puts out summaries of these papers for the public often sends out these 'known facts' to further misinform the public. Part of the problem is that the 'review' process is broken, as the reviewers know little or nothing of the subject at hand, or they are hopelessly biased relative to the subject. Open access will bring a wider, discerning audience to the party. Open review will bring a wider, discerning audience to play an important role in the review process.

Mike Noren | April 11, 2012

Open review works well in fields like mathematics, but what happens in fields where there are heavy financial incentives, like medicine, or where findings have political implications, such as in atmospheric research? How do you prevent well-funded and highly motivated groups from corrupting the process?

EllenHunt | April 11, 2012

I must severely disagree.  In open-access publishing, the vast majority of papers are rarely read and rarely cited.  Review becomes of the utmost importance because it is the only time when anyone is going to seriously dig through a paper to see if it makes sense or is worth publishing. 

I will also note "how soon they forget" the study that found the grossest level of plagiarism in thousands of papers. 

Good review policy in open-access publishing is absolutely necessary and should be strengthened.

Mike Noren | April 11, 2012

It's obvious to any scientist who knows their field that the top journals publish anything that is spectacular, and as any scientist knows, "spectacular" results are usually in error. Luckily scientists, unlike science journalists, don't just read Science, Nature, or The Lancet.

In science the journals themselves are already irrelevant. I don't go to "Molecular Biology and Evolution" and browse their back issues for information; I do a Google Scholar search on the subject I'm interested in and track down the articles. If all journals disappeared tomorrow and articles were published directly into a database, that would only make my work easier.

And I hope online open publishing replaces the journals; the way publishing companies profiteer on the (often taxpayer-funded) work of scientists is offensive.

That said, I don't agree that peer review is useless. Anyone who has reviewed articles knows that there are errors in almost every manuscript when first submitted, and that a significant number of manuscripts are plagiarized or so flawed they can't be saved. Since one can't simply ignore published articles on a subject, removing the "pre-filter" of review would mean an even greater torrent of flawed studies which one would have to take into account and cite.

Geraint Rees | April 12, 2012

When I became Associate Editor of a (moderately high impact) journal, what surprised me the most was the difficulty of obtaining peer review. By which I don't mean knowing who to invite (or writing effective reviews; that's a different story!). The problem was that many people, often the best people in the field to review a paper, refuse because they are too busy. This problem has become much worse over the last few years as submission rates rise and rise. 

This is important for all advocates of post-publication peer review, because all current methods of post-publication peer review propose a significant *increase* of activity over the current standard. I do not think this is realistic without some kind of incentive model that would encourage people to spend large amounts of time reading and commenting in public on areas of their expertise. Of course, this blog post shows that some people will do this. But will a post-publication peer review system work? By this I mean work in terms of people actually engaging.

The well-known Nature trial of open peer review (http://www.nature.com/nature/p...) posted 71 articles for open comment. But 33 of these - almost half - received not a single comment. Comments that were posted were scanty and unremarkable. Despite this, there was huge interest in the trial and significant web traffic. So many people viewed the open peer review system, and lots of people liked the concept, but no one actually participated.

This is surely the nub of the problem for any system of post-publication peer review. While I personally am sympathetic to the issues Richard raises, I'd like to see some realistic thinking (and proposals) about how the system would actually work. If it won't work, then it doesn't really matter what we think about post-publication peer review because it's not feasible. 

A | April 16, 2012

The whole idea of journals is selecting the 'best' information. The need for that is growing fast with the data explosion that's going on. 

The article points a finger at the evident malfunctioning of the present system, which stands on 19th-century roots. But it doesn't have a clue what might come next. The only thing that's clear is that the 'digital age gurus' are very probably wrong.

forestview | April 18, 2012

Richard Smith is correct that pre-publication peer review is an unnecessary, and I would argue very harmful, filter. Post-publication peer review should, in the long run, sort out what is true/correct, although in the short term it may be subject to some of the same vagaries as pre-publication review. Peer review, particularly pre-publication, if left to the unscrupulous or ill-informed anonymous reviewer, as is often the case, invites all the Darwinian traits of unfettered capitalism. It fosters survival of the fittest: not the 'fittest' science, but the fittest networkers who wish to push forward their dominance. Networking's primary unspoken goal is to bypass the filter of peer review that exists for those not in the networked 'club'. In private, one will hear comments from those in the club such as, "I must keep flying off to conferences or I will lose my stature in the field," and "if the data is published in a low impact journal it is really considered as not published." Those in the club essentially have ownership of the higher-impact journals, routinely publishing their most mundane findings in these journals. Scientific journals are like record labels, with the 'Britney Spears effect' frequently taking hold: those with flash (popularity) but little substance win out.

As Richard Smith points out, the literature is full of studies published in lesser-read journals that correct the conclusions made by high-profile studies. Since this is usually lesser-known researchers correcting the more high-profile researchers (i.e., if they were not lesser known, their correction would be in a high-profile journal), the result is an unspoken acceptance of the correction. There can even seem to be a concerted effort to ignore (not cite) the correction, often with the incorrect high-profile study continuing to be cited, maintaining the established hierarchy of scientists in a manner independent of the quality of science. The increasingly capitalistic nature of biomedical science has made our field one where plagiarism is standard practice (it can even help one get into the Academy in some cases) and the academic currency of novelty of ideas is not valued. The 'elite' often argue that it's okay [the system] as the truth eventually wins out. While the truth may win out eventually, many a scientific career is lost (some of the best in a field) while the decades-long wait for the truth to be accepted plays out. How can we, with a straight face, encourage the next generation to go into such a field? It amazes my non-scientist friends that we scientists would allow science to be run in a manner little different than a mafia system.

So what is the solution? Post-publication peer review will take some time to catch on. If one looks at online journals where commenting is possible, there are almost no comments. Does this indicate that, unlike the bloggers in other areas, scientists have no opinions, that we are a group with little to say? No, it indicates that we are afraid of our mafia system; we cannot give our opinions freely and openly for fear of the power of the club to punish us (in peer review); hence this anonymous post. Post-publication peer review, when combined with a freely accessible regulatory body that holds some clout, might be one solution, but it will be a very slow transition, as those who have been served well by the mafia system will resist.

forsdyke | April 19, 2012

I started Bionet.journals.note in the 1990s in the hope that it might catalyze thoughts as so nicely expressed by Richard Smith. Yes, agree,... . Yes, agree, ... . Yes, agree, ... . But perhaps there is something missing. Perhaps journal peer review has failed us, because the so-called "peers" have failed us. And the latter have failed us, because the funding agency peer review system, that allowed them to attain positions where they could so decisively influence what we read, failed us. How come?
The title of another article in the same issue of The Scientist says it all - "Shopping Your Science" - with the subtitle - "A dose of market training may help you win grants ...". As I spelled out both in my book "Tomorrow's Cures Today?" (2000) and to Canada's Parliamentary Standing Committee on Science and Technology (2001): "The first rule of writing grant applications is not to be creative. As anyone can learn by reading accounts of great discoveries in the past, novel ideas are often difficult to articulate and difficult to understand. To put an original idea on a grant application is akin to professional suicide. People suffering the affliction of originality must either bring this deviant trait to order, or get out of scientific research."

Donald Forsdyke, Kingston, Canada

forestview | April 19, 2012

Yes, agree, but it is more than just creativity that is punished; it is actually depth of thinking on a subject that is punished in peer review. The formula for success is lots of costly, high-tech, simple-to-understand description (no thinking required). The death knell comes if you try to synthesize all this description into a concept/model, particularly if the model does not buy into the (mis)interpretation of one of the 'major players'.
