
Open access brings more citations

Analysis of PNAS articles suggests that open access papers are cited more heavily than subscription-based articles

By Jeffrey M. Perkel | May 16, 2006

Open access papers are cited more frequently than subscription-based articles, according to a study published this week in PLoS Biology, an open access journal. However, these findings alone may not persuade more authors to consider open access publishing, experts said.

This report "tends to confirm what many people suspected would be the case," said Kenneth R. Fulton, executive director of the National Academy of Sciences and publisher of the Proceedings of the National Academy of Sciences. But how widely applicable these findings are, and whether they will induce authors to consider open access publishing, is unknown, he added. For instance, a survey released last week by the Publishing Research Consortium suggests that access to articles is not a major problem for researchers. Specifically, the survey found that scientists ranked greater access to the literature 12th out of 16 on a list of possible ways to improve research productivity.

Previous research has examined the influence of open access on citation impact. The latest study focused on papers in PNAS, which began offering a per-article open access publishing option (costing $750 to $1,000 per article) on June 8, 2004. Study author Gunther Eysenbach, publisher and editor of the open access Journal of Medical Internet Research, restricted his analysis to 1,492 articles published between June and December 2004, 212 of which were open access. He measured raw citation data zero to six months, four to 10 months, and again 10 to 16 months after publication.

Open access articles were cited more heavily both four to 10 and 10 to 16 months after publication, both when Eysenbach considered only raw data and when he also accounted for such potential "confounders" as number of authors, past productivity, country of the corresponding author, and submission track.
For instance, after adjusting for confounders, open access articles were almost three times more likely than non-open access articles to be cited at least once 10 to 16 months after publication. Self-archiving, in which authors post a paper for free on the Internet, also appeared to increase citations. There was "a clear relationship between the level of openness and the citation levels," Eysenbach writes.

Fulton said PNAS has experienced a low but relatively steady percentage of authors choosing open access publishing. "Once this article comes out, perhaps we'll get a spike in open access submissions," he said. "It's really hard to tell."

One author who might switch is Steven Gross, a researcher at the University of California, Irvine, who published a non-open access paper about intracellular actin-based transport in PNAS in September 2004. "I would be more inclined to publish open access" in light of these results, he said in an email.

"My feeling about the paper is that it's welcome and a step forward," David Hoole, head of brand marketing and content licensing at Nature Publishing Group (NPG), told The Scientist. But Hoole said he doubted the paper will motivate many authors to seek out open access, because when it comes to deciding where to publish, authors are more influenced by conditions imposed by their grants than by the possibility of more citations. Nature currently has no plans to switch to an open access model, "but we are always considering our options," Hoole said. Some NPG journals offer open access, or a hybrid open access/fee-based approach akin to that at PNAS.

Hoole did note one potential flaw in the study: to sidestep the potential confounder that open access articles were generally more important than non-open access papers, Eysenbach asked authors to self-rate the relative urgency, importance, and quality of their work. "Self-rating in these circumstances seems a weak methodology on which to base important claims," he said.
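The "almost three times more likely" figure is an adjusted odds ratio from Eysenbach's regression analysis. For readers unfamiliar with the measure, a minimal sketch of an unadjusted odds ratio is below; the 212/1,492 split of open access versus total articles comes from the article, but the citation counts are invented purely for illustration and are not the study's actual data:

```python
# Hypothetical 2x2 table: "cited at least once" within a window, by access type.
# Counts are made up for illustration -- only the 212 OA / 1,492 total split
# reflects the article; 1,280 = 1,492 - 212.
oa_cited, oa_total = 180, 212        # open access articles
non_cited, non_total = 820, 1280     # non-open access articles

def odds(cited: int, total: int) -> float:
    """Odds of being cited at least once: p / (1 - p)."""
    p = cited / total
    return p / (1 - p)

# Unadjusted odds ratio: how much higher the odds of citation are for OA papers.
# (The study's ~3x figure was additionally adjusted for confounders such as
# number of authors and submission track, which requires a regression model.)
odds_ratio = odds(oa_cited, oa_total) / odds(non_cited, non_total)
print(round(odds_ratio, 2))  # prints 3.16
```

An adjusted odds ratio, as in the study, would come from fitting a logistic regression with access type plus the confounders as predictors and exponentiating the access-type coefficient; the raw ratio above is the simplest special case with no covariates.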
Eysenbach said he tried to gain access to peer reviewers' assessments, but the journal would not provide them, citing confidentiality concerns. Thus, "the only option is really to ask the authors."

Eysenbach told The Scientist he plans to continue monitoring his dataset at six-month intervals, noting that his latest data suggest the open access advantage continues to widen at 16 to 22 months post-publication.

Jeffrey M. Perkel
jperkel@the-scientist.com

Links within this article:

  • S. Pincock, "Will open access work?" The Scientist, October 11, 2005. http://www.the-scientist.com/news/20051011/02/
  • G. Eysenbach, "Citation advantage of open access articles," PLoS Biology, May 16, 2006. http://www.plosbiology.org
  • Proceedings of the National Academy of Sciences. http://www.pnas.org
  • I. Rowlands and R. Olivieri, "Overcoming the barriers to research productivity: A case study in immunology and microbiology," Publishing Research Consortium. http://www.publishingresearch.org.uk/prcweb/PRCWeb.nsf/0/3DF67165EB1FCA078025716B0048ADB8!opendocument
  • "Effect of open access on citation impact: A bibliography of studies," Open Citation Project. http://opcit.eprints.org/oacitation-biblio.html
  • Gunther Eysenbach. http://yi.com/home/EysenbachGunther/
  • The Journal of Medical Internet Research. http://www.jmir.org
  • J. Snider et al., "Intracellular actin-based transport: How far you go depends on how often you switch," PNAS, September 7, 2004. PMID: 15331778
  • Nature Publishing Group. http://www.nature.com/index.html
  • T. Agres, "Publishers, societies oppose 'public access' bill," The Scientist, May 11, 2006. http://www.the-scientist.com/news/display/23426/
  • G. Eysenbach, "The open access advantage," Journal of Medical Internet Research, May 15, 2006. http://www.jmir.org/2006/2/e8/

Comments

May 16, 2006

I find the word "flaw" - used at the end of this article - a bit too strong; "limitation" would be a fairer expression. It is certainly a _limitation_ of this study that there was no objective third party judging the quality of each article - had I had such an objective quality rating, I could have controlled for it as well. However, there is no such easily obtainable "third-party, objective" measure for quality - which is exactly why people use citations as a proxy for quality (I used and adjusted for data such as the authors' past citation history as a proxy for article quality).

The only way to address this issue would have been to give all 1,492 papers to 2-3 experts and let them rate the quality, and then adjust for these quality ratings (due to high inter-rater variability, it may even be that more raters are needed to come to a "reliable" conclusion on one article's importance). If somebody gives me half a million dollars to pay each of these experts $100, I am happy to add this analysis. In the absence of funding I did what was feasible, namely adjusting for quality proxies (funding source, authors' past citation history, etc.).

I also asked authors to self-rate their papers to show that there was no difference between the groups. While authors are naturally biased and may overrate the importance of their study, I think nobody is more qualified to judge the potential impact of a paper than its authors, and the important part is that there was no difference between the groups in self-rated "quality."
