The world of scientific publishing has been buzzing in the past few months, with many leading publishers working on new initiatives. Some have been exploring alternatives to traditional publishing processes, others rethinking standard business models. This surge in inventiveness has been fuelled by widespread and growing discontent with the limitations of conventional journal publishing.
The Research Works Act, which made the rounds of the US Congress earlier this year, brought the question of access to the fore and motivated scientists to become activists in support of open-access publishing. Many universities (for example, Harvard University and the University of California, San Francisco) have strongly urged their researchers towards open access, and, in the first nationwide push in the same direction, the British Government announced a couple of weeks ago that all publicly funded research will be published open access by 2014.
There has been a rising tide of blog posts, seminars, and workshops discussing the problems of the peer review system, with numerous proposals being floated for how to fix it, and also much discussion about the need for more openness and transparency, particularly with respect to the data behind research findings. (See "Bring on the Transparency Index," The Scientist, August 2012.) In many areas, we have seen a move towards self-publishing, for example on preprint servers like arXiv and F1000 Posters.
Concerns about access and peer review have coincided with discussions about new initiatives for measuring researcher outputs, assessing their impact, and evaluating the "invisible" anonymous work that researchers perform for the scientific community (such as refereeing articles and sitting on grant committees) that takes up significant time but for which they are not formally recognized.
These discussions have pushed publishers to explore alternatives to traditional publishing (such as data journals like GigaScience and data repositories like figshare and Dryad), and to rethink standard business models (for example PeerJ and eLife). (See "Whither Science Publishing," The Scientist, August 2012.) We at F1000 are also pushing to change the way scientific research is published, by implementing a completely open peer review process at our new publishing program, F1000 Research.
At F1000, we believe that if everything is out in the open, then biases will lose their power and errors will quickly be addressed and discussed. Furthermore, the contributions of referees, whose role in improving published science is vital, can be publicly acknowledged and formally recognized as important and valuable outputs.
The F1000 Research publication model works as follows: new submissions go through a rapid internal pre-publication check and are then published immediately, clearly labeled "Awaiting Peer Review." Expert referees are then invited to review the submissions and are asked to do two things. First, within a matter of days, they assign a status of "Approved" (i.e., seems OK), "Not Approved" (i.e., does not seem OK), or "Approved with Reservations" (i.e., a sensible contribution, but the referee has strong reservations about one or more key aspects of it, such as the methods used or the conclusions drawn). The paper's status is prominently displayed along with the referee's name. Second, referees are asked to write a more standard referee report that they sign and publish alongside the article (this is optional if the status provided in the first step was "Approved"). Authors are then encouraged to revise their articles in response to the referees' comments, and each article version is separately accessible and citable.
Other journals have implemented variations on the open peer review system just described. (See "I Hate Your Paper," The Scientist, August 2010.) Biology Direct, for example, often publishes reviewer comments alongside accepted articles, but these reports are different from the ones prepared for the authors, and initial referee recommendations on whether to review the article at all are kept private. BMC also uses open peer review in some of its medical journals, but referee reports are only made public for articles accepted and published by the journals. And the BMJ journals practice optional open peer review, in which referees may sign their reports if they wish.
All of these models are quite different from ours: they are selectively open, structured to avoid the risk of hurt feelings or damaged reputations, while we publish first and make referees' decisions and reviews (positive, negative, or undecided) entirely transparent.
Given this novel approach to publishing in the life sciences, the first question you might ask is whether or not it is working. It's too early to draw firm conclusions, as we only soft-launched in mid-July, and we plan to spend the next few months fine-tuning our model and our processes before formally launching on a full platform at the end of 2012. However, the publication of our first articles a few weeks ago has already allowed us to validate many of the core principles of what we're trying to achieve, which is very encouraging. Referees are agreeing to conduct F1000 Research-style open reviews at an unexpectedly high rate, and in a very timely manner: several articles have been reviewed by at least two referees within 48 hours of publication.
We are also seeing useful comments on the articles from interested scientists, and debates are opening up between authors and referees about issues such as competing interests. We are watching these debates closely, as navigating and interpreting disclosures can be tricky: in an increasingly siloed world of specialized science, it seems reasonable to expect that a referee who is qualified to comment may, or perhaps should, have what many would judge to be relevant disclosures to make. Because all our referees are named, this question and the concerns arising from it can be addressed openly.
Many had predicted that our "publish first, peer review later" model would attract substandard work and that we would be inundated with poor-quality articles, but we are very pleased to see that this has not been the case thus far. The fact that our referees are clearly quite happy to criticize openly where they have concerns supports our hunch that scientists will think twice before submitting sloppy articles: any criticism of their work will be public and permanently linked to the paper by way of the article citation.
It is exciting to watch F1000 Research develop, even at this very early stage, and it will be interesting to see how these first articles and exchanges influence the openness and transparency of future authors and referees. We also look forward to feedback from the scientific community on other new publishing ventures, such as eLife, which is scheduled for launch later this year and plans to use a unique model for the funding of open access, and PeerJ, which proposes lifetime author membership fees as an alternative to article processing charges.
The first substantially new business model to hit science publishing was open access, which we launched with BioMed Central over a decade ago. After a long interregnum, publishers and scientists are clearly now primed for more radical change. F1000 Research is very pleased to be among the innovators who are testing new models and who will, ultimately, help make that change a reality.
Rebecca Lawrence is the publisher of F1000 Research.