For decades, researchers have complained that the publication and evaluation systems in academia are broken or need urgent reform. There have been calls for a more equitable system where scientists are evaluated based on the rigor, quality, significance, and impact of their work instead of their institutional affiliation and the impact factors of the journals where their research is published. On October 20, eLife, a peer-reviewed journal, announced changes to its publishing policies that it claims will make this possible.

See “Q&A: Why eLife Is Doing Away with Rejections”

As of January 2023, eLife will no longer make acceptance or rejection decisions on papers that are submitted for publication. Instead, the journal will publish every paper that it sends out for review as a preprint, along with the peer reviewers’ comments and the journal’s editorial assessment of the work, which will highlight the significance of the research and the extent to which the evidence provided by the authors supports the claims made. It is up to the authors to then decide if they want to respond to the peer reviews and submit a revised version, or publish the preprint along with the editorial assessment and peer reviews without further modification.

These changes transform eLife into simply a service provider. Having a paper published in eLife will no longer be a privilege or a stamp of distinction. What will matter is what is written in the assessment and the peer-review reports, if people are willing to read them. 

The reactions from the community on Twitter and elsewhere have ranged from praise of eLife for being bold—challenging the publishing establishment and leveling the playing field in science—to doubts, skepticism, and warnings about the potential consequences of embracing this new model. Interestingly, most skeptical people still acknowledge that if embraced, improved, and implemented successfully, the new model could be a game changer. 

In my view, eLife’s new model is imperfect, but it is exactly the type of disruptive change we need to push the boundaries of innovation in scientific publishing. At the same time, lasting change will require much more than the action of a single journal; everyone involved in the scientific enterprise must participate.

From gatekeepers to one gatekeeper

So far, the loudest criticism has been that it is now solely the job of an editor, rather than an editor plus a committee of peers, to act as a research gatekeeper—in this case, by deciding if a paper will be sent out for review. Everyone who has submitted their work to eLife knows very well that getting past the editors is already the main hurdle to publication. Although eLife admits that it will not have the capacity to review all submitted papers, its criteria for accepting or rejecting papers remain vague. In a tweet, eLife Editor-in-Chief Michael Eisen stated, “We will review papers based on our capacity to produce high-quality reviews that will be useful to the public.” Providing more clarity about the new editorial policies and involving more than one editor in deciding which papers get reviewed is essential to address current concerns and reduce editorial bias.

We also must accept that some sort of gatekeeping will always be needed to safeguard science and protect society from flawed research, misinformation, fraud, and manipulation of data and facts. 

Career considerations

Many scientists who have been publishing their research in eLife for years are disappointed by the new policies and feel let down. By sending their best work to eLife, they helped the journal build its reputation. In exchange, they got “reputational credit” that they or their trainees could leverage for career advancement, and a stamp of distinction that allowed them to stand out from the crowd. Many are concerned about how their work published in eLife will be perceived by others a few years from now.

For early career researchers, the prestige and impact factors of journals represent much more than numbers. Publishing in a high-profile journal enables them to open new doors of scientific opportunity (e.g., jobs, collaborations, and funding), enter and establish their presence in new research fields, or access speaking opportunities at conferences and symposia to promote their science and career. Until the scientific community embraces the new model of preprint and post-publication reviews as the norm, which may take a while, it is likely that these researchers will not be sending their best papers to eLife.

The challenge presented by replacing journals’ prestige with quality feedback from the editors and reviewers is that the latter is not yet quantifiable or easy to aggregate into a single metric.

The flip side of making it more difficult to quantify the quality of a researcher’s output is that authors will no longer be obligated to address gaps and flaws in their work as a condition of publication. But it is likely that individuals who fail to do so will not attract talented students and postdocs, and their integrity and work will be scrutinized during grant reviews or when they come up for promotion. Eventually, people will realize there is accountability and a high price for failing to close holes in their research. Only a democratic and open publishing system, where all reviews and assessments are published and researchers assume their responsibility in protecting scientific integrity, can safeguard against such practices.

A need for bold experiments

The new eLife model is imperfect and may have flaws, but without participation and feedback from the scientific community, we will never know whether it works or how to improve it. Like any experiment, it may work, partially work, or fail. Whatever the outcome, we will no doubt learn something new that we can use to improve the existing model or to launch a new experiment. 

We cannot change the current publishing system or academic culture without all of us—organizations and individuals—sharing the risks and responsibilities that come with reform. Therefore, the real question is not whether this is the right model or not, but whether the scientific community, universities, scientific societies, and funders are prepared to help bring about needed change. To transition to a system where what matters is what we publish, rather than where we publish, we will all have to make the time to actively participate in changing every aspect of the publishing process, from submitting our papers to journals that embrace new models, to reviewing preprints and post-publication articles, to taking the time to read such reviews when evaluating candidates for jobs, promotions, and awards. eLife has shown that it is ready to take risks and experiment. Are we prepared to do the same? Are universities ready to end their addiction to the impact factor and reassess and change their incentive and evaluation systems? 

The stakes are high for early career researchers, and they cannot afford to take the lead in experimenting with new publishing models. Most will likely choose to wait and see whether these models work. Therefore, it is the responsibility of tenured, established scientists to lead and actively participate in these experiments and promote evaluation practices where the quality of the work rather than the journal’s name is what counts. Changing the culture and the system starts with changing our practices, beliefs, and behaviors, one scientist at a time. If a critical mass of senior scientists does this, it will help reduce the risks for early career researchers.

The real transformation in publishing will happen when the community embraces both preprint and post-publication reviews and when these reviews and the opinion of the community, rather than the impact factor of journals, become the primary tools we use to evaluate, hire, and promote our colleagues. If and when that happens, posting preprints and community-sourcing peer review will be sufficient, and we will not need eLife, other journals, or editors. For this to happen, we need to find more creative ways to incentivize open peer review at different stages of publishing and encourage high-quality reviews by making them admissible when evaluating the work of scientists and their achievements. 

See “Opinion: Repairing Peer Review”

We also need to expand the circle of discussions and debates around the future of publishing and evaluation to include all the stakeholders (researchers, universities, funders, scientific societies, and publishers). This will pave the way for a culture that levels the playing field, makes science more equitable, and rewards quality over quantity.