More papers correlate with top-cited research for more-established academics, but not newly minted professors, according to a study.
September 28, 2016
FLICKR, ROBERT CUDMORE
Citations are often used as one measure of research quality. Established researchers who publish more papers are more likely to see those papers widely cited, information scientists at the University of Montreal and Leiden University in the Netherlands have found. In their study, published today (September 28) in PLOS ONE, the authors reported a different trend among researchers who published their first paper in or after 2009. Members of this less-established cohort were more likely to have top-cited papers if they were less-productive authors (15 or fewer papers), while those who had published more than 30 papers had a smaller share of papers in the top 1 percent of cited papers in their fields.
Analyzing the publication and citation records of more than 28 million researchers, the study’s authors provide a window into whether policies that incentivize researchers to publish as many papers as possible lead to higher-quality work—or just more publications.
“I haven’t seen a study as comprehensive and massive, as far as the data, as this one,” Sverker Sörlin, a professor of environmental history at the KTH Royal Institute of Technology in Stockholm, Sweden, who was not involved in the work, told The Scientist. “The take-home message, for me, is that researchers who publish a lot also tend to publish higher-quality work. My assumption is that over the long term, the younger researchers that continue to do research will also conform to this behavior.”
“The results are not really surprising,” Mathias Binswanger, an economist at the University of Applied Sciences Northwestern Switzerland who has written about academia’s “publish or perish” problem but was not involved in the study, wrote in an email to The Scientist. “By spreading results of research in a variety of papers, you get more attention and, eventually, more citations for some articles.”
Vincent Larivière of the University of Montreal and Rodrigo Costas of Leiden University used a database of 28,078,476 researchers who had published at least one paper between 1980 and 2013 and separated the authors into four categories—medical and life sciences; natural sciences; law, arts, and humanities; and social and behavioral sciences. The team then tallied how many of each author’s papers fell into the 1 percent most-cited in their respective fields each year.
For the researchers who published their first paper between 1980 and 1985, an increase in the total number of papers published was linked to an increased proportion of top 1 percent most-cited papers for each author. This correlation was strongest for medical and life science authors. “We saw no optimum in the number of papers that lead to better-quality ones,” Larivière told The Scientist. “The more a researcher published, the more likely he or she was to publish a top-cited paper.”
But for researchers in the medical and life sciences who published their first paper in 2009 or later, a higher publication output of more than 30 papers was linked to a lower share of the top-cited papers. “For the younger scholars, the incentives to publish as much as possible [appear to have] detrimental effects,” said Larivière.
Citations alone cannot determine research quality, Binswanger pointed out. “It is dangerous to equate the quality of a paper with the number of its citations. Sometimes it takes a long time to discover the quality of a paper of a lesser-known author, while famous authors are constantly cited for strategic reasons to increase the chance of publication,” he said. “I think the term ‘quality’ should be completely avoided when we use measurable indicators.”
Larivière and Costas also found that the longer researchers stayed in academia, the higher the number of top-cited papers they accumulated. “There is this Darwinian effect by which people who don’t perform terribly well in research tend to leave research positions,” said Sörlin.
Larivière views the “publish or perish” culture as potentially harmful to the scientific community, particularly for younger researchers. And it’s one factor fueling the uptick in low-quality journals, he said. He would next like to understand the impacts of gender and authorship position on research productivity.
V. Larivière et al., “How many is too many? On the relationship between research productivity and impact,” PLOS ONE, doi:10.1371/journal.pone.0162709, 2016.
September 29, 2016
A couple of years ago a friend proposed a four-hour workday, in a book he wrote, so that people could engage in more intellectual and home/spiritual tasks. He was quite surprised to find out that such an idea was not only proposed but successfully and practically accomplished millennia ago in the monastic orders. The monasteries in England prior to the "poor laws" provided an atmosphere that allowed more free time, so that people could become more intellectually and spiritually engaged in a balanced lifestyle.
My opinion is that we have gone way, way overboard with the productivity nonsense, and that it is radically affecting the quality of work and life. The "quantity" of work has become an elusive phantom that no one can understand anymore. When work is based primarily on productivity, advancement is lacking. We gain information but lose out on wisdom and understanding, in a form of ignorance to which we have become blind. We never get around to examining just what we have really done. We lose out tremendously when, in the name of productivity, we have to wade through a swamp of false results, and of results that are accepted solely on the basis of a "positive" finding and rejected solely on the basis of a "negative" one.
A nice little quagmire we have developed, haven't we?
September 30, 2016
I agree fully with your opinion: “We have gone way overboard with the productivity nonsense …” etc.
However, your first suggestion about a "four-hour day" may not be applicable to scientists, most of whom think of science 24 hours a day: when sleeping, dreaming, taking a shower, or sitting on the john. You may know the Kekulé story, that the structure of benzene came to him in a dream. My own chemistry teacher in my undergraduate days, the late Professor TR Govindachari, was said to wake up in the middle of the night and scrawl structures and reactions on the floor with a piece of chalk. There are other examples.
October 1, 2016
I agree with Dr Binswanger that high citation counts do not mean high quality or high impact. In the scientific world we should and must encourage science-based publications: active researchers tend to publish most of their scientific observations, while others are more selective. In my opinion, the number of citations should not be tied to the quality of a paper, nor used to glorify someone. I see much repetitive science, heavy on analytical, software-based, and imaging-based work, by seasoned researchers and sometimes by a new generation of researchers who are asked to publish in high-impact journals, and it gets numerous citations. On the other hand, many insightful works published by less-known authors, or in less-known fields and journals (by known fields I mean life sciences, medicine, and chemistry), are not being cited for many reasons, including their limited accessibility. Yes, we have gone way overboard with citations and impact factors.