During the course of their careers, many scientists criticize the work of others in the literature, pointing out flaws, inconsistencies, or contradictions. This is part of scientific progress. A proof-of-concept study now describes a research tool for recognizing these so-called negative citations, making it possible to contextualize and study them on a larger scale than was previously possible.
According to the results, published today (October 26) in PNAS, papers pay only a slight long-term penalty in the total number of citations they receive after a negative one. That criticized papers continue to garner citations over time suggests that it’s better to receive negative attention than none at all.
Until now, the best way of studying negative citations was by reading and coding each individual article—something Jeffrey Furman, an associate professor of strategy and innovation at Boston University, said he tried before, while analyzing the impact of retracted studies. In contrast, the new method, which employs natural language processing and machine learning, “could have saved me dozens of hours, if only I had waited,” said Furman, who was not involved with the study. “Coming up with an algorithm for identifying citations that involve disagreement with prior research strikes me as a pretty important methodological contribution,” he added.
Setting out to understand how often studies are criticized in the literature and the implications of such criticism, Nicola Lacetera of the University of Toronto’s Institute for Management and Innovation and his colleagues developed a natural language processing algorithm from a training set of 15,000 citations extracted from Journal of Immunology studies, categorizing the citations as “negative” or “objective” with the help of five immunology experts.
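The paper does not specify the model the team used, but the general approach it describes, training a text classifier on expert-labeled citation sentences, can be sketched in a few lines. The sentences, labels, and model choice below are invented toy stand-ins, not the study's actual data or method:

```python
# Hedged sketch of a citation-sentiment classifier: TF-IDF features plus
# logistic regression stand in for the paper's unspecified model.
# All training sentences here are fabricated illustrations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy "training set": citation sentences hand-labeled by experts as
# "negative" (disputing prior work) or "objective" (neutral use of it).
train_texts = [
    "In contrast to [12], we found no such effect.",
    "These results contradict the findings of [7].",
    "The conclusions of [3] are not supported by our data.",
    "Unlike [9], our assay detected no binding.",
    "As shown previously [5], T cells proliferate under these conditions.",
    "We used the protocol described in [8].",
    "Consistent with [2], expression increased twofold.",
    "Samples were prepared as described in [4].",
]
train_labels = ["negative"] * 4 + ["objective"] * 4

# Fit the pipeline, then classify a new citation sentence.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)
prediction = clf.predict(["Our data do not support the model proposed in [1]."])[0]
print(prediction)
```

In practice the real tool would need a far larger labeled corpus (the study used 15,000 citations) and careful validation, but the pipeline shape, labeled sentences in, a negative/objective verdict out, is the same.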
Lacetera’s group then used the tool to analyze 15,731 articles from the same journal published between 1998 and 2007. Of 146,891 unique papers cited in these articles, about 7 percent received one or more negative citations. Not surprisingly, papers were more likely to receive a negative citation within the first few years of publication. What’s more, those negatively cited studies received more citations overall.
“The results make sense and fit with what I expect,” said biostatistician Gonçalo Abecasis of the University of Michigan in Ann Arbor, who did not participate in the research. “Usually, when you take the time to write and say, ‘These [researchers] got it wrong,’ you do it with reference to papers that are otherwise high profile and that people think are probably good.”
Perhaps not surprisingly, negative citations seemed to occur less often between pairs of papers coauthored by scientists who were geographically closer. “If you’re in the same department, the social cost of a negative citation may be too high, and when you can interact personally there might be other ways—more informal ways—to express your criticism,” Lacetera said.
Although the tool was developed on a single immunology journal, Lacetera said that it could easily be adapted for the study of publications in other fields. It could also be used as an indicator of which research questions are receiving heightened attention at a given point in time. Negative citations may even boost the reliability of research findings over the long term within a given field, said Lacetera, and that’s something he hopes to measure.
Others are already thinking about how they can use this tool. In particular, some noted its potential to enhance the study of retractions.
Furman, for example, is hoping to find out why retracted papers continue to be cited—whether researchers are unaware of the retractions or are citing the papers while acknowledging that they have been retracted.
Pierre Azoulay of MIT, who is a departmental colleague of study coauthor Christian Catalini but did not participate in the work, noted that the tool could allow researchers to ask whether the number of negative citations a paper receives can predict whether it will be retracted. “That sounds science fiction-y, but we can at least try,” he said. “I’m very excited by this.”
More generally, added Azoulay, “this piece is the harbinger of things to come—that by combining really large data sets with new analytical techniques such as machine learning, we can derive meaning from those article pairs that are linked via citation.”
“That is extremely promising for this entire burgeoning field that makes science itself an object of study,” he said.
C. Catalini et al., “The incidence and role of negative citations in science,” PNAS, doi:10.1073/pnas.1502280112, 2015.