As difficult as it is to forecast a researcher’s success, the field of metascience—the study of scientists—is giving it a go. In research published today (September 12) in Nature, scientists at Northwestern University and the University of Chicago used past performance to predict a researcher’s future h index, a measure of scientific output that takes into account numbers of publications and citations.
“This is the first study I’ve seen to try to predict the h index prospectively,” said Peter Higgins, a professor of gastroenterology at the University of Michigan, who was not involved in the study. Predicting future output is exactly what department committees are trying to do when they evaluate a faculty member for promotion or tenure, Higgins noted: “Do we invest in this particular faculty member? Are they on the right track?”
The h index was developed in 2005 by Jorge Hirsch, a physicist at the University of California, San Diego, to quantify the scientific impact of an individual’s research publications. It’s calculated from the number of papers a researcher has published and the number of citations those papers have accrued. An h index of 15, for example, indicates a scientist who has published 15 papers that have each been cited at least 15 times.
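The definition translates into a few lines of code. Here is a minimal sketch; the citation counts are invented for illustration, not drawn from any real record:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i  # the i most-cited papers all have at least i citations
        else:
            break
    return h

# A hypothetical researcher's per-paper citation counts:
citations = [50, 30, 21, 18, 15, 15, 15, 9, 4, 3, 2, 1]
print(h_index(citations))  # prints 8
```

Here eight papers have at least eight citations each, but only seven have nine or more, so the h index is 8.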
Konrad Körding, a professor of physical medicine and rehabilitation at the Northwestern University Feinberg School of Medicine, wanted to determine whether past output, as quantified by the h index, could be used to predict future output, and thus inform hiring and tenure committees in academia. To find qualities that correlated with future h index values, Körding and his colleagues turned to Scopus, a database of academic publications. After winnowing the pool down to about 3,000 neuroscientists, plus a handful of evolutionary biologists and Drosophila researchers, the team took basic information from each scientist’s curriculum vitae 5 years after their first publication and examined which qualities predicted h index values another 5 years on.
The team suspected that factors such as the quality of one’s thesis advisor or the time taken to finish PhD training, both important considerations for academic search committees, would inform future h index values, but in fact the final equation was relatively simple. In addition to the current h index, it included just four other factors: total number of publications, years since a researcher’s first publication, the number of papers in top-level journals, and the number of different journals in which the publications appear.
“We were surprised at how many features were left out [of the final equations],” said Northwestern’s Daniel Acuna, first author on the study. Combined, these factors were able to predict 5-year future h indices with about 66 percent accuracy, while the h index alone predicted with less than 50 percent accuracy. (How will your h index hold up over time? To get an idea, plug your stats into their online calculator.)
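All five predictors can be read straight off a publication record. The sketch below shows one way to extract them, assuming a simple record format and an illustrative set of “top-level” journals; neither the format nor the journal list comes from the study itself:

```python
# Illustrative only: this set of "top-level" journals is an assumption,
# not the study's actual list.
TOP_JOURNALS = {"Nature", "Science", "Cell"}

def features(papers, current_year):
    """Extract the five predictors from a list of paper records.

    Each record is a dict with "journal", "year", and "citations" keys
    (a hypothetical format chosen for this sketch).
    """
    cites = sorted((p["citations"] for p in papers), reverse=True)
    # h index: count of ranks i where the i-th most-cited paper has >= i citations
    h = sum(1 for i, c in enumerate(cites, start=1) if c >= i)
    return {
        "h_index": h,
        "n_publications": len(papers),
        "years_since_first": current_year - min(p["year"] for p in papers),
        "n_top_journal_papers": sum(p["journal"] in TOP_JOURNALS for p in papers),
        "n_distinct_journals": len({p["journal"] for p in papers}),
    }

# A made-up five-paper record for a hypothetical early-career researcher:
papers = [
    {"journal": "Nature", "year": 2007, "citations": 40},
    {"journal": "J Neurosci", "year": 2008, "citations": 12},
    {"journal": "J Neurosci", "year": 2009, "citations": 7},
    {"journal": "PLoS ONE", "year": 2010, "citations": 3},
    {"journal": "Neuron", "year": 2011, "citations": 2},
]
print(features(papers, 2012))
```

In the study, values like these, taken 5 years after a researcher’s first publication, feed a fitted equation that estimates the h index another 5 years out; the weights of that equation come from the paper, not from this sketch.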
Acuna and Körding hypothesize that publishing in different types of journals may be important because this allows a wider exposure for work, and more opportunity for citation. It may also reflect a researcher’s penchant for collaborations or interdisciplinary research, which could influence a researcher’s productivity, said Carl Bergstrom, an evolutionary biologist at the University of Washington who was not involved in the study.
But Hirsch fears that publishing diversity, which contributes more to longer-term predictions of h index (10-plus years) than to more short-term estimates (1 to 5 years), may be a fluke. “At least in my field,” noted Hirsch in an email, “it’s a sign of ‘shopping around’ for a journal that will accept a paper that has been rejected by more mainstream journals, which should be a negative indicator of future success.”
Furthermore, “we don’t know if the correlation is causal,” Bergstrom said. “It could be a spurious correlation.”
Even so, Körding hopes that the research may take some of the guesswork out of the decision-making process in search committees and study sections, which often follow gut feelings when choosing among possible candidates. “The cool thing is you know how much each feature should influence a search committee’s evaluation,” said Körding.
Some cautioned against trying to apply the work to fields beyond neuroscience, however. “Some fields are better cited than others,” explained Sidney Redner, a physicist at Boston University who was not involved with the work. “The methodology is very field-dependent. With different fields, you would probably need different parameters.”
And, of course, even if researchers are able to accurately predict an individual’s future h index, a single number “is no substitute for reading papers,” Bergstrom said. “It’s only a rough quantitative guideline.”
D. Acuna et al., “Predicting scientific success,” Nature, 489:201-202, 2012.