
Predicting Publishing Futures

Researchers measure scientific output to determine if past success predicts future productivity.

September 12, 2012

Image credit: Flickr, moonlightbulb

As difficult as it is to forecast a researcher’s success, the field of metascience, the scientific study of science and scientists, is giving it a go. In research published today (September 12) in Nature, scientists at Northwestern University and the University of Chicago used past performance to predict a researcher’s future h index, a measure of scientific output that takes into account both the number of publications and the citations they accrue.

“This is the first study I’ve seen to try to predict the h index prospectively,” said Peter Higgins, a professor of gastroenterology at the University of Michigan, who was not involved in the study. Predicting future output is exactly what department committees are trying to do when they evaluate a faculty member for promotion or tenure, Higgins noted: “Do we invest in this particular faculty member? Are they on the right track?”

The h index was developed in 2005 by Jorge Hirsch, a physicist at the University of California, San Diego, to quantify the scientific impact of an individual’s research publications. It is calculated from the number of papers a researcher has published and the number of citations those papers have accrued. An h index of 15, for example, indicates a scientist who has published at least 15 papers that have each been cited at least 15 times.
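For readers who want that definition in concrete terms, here is a minimal sketch of the calculation in Python; the function name and the example citation counts are hypothetical, chosen only to illustrate the rule described above.

def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the h threshold
        else:
            break
    return h

# Example: five papers cited 25, 8, 5, 3, and 1 times yield an h index of 3.
print(h_index([25, 8, 5, 3, 1]))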

Physical medicine and rehabilitation professor Konrad Körding of the Northwestern University Feinberg School of Medicine wanted to determine whether past output, as quantified by the h index, could predict future output and thus inform hiring and tenure committees in academia. To find qualities that correlated with future h index values, Körding and his colleagues turned to Scopus, a database of academic publications. After winnowing the pool down to about 3,000 neuroscientists, plus a handful of evolutionary biologists and Drosophila researchers, the team took basic information from each researcher’s curriculum vitae 5 years after his or her first publication and examined which qualities predicted h index values another 5 years on.

The team suspected that information such as the quality of one’s thesis advisor or the time taken to finish PhD training, both important considerations for academic search committees, would inform future h index values, but in fact the final equation was relatively simple. In addition to the current h index, it included just four other factors: the total number of publications, the years since a researcher’s first publication, the number of papers in top-level journals, and the number of different journals in which the publications appear.

“We were surprised at how many features were left out [of the final equations],” said Northwestern’s Daniel Acuna, first author on the study. Combined, these factors were able to predict 5-year future h indices with about 66 percent accuracy, while the h index alone predicted with less than 50 percent accuracy. (How will your h index hold up over time? To get an idea, plug your stats into their online calculator).
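To make the shape of such a model concrete, the sketch below combines the five features named above in a simple linear predictor written in Python. The weights and the ResearcherCV field names are placeholders chosen for illustration, not the coefficients Acuna and colleagues actually fit; the real model and its fitted parameters live in the Nature paper and the online calculator.

from dataclasses import dataclass

@dataclass
class ResearcherCV:
    h_index: float           # current h index
    n_publications: int      # total number of publications
    years_publishing: int    # years since first publication
    top_journal_papers: int  # papers published in top-level journals
    distinct_journals: int   # number of different journals published in

# Placeholder weights, for illustration only (not the study's fitted values).
WEIGHTS = {
    "intercept": 1.0,
    "h_index": 0.9,
    "n_publications": 0.05,
    "years_publishing": -0.1,
    "top_journal_papers": 0.1,
    "distinct_journals": 0.05,
}

def predict_future_h(cv: ResearcherCV) -> float:
    """Weighted sum of the five CV features described in the article."""
    return (WEIGHTS["intercept"]
            + WEIGHTS["h_index"] * cv.h_index
            + WEIGHTS["n_publications"] * cv.n_publications
            + WEIGHTS["years_publishing"] * cv.years_publishing
            + WEIGHTS["top_journal_papers"] * cv.top_journal_papers
            + WEIGHTS["distinct_journals"] * cv.distinct_journals)

# A hypothetical CV five years after a first publication.
example = ResearcherCV(h_index=10, n_publications=25, years_publishing=5,
                       top_journal_papers=2, distinct_journals=12)
print(round(predict_future_h(example), 1))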

Acuna and Körding hypothesize that publishing in different types of journals may be important because this allows a wider exposure for work, and more opportunity for citation. It may also reflect a researcher’s penchant for collaborations or interdisciplinary research, which could influence a researcher’s productivity, said Carl Bergstrom, an evolutionary biologist at the University of Washington who was not involved in the study.

But Hirsch fears that publishing diversity, which contributes more to longer-term predictions of the h index (10-plus years) than to shorter-term estimates (1 to 5 years), may be a fluke. “At least in my field,” noted Hirsch in an email, “it’s a sign of ‘shopping around’ for a journal that will accept a paper that has been rejected by more mainstream journals, which should be a negative indicator of future success.”

Furthermore, “we don’t know if the correlation is causal,” Bergstrom said. “It could be a spurious correlation.”

Even so, Körding hopes that the research may take some of the guesswork out of the decision-making process in search committees and study sections, which often follow gut feelings when choosing among candidates. “The cool thing is you know how much each feature should influence a search committee’s evaluation,” said Körding.

Some cautioned against trying to apply the work to fields beyond neuroscience, however. “Some fields are better cited than others,” explained Sidney Redner, a physicist at Boston University who was not involved with the work. “The methodology is very field-dependent. With different fields, you would probably need different parameters.”

And, of course, even if researchers are able to accurately predict an individual’s future h index, a single number “is no substitute for reading papers,” Bergstrom said. “It’s only a rough quantitative guideline.”

D. Acuna et al., “Predicting scientific success,” Nature, 489:201-202, 2012.


Comments

IkeRoberts

September 13, 2012

Redner's caveat not to generalize to other fields is very important, because many have different citation and publishing patterns than neurobiology. It is particularly so in applied fields, where some of the work that has the greatest impact on society--the intended purpose--is cited much less than work that informs only other researchers.

Faye18

September 13, 2012

Reminds me: doesn't Thomson Reuters have some Nobel prize predictor using h index-like metrics? My impression was that it wasn't very good.

http://ip-science.thomsonreute...

Bruno Diaz

September 13, 2012

Link to online calculator seems to be broken...


Ed Rybicki

September 21, 2012

Do you know how damaging this could be? I played with the tool, and it severely penalises you for (a) not publishing in a "top" journal (how many of us do??), and (b) it obviously takes NO account of how one might strike gold, or get involved in something trendy, and shoot up the ratings.

It could be SERIOUSLY misused by administrators: I can think of several high-achieving people in our institution alone - and especially one late bloomer - who might have lost their jobs if such a tool had been used too early. Like any prediction, it is worth what you pay for it.

September 24, 2012

I strongly agree with Ed Rybicki. This type of simple calculation should be used sparingly and doesn't seem fit for use in the evaluation of academic research or performance. High-impact citation counting already seems like a flawed measure of research or academic performance, so why try to simplify it even further?
