New impact factors yield surprises

June 21, 2010

Thomson Reuters has released its 2009 Journal Citation Report, cataloging journals' impact factors, and a shuffle in the top few spots has some analysts scratching their heads. Specifically, the publication with the second highest impact factor in the "science" category is __Acta Crystallographica - Section A__, knocking none other than the __New England Journal of Medicine__ from the runner-up position. This title's impact factor rocketed up to 49.926 this year, more than 20-fold higher than last year.

A single article published in a 2008 issue of the journal appears to be responsible for the meteoric rise in __Acta Crystallographica - Section A__'s impact factor. [A short history of SHELX](http://www3.interscience.wiley.com/journal/119398457/abstract), by University of Göttingen crystallographer [George Sheldrick](http://shelx.uni-ac.gwdg.de/~gsheldr/), which reviewed the development of the computer system SHELX, has been cited more than 6,600 times, according to ISI. The paper includes a sentence that essentially instructs readers to cite the paper they're reading: "This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination." (Note: This may be a good way to boost your citations.)

"Without another, similarly important article in 2010, __Acta Crystallographica - Section A__ is likely to return in 2011 to its prior Journal Impact Factor of between 1.5 and 2.5," [wrote](http://community.thomsonreuters.com/t5/Citation-Impact-Center/What-does-it-mean-to-be-2-in-Impact/ba-p/11386) Marie McVeigh, director of Journal Citation Reports and bibliometric policy at Thomson Reuters, in a discussion forum on the company's website.
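The arithmetic behind such a spike is simple: the two-year impact factor is the mean number of citations received in year Y by items published in years Y-1 and Y-2. A rough sketch, using hypothetical article counts (not Thomson Reuters' actual data), of how a single heavily cited paper can dominate that mean:

```python
# Two-year impact factor: citations in year Y to items from years Y-1 and
# Y-2, divided by the number of citable items published in those two years.
def impact_factor(citations_per_article):
    return sum(citations_per_article) / len(citations_per_article)

# Hypothetical journal: 199 articles cited twice each, plus one
# SHELX-style blockbuster cited 6,600 times.
ordinary = [2] * 199
with_blockbuster = ordinary + [6600]

print(impact_factor(ordinary))          # 2.0  -- a typical IF for the field
print(impact_factor(with_blockbuster))  # 34.99 -- the one paper dominates
```

The article counts here are invented for illustration; the point is only that a mean over a few hundred papers is easily swamped by one outlier.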
Number one stayed the same from last year's list to this year's: __CA: A Cancer Journal for Clinicians__.

Other surprises in this year's impact factor roundup include:

- __PLoS ONE__ debuted in the Journal Citation Report for the first time with a respectable impact factor of 4.351. This score puts the open access journal in the top 25th percentile for biology publications. But might this sudden success be more of a bane than a boon to __PLoS ONE__, blogger Philip Davis [asks](http://scholarlykitchen.sspnet.org/2010/06/21/plosone-impact-factor-blessing-or-a-curse/). It may turn out that accepting 70 percent of the manuscripts submitted to your journal gets a bit trickier when you're flooded with papers.
- The __Cell__ family of journals made an impressive showing in this year's report. __Cell Stem Cell__ came in with a 23.563 impact factor, 40 percent growth over last year's rating. __Cell Host and Microbe__'s impact factor also grew, coming in 75 percent higher than last year, at 13.021. __Cell__ itself, however, was knocked from the top ten.
- __Nature Genetics__ made it into the top ten, with an impact factor of 34.284.
- 1,055 new titles were ranked, and more than 700 of those publications were added as part of Thomson Reuters' new "Regional Content Expansion."
- More than 4,700 titles showed an increase over their 2008 impact factors.

__(Editor's note: 21st June - Marie McVeigh was incorrectly referred to as a Thomson Reuters blogger in the original version of this story. The mistake has been corrected above. The Scientist regrets the error.)__
**__Related stories:__**

- [Down with Reviews](http://www.the-scientist.com/article/display/57236/) [April 2010]
- [Citation amnesia: The results](http://www.the-scientist.com/blog/display/55801/) [25th June 2009]
- [New impact metric](http://www.the-scientist.com/blog/display/55343/) [19th January 2009]

Comments

javaid bhat

Posts: 1

June 21, 2010

This interesting news article again highlights the need to judge journals on some scale other than the one named the impact factor. But it is now a fact that the politics of science is driven by this misleading number.
anonymous poster

Posts: 1

June 21, 2010

One more reason why the impact factor of any given journal should NOT be calculated as the average, but as the MEDIAN number of citations/year. Even the real estate market is ahead of science on this one, since house price indicators are given as median values.
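The commenter's point is easy to demonstrate: with heavily skewed citation counts, one outlier moves the mean dramatically while the median barely budges. A quick sketch with made-up numbers:

```python
from statistics import mean, median

# Hypothetical citation counts for one journal's articles: most get a
# handful of citations, one gets thousands (cf. the SHELX paper).
citations = [0, 1, 1, 2, 2, 3, 4, 5, 8, 6600]

print(mean(citations))    # 662.6 -- dominated by the single outlier
print(median(citations))  # 2.5   -- reflects the typical article
```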
anonymous poster

Posts: 1

June 21, 2010

Every journal has an impact factor (IF) and a citation half-life (CT). Maybe we should consider a new factor:

impression factor = CT*sqrt(IF)

For example:
Nature Medicine: IF = 27.136; CT = 6.6 --> impression factor = 34.38
TRENDS MOL MED: IF = 11.045; CT = 4.3 --> impression factor = 14.29
ORPHANET J RARE DIS: IF = 5.825; CT = 2.6 --> impression factor = 6.28
LAB INVEST: IF = 4.602; CT = 9.9 --> impression factor = 21.24
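The proposed metric is straightforward to compute, which also serves as a check on the comment's arithmetic (the IF and CT values below are the ones given in the comment):

```python
import math

def impression_factor(impact_factor, citation_half_life):
    # Proposed metric from the comment: citation half-life times sqrt(IF).
    return citation_half_life * math.sqrt(impact_factor)

for name, imp, ct in [
    ("Nature Medicine", 27.136, 6.6),      # -> 34.38
    ("TRENDS MOL MED", 11.045, 4.3),       # -> 14.29
    ("ORPHANET J RARE DIS", 5.825, 2.6),   # -> 6.28
    ("LAB INVEST", 4.602, 9.9),            # -> 21.24
]:
    print(f"{name}: {impression_factor(imp, ct):.2f}")
```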
Shi Liu

Posts: 32

June 21, 2010

Impact factor is flawed at its basic root: a wrong formula for calculating a wrong collection of irresponsible citation data. There are many ways for journals that promote no true science to boost their impact factor. Thus, we should stop the impact factor game. More at http://im1.biz/CitationIF.htm
DAVID YEW

Posts: 2

June 21, 2010

This news story shows how impact factors can really change and mislead. In spite of the outcry from scientists all over the globe, impact factors (and not citation half-life) have continuously been unwisely employed, interpreted and manipulated, particularly by administrations. The result is that many scientists who actually did good work went down the drain and were never recognized by anyone because the journals they published in had a "low impact factor" (which everybody knows is not an indicator for individual papers). Thirty and forty years before impact factors came about, we scientists were surviving pretty well. Even now, we select the papers which are important to our research, and these selections usually do not come from high impact journals. What is worse is that in these days, when the economy is weak, a lot of 'scientists' use impact factors to step on their colleagues to get ahead. If administrations do not have broad knowledge or broad minds, this is going to be detrimental to the future of the universities. Ladies and gentlemen, let's let natural selection continue and forget the metrics.

Finally, let's remember that no research is small if it is properly done.

D.T. Yew*
The Chinese University of Hong Kong
* Comments represent the views of the author only.
Bjoern Brembs

Posts: 14

June 21, 2010

Ha! Only a fool who slept through statistics 101 would take the arithmetic mean of data as heavily skewed as citation data. Any student in that class could explain to Thomson Reuters why this is a bad idea and why it favors and incentivizes actions such as those by Acta Crystallographica - Section A. Well done, Acta, teach everyone some basic undergraduate statistics!
anonymous poster

Posts: 34

June 22, 2010

Some people don't like it, but I mean, it's just a number. It shows how many people actually cited your work; nobody forced them to do that, somehow they found it and cited it. It could be crap. If it's a couple, or a dozen, it is random. When it gets to one hundred or even a thousand, it tells you something. I know, don't over-interpret it, but you know, I have to tell you it feels good.
KE Thampi

Posts: 1

June 22, 2010

In my view this relationship is not a relevant one. We should search for relevance among many factors...
anonymous poster

Posts: 28

June 22, 2010

Google Scholar is a better tool to measure citations individually. An IF for each paper can be determined based on the citations in the first two years after publication, if administrations want it.
Ting Wang

Posts: 15

June 23, 2010

The IF of the European Journal of Pharmaceutics in 2008 was 3.6, but the figure dropped to 2.6 in 2009. This really surprised us. We cannot judge its quality based on IF alone, because the IFs of other journals at the same level did not change so much. But we believe EJPS still has its high profile.

In fact, last year the Editor-in-chief of Pharmaceutical Research made a critical comment on the journal IFs released by Thomson Reuters. Please see the details:
http://www.springerlink.com/content/k5202054521l3635/
Nikolay Pestov

Posts: 2

June 23, 2010

Oh...

Impact factor is really dangerous in the hands of enthusiastic bureaucrats. In Russia, some individual salaries and grants are distributed according to the IF of journals. That system was forcefully implemented by the Ministry of Science several years ago despite resistance from the scientists themselves.
anonymous poster

Posts: 28

December 30, 2010

How many iPSC papers have been published in Nature, Cell, Science, and PNAS? How many of them are incremental? IF just misleads scientists into spending more time doing incremental research and paying more attention to these journals when they cite papers.

However, some papers published in these journals have simply ignored (did not cite) primary findings and concepts published in other journals. This is an ethical problem.
Mike Waldrep

Posts: 155

December 30, 2010

Interesting! I hope that everyone had a great weekend, a Merry Christmas, is having a great week, has another great weekend and I hope that they have a Happy New Year!
anonymous poster

Posts: 28

December 30, 2010

Many journals publish both review and original papers. This causes overestimation of their IF. PLoS ONE publishes only original papers, so its IF is underestimated.
David Hill

Posts: 41

December 30, 2010

For the reasons cited by others, including the publication of strings of papers on related subjects and the popularity of certain subjects among cloned colonies of academic interest, IF is one measurement that should be disregarded by all. Einstein's 1905 papers aroused little interest at the time because, as Einstein communicated, most academic physicists of the day were more interested in less important subjects. Who is to say what subject is important?
