# New impact factors yield surprises

Thomson Reuters has released its 2009 Journal Citation Reports, cataloging journals' impact factors, and shuffling in the top few spots has some analysts scratching their heads. Specifically, the publication with the second-highest impact factor in the "science"

By | June 21, 2010


javaid bhat

Posts: 1

June 21, 2010

This interesting news article again highlights the need to judge journals on some scale other than the one known as the impact factor. But it is now a fact that the politics of science is driven by this misleading number.

anonymous poster

Posts: 1

June 21, 2010

One more reason why the impact factor of any given journal should NOT be calculated as the average, but as the MEDIAN number of citations/year. Even the real estate market is ahead of science on this one, since house price indicators are given as median values.
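A minimal sketch (in Python, with made-up citation counts) of why the mean and the median can diverge so sharply for a skewed citation distribution:

```python
import statistics

# Hypothetical citation counts for ten articles in one journal's
# two-year window; the 120 stands in for a single blockbuster paper.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 120]

mean_if = statistics.mean(citations)      # how the impact factor averages
median_if = statistics.median(citations)  # the commenter's proposed statistic

print(f"mean-based IF:   {mean_if}")    # 13.8
print(f"median-based IF: {median_if}")  # 2.0
```

One highly cited outlier raises the mean-based figure nearly sevenfold while leaving the median untouched, which is the commenter's point.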

anonymous poster

Posts: 1

June 21, 2010

Every journal has an impact factor (IF) and a citation half-life (CT). Maybe we should consider a new factor:
impression factor = CT * sqrt(IF)

For example:
Nature Medicine
IF = 27.136; CT = 6.6 --> impression factor = 34.38

TRENDS MOL MED
IF = 11.045; CT = 4.3 --> impression factor = 14.29

ORPHANET J RARE DIS
IF = 5.825; CT = 2.6 --> impression factor = 6.28

LAB INVEST
IF = 4.602; CT = 9.9 --> impression factor = 21.24
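The proposed combination can be sketched in Python, using the IF and CT figures quoted in the comment (the name `impression_factor` is just the commenter's label, not an established metric):

```python
import math

# Cited half-life (CT) times the square root of the impact factor (IF),
# as proposed above; the figures are the ones quoted in the comment.
def impression_factor(IF, CT):
    return CT * math.sqrt(IF)

journals = {
    "Nature Medicine":     (27.136, 6.6),
    "TRENDS MOL MED":      (11.045, 4.3),
    "ORPHANET J RARE DIS": (5.825, 2.6),
    "LAB INVEST":          (4.602, 9.9),
}

for name, (IF, CT) in journals.items():
    print(f"{name}: {impression_factor(IF, CT):.2f}")
```

Taking the square root of IF dampens the citation-rate term, so a journal with a long cited half-life (like LAB INVEST) overtakes one with a higher IF but a shorter half-life (like TRENDS MOL MED).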

Shi Liu

Posts: 32

June 21, 2010

The impact factor is flawed at its root: a wrong formula applied to a wrong collection of irresponsible citation data. There are many ways for journals that promote no true science to boost their impact factor. Thus, we should stop playing the impact factor game. More at http://im1.biz/CitationIF.htm

DAVID YEW

Posts: 2

June 21, 2010

This news story shows how impact factors can really change and mislead. In spite of the outcry from scientists all over the globe, impact factors (and not citation half-life) have continuously been unwisely employed, interpreted, and manipulated, particularly by administrations. The result is that many scientists who actually did good work went down the drain and were never recognized by anyone, because the journals they published in had a "low impact factor" (which everybody knows is not an indicator for individual papers). For the thirty and forty years before impact factors came along, we scientists were surviving pretty well. Even now, we select the papers that are important to our research, and these selections usually do not come from high-impact journals. What is worse is that in these days, when the economy is weak, a lot of 'scientists' use impact factors to step on their colleagues to get ahead. If administrations do not have broad knowledge or broad minds, this is going to be detrimental to the future of the universities. Ladies and gentlemen, let's let natural selection continue and forget the metrics.
Finally, let's remember that no research is small if it is properly done.

D.T. Yew*
The Chinese University of Hong Kong
* Comments represent the views of the author only.

Bjoern Brembs

Posts: 14

June 21, 2010

Ha! Only a fool who slept through Statistics 101 would take the arithmetic mean of data as heavily right-skewed as citation data. Any student in that class could explain to Thomson Reuters why this is a bad idea and why it favors and incentivizes actions such as those by Acta Crystallographica Section A. Well done, Acta, teach everyone some basic undergraduate statistics!

anonymous poster

Posts: 34

June 22, 2010

Some people don't like it, but I mean, it's just a number. It shows how many people actually cited your work; nobody forced them to do that, yet somehow they found it and cited it. It could be crap. If it's a couple, or a dozen, it's random. When it gets to one hundred or even a thousand, it tells you something. I know, don't over-interpret it, but you know, I have to tell you it feels good.

KE Thampi

Posts: 1

June 22, 2010

In my view this relationship is not a relevant one. We should search for relevance among many factors...

anonymous poster

Posts: 28

June 22, 2010

Google Scholar is a better tool for measuring citations individually. The IF of each paper can be determined from its citations in the first two years after publication, if administrations want it.

Ting Wang

Posts: 15

June 23, 2010

The IF of the European Journal of Pharmaceutical Sciences (EJPS) was 3.6 in 2008, but the figure dropped to 2.6 in 2009. This really surprised us. We cannot judge its quality based on IF alone, because the IFs of other journals at the same level did not change so much. But we believe EJPS still has its high profile.
In fact, last year the editor-in-chief of Pharmaceutical Research made a critical comment on the journal IFs released by Thomson Reuters. Please see the details:
http://www.springerlink.com/content/k5202054521l3635/

Nikolay Pestov

Posts: 2

June 23, 2010

Oh...
The impact factor is really dangerous in the hands of enthusiastic bureaucrats. In Russia, some individual salaries and grants are distributed according to the IF of journals. That system was forcefully implemented by the Ministry of Science several years ago despite resistance from the scientists themselves.

anonymous poster

Posts: 28

December 30, 2010

How many iPSC papers have been published in Nature, Cell, Science, and PNAS? How many of them are incremental? The IF just misleads scientists into spending more time doing incremental research, and into paying more attention to these journals when they cite papers.

However, some papers published in these journals simply ignored (did not cite) primary findings and concepts published in other journals. This is an ethical problem.

Mike Waldrep

Posts: 155

December 30, 2010

Interesting! I hope that everyone had a great weekend, a Merry Christmas, is having a great week, has another great weekend, and I hope that they have a Happy New Year!

anonymous poster

Posts: 28

December 30, 2010

Many journals publish both review articles and original papers. This causes overestimation of the IF. PLoS ONE publishes only original papers, so its IF is underestimated.

David Hill

Posts: 41

December 30, 2010

For reasons cited by others, including publication of strings of papers on related subjects, popularity of certain subjects among cloned colonies of academic interest, etc., the IF is one measurement that should be disregarded by all. Einstein's 1905 papers aroused little interest at the time because, as Einstein himself remarked, most academic physicists then were more interested in less important subjects. Who is to say what subject is important?
