A new proposal for citation data
Researchers have proposed a new scheme for ranking the quality or impact of scientific journals that they say is more accurate than the Impact Factor, according to a linkurl:paper;http://www.plosone.org/article/fetchArticle.action?articleURI=info:doi/10.1371/journal published last week in PLoS ONE.
Rather than relying on an average of citations to rate a journal, the system uses a mathematical model to characterize the typical number of citations that papers in specific journals are likely to receive.
Since its inception in the 1960s by the Institute for Scientific Information (now Thomson Scientific), the Impact Factor has become the standard journal ranking system. It's calculated by adding up all the citations a journal's papers receive over a two-year period and dividing that number by the total number of citable papers the journal published during that time.
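The two-year arithmetic described above can be sketched in a few lines. All the numbers here are invented for illustration, not real journal data:

```python
# Hedged sketch of the two-year Impact Factor arithmetic.
# The counts below are hypothetical, not drawn from any real journal.

# Citations received in one year to items the journal published
# in the previous two years:
citations_to_prior_two_years = 4200

# "Citable" papers the journal published in those two years:
citable_papers = 300

impact_factor = citations_to_prior_two_years / citable_papers
print(impact_factor)  # 14.0
```

A handful of very highly cited papers can inflate this average, which is exactly the weakness Amaral's group targets below.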
But linkurl:Luis Amaral;http://amaral.chem-eng.northwestern.edu/ at Northwestern University and colleagues say that method has two major problems. First, it relies on the mean value, which "seems like a very natural number," Amaral explained, but only if the distribution of what one is measuring is normal, i.e. bell-shaped -- not the case for paper citations.
Amaral and his team looked through a database of 23 million papers published in more than 2,000 journals since 1955. The most cited paper in the pool got 200,000 citations, while half of all papers didn't get cited at all. "You have this really broad range, so the mean is a really bad measure to use," he said; if one paper in a given year receives 1000 citations, "no matter if none of the other papers have got many citations, that paper is going to change significantly the mean."
The second problem with impact factor, he said, is that papers in different fields get cited at different rates. In medicine and physics, papers tend to get cited very quickly after they're published, while those in economics or math can take a couple of years to accumulate citations. That's true for individual journals within a field as well, he added.
So they converted the number of citations of each paper to its logarithm, in order to "bring these numbers that are very broad closer to a narrower range." Within specific journals, the logarithms of citations for individual papers did form a normal, bell-shaped distribution. Over time, the likelihood that a paper will get cited decreases, so its citation count settles into a stationary state; the researchers then developed a mathematical model to predict how quickly papers in different journals acquire their "characteristic" number of citations. "That means there's a sort of pattern to a given journal," Amaral said, and that pattern is stable from year to year.
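The effect of the log transform can be illustrated with a toy example. The citation counts below are made up, and this is only a sketch of the general idea, not the paper's actual model:

```python
# Sketch of the log transform described above: citation counts spanning
# several orders of magnitude are compressed onto a narrower scale,
# where a per-journal distribution can look roughly bell-shaped.
# The citation counts below are invented for illustration.
import math
import statistics

citations = [1, 3, 5, 8, 12, 20, 40, 150, 1000]  # raw counts, heavily skewed

# Using log10(n + 1) so that uncited papers (n = 0) could be included too
log_citations = [math.log10(n + 1) for n in citations]

raw_mean = statistics.mean(citations)
log_mean = statistics.mean(log_citations)

print(round(raw_mean, 1))  # 137.7 -- dominated by the single 1000-citation paper
print(round(log_mean, 2))  # 1.32  -- a more "typical" citation level, on the log scale
```

The raw mean is dragged far above what a typical paper in the list receives, while the mean of the logs sits near the middle of the distribution, which is the intuition behind characterizing a journal on the log scale.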
Under their system, journal rankings change somewhat: Trends in Biochemical Science, for example, ranks quite highly according to the Impact Factor (13.863), above journals such as EMBO, Plant Cell, and the Journal of Biological Chemistry, but much lower, and below those others, in the new ranking system. (See data comparing the group's ranking system to the Impact Factor for journals in biochemistry and molecular biology linkurl:here;http://www.the-scientist.com/supplementary/pdf/MOLECULAR_BIOLOGY.pdf and cell biology journals linkurl:here.;http://www.the-scientist.com/supplementary/pdf/CELL_BIOLOGY.pdf )
In a good ranking system, papers from higher ranked journals should have higher numbers of citations than papers from lower ranked journals most of the time -- a criterion that, according to their analysis, their system (called q) meets better than the Impact Factor, Amaral said.
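That criterion can be sketched as a simple pairwise comparison: for two journals, count how often a randomly chosen paper from the higher-ranked one out-cites a randomly chosen paper from the lower-ranked one. The citation lists below are hypothetical, and this is an illustration of the criterion's spirit, not the paper's exact q metric:

```python
# Sketch of the ranking criterion described above. For every pair of
# papers (one from each journal), check whether the paper from the
# higher-ranked journal has more citations. The citation lists are
# invented for illustration.
from itertools import product

journal_a = [5, 12, 18, 30, 45]  # hypothetical citations, "higher ranked"
journal_b = [1, 2, 4, 9, 15]     # hypothetical citations, "lower ranked"

wins = sum(1 for a, b in product(journal_a, journal_b) if a > b)
fraction = wins / (len(journal_a) * len(journal_b))
print(fraction)  # 0.88 -- journal A's papers out-cite journal B's 88% of the time
```

A ranking system scores well under this test when such fractions are high for most pairs of journals it orders.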
Jim Pringle, vice president of product development at Thomson Scientific, wrote in an emailed statement in response to the paper: "Thomson Scientific encourages experimentation and development of new citation-based methodologies and metrics. This particular article applies complex new mathematical techniques in a potentially interesting way. We look forward to the methodological discussion that will no doubt ensue." Pringle also directed readers to journals such as Scientometrics and the Journal of the American Society for Information Science and Technology (JASIST), and Thomson's recently launched linkurl:Citation Impact Forum,;http://scientific.thomson.com/citationimpactforum/ for further insight into rankings.
What do you think about this method -- is it better or worse than the Impact Factor? Tell us in a comment; we promise not to take it personally. (The Impact Factor was developed by Eugene Garfield, who also founded The Scientist.)
March 4, 2008
Due to the Impact Factor, many researchers have dedicated themselves to it: they want to publish papers that get many citations. I think most of them go about it the wrong way, focusing on fields where it is easy to publish papers. In China this problem is very serious: professors must have a certain number of papers to be qualified, and we postgraduates must have papers to graduate, and those papers must have a certain number of citations.

I think it is time for a new rule to evaluate researchers' work.
March 5, 2008
The impact factor is outdated.

a) It is not reproducible:
http://www.jcb.org/cgi/content/full/179/6/1091

b) It is open to manipulation: see above and also:
http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pmed.0030291

It is also the single metric by which we evaluate scientists and grants. Thomson Scientific has, up until now, been a monopolist. The current state is absurd and irrational. (Summary here)

Thomson Scientific's forum, BTW, is a direct response to the JCB article linked above.

As of this year, there is a second contender for journal rank: ScImago.

But isn't it grotesque that in this day and age scientists publish in different journals instead of a single, fully searchable and cross-referenced, peer-reviewed database?

If overnight all journals were wiped out and you were king for a day, would you recreate approx. 20,000 different scholarly journals? With today's technology, would you even create 2?
March 7, 2008
Review articles, like those in the "Trends" journals, should not count as research publications, because typically the research has already been published elsewhere, and much or most of it was done by researchers other than the authors. It amounts to publishing material twice while acquiring credit for work done by the people cited in the review. Reviews are rarely multi-author. The high citation counts, and hence impact factors, for reviews arise because they save harassed or lazy researchers the need to read tedious research publications, and in doing so they reduce the citations due to the original researchers. Review journals are high-level textbooks, and may be indications of teaching ability and esteem, but should not rank as research.
Hugh Fletcher