
Scientists Take Aim at Impact Factor

A declaration asks the scientific community to put less weight on the metric, widely used to evaluate journals’ prestige.

May 20, 2013

FLICKR, ELVERT BARNES

A declaration asking that the research community pay less attention to impact factor now has 484 signatures, 87 of them from institutions and 397 from individual members of the scientific community. Impact factor is calculated by Thomson Reuters, based on academic journals’ citation rates, as a measure of prestige. The document, called the San Francisco Declaration on Research Assessment (DORA), asks that scientists, funding bodies, and others stop using the number as “a surrogate measure of the quality of individual research articles.”
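Thomson Reuters has not disclosed its full methodology (see below), but the commonly cited definition is a two-year citation ratio. The sketch below illustrates that definition in Python; it is not the official calculation, and the journal figures in it are invented for illustration.

    # A minimal sketch of the commonly cited two-year impact factor
    # formula; not Thomson Reuters' official method, and all figures
    # below are hypothetical.
    def impact_factor(citations_this_year: int, citable_items: int) -> float:
        """A journal's 2013 impact factor: citations received in 2013
        to items it published in 2011-2012, divided by the number of
        citable items (articles and reviews) published in 2011-2012."""
        return citations_this_year / citable_items

    # Hypothetical journal: 150 citable items in 2011-2012, cited
    # 1,200 times during 2013.
    print(impact_factor(1200, 150))  # 8.0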

DORA, set into motion at the Annual Meeting of the American Society for Cell Biology (ASCB) in December and spearheaded by the society, argues that, within journals, most citations are likely to come from relatively few papers, so aggregate impact factors do not reflect an individual paper’s merit. Impact factors also vary by field and do not differentiate between review papers and original research. And the emphasis on the metric encourages editors to enact policies that drive up their impact factors, sometimes artificially, the declaration said.
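As a toy illustration of that skew (all numbers here are invented): in a hypothetical 50-paper journal where two papers draw most of the citations, the impact-factor-style mean says little about the typical paper.

    # Toy example of citation skew: two highly cited papers dominate
    # the journal-wide average. All numbers are invented.
    citations = [120, 45] + [2] * 48  # 50 papers in total

    mean = sum(citations) / len(citations)
    median = sorted(citations)[len(citations) // 2]

    print(f"impact-factor-style mean: {mean:.1f}")  # 5.2
    print(f"median paper's citations: {median}")    # 2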

The declaration also asks that measures of research impact be transparently calculated and openly available for anyone to use. Thomson Reuters has not explained its full methodology, and its data require permission to use.

“We, the scientific community, are to blame—we created this mess, this perception that if you don’t publish in Cell, Nature, or Science, you won’t get a job,” Stefano Bertuzzi, executive director of the ASCB, told Nature. “The time is right for the scientific community to take control of this issue.”

Many organizations involved in scientific publishing have signed the declaration, including the Public Library of Science (PLOS), EMBO, eLife, and the American Association for the Advancement of Science (AAAS), the publisher of Science. Bruce Alberts, editor-in-chief of Science, wrote in an editorial that “Impact Factor mania makes no sense.” He argued that it discourages journals from publishing research from fields that tend to get cited less, inundates journals like Science with inappropriate submissions, and discourages risky and innovative research.

Nature did not sign the declaration. Editor-in-Chief Philip Campbell told a reporter from his organization’s Nature News Blog that “the draft statement contained many specific elements, some of which were too sweeping for me or my colleagues to sign up to.”

Thomson Reuters said in a statement that it didn’t see the declaration as a blanket condemnation of impact factors themselves, but rather how they are used. “Thomson Reuters continues to encourage publishers, researchers, and funders to consider the correct use of the many metrics available, including the Journal Impact Factor and data from the Web of Science, when performing research assessments.”


Comments

Mehrishi

Posts: 5

May 20, 2013

The problem really arises from the assumption that MSS (especially those with novel findings) are judged thoroughly and get published. I have seen MSS with serious flaws by 'names' get published: e.g., blood chemistry data from 3-6 donors reaches a flawed conclusion but is published, while another MS with data on 40 donors and thousands of cells, reaching the opposite conclusion, is turned down.

That would not have happened 30 years ago.

It is against the laws of simple logic to assume a point and then forward an argument to prove it; if you assume, the argument is fallacious.

The same assumption operates inside the journals: editorial assistants given the title of 'senior editors' work for the busy editors, who support their assistants to the hilt, right, wrong, or not quite right.

This is what has been affecting MSS publication, and also retraction rates for flawed MSS, at Science, PNAS, Nature, and others.

2. The change in scientific publishing and editing that has taken place since the days of John Maddox, following the explosion of production across multiple journals, is that busy editors-in-chief now rely on preliminary screeners with 5-10 years' post-PhD experience.

In the 'old days' the editors chose the reviewers, specialists with deep and wide knowledge, and the assessment was thorough, coming from real experts.

Sadly, the preliminary screeners, without really deep and wide knowledge, look at MSS and, I suspect, at miraculous/exciting new titles and findings. Judging from some of their comments, they are frankly not quite able to assess them, and the editors give their preliminary screeners almost unquestioned blind support (as on a shop floor).

3. Such preliminary screeners are turning down MSS with novel findings and novel technologies, quite unable to cope.

4. This explains why, often, when a practising scientist/clinician is an editor, he/she overrules the preliminary screeners and even the referees!

I have noticed this on at least half a dozen MSS of my own and some of others.

The conclusion is simple: repair/reform the preliminary screening that blocks MSS because of the screeners' limitations in judging novel findings. If the findings are really new or controversial, the screeners should be conscious/bold/just/humble enough to acknowledge it, and the editors should be more industrious with the pile of MSS that look 'difficult', checking whether there are real merits in MSS 'rejected' by the preliminary screener rather than rubber-stamping the judgement.

I have been refereeing MSS and $m grant applications where such discrepancies are obvious.

 

Dr JN Mehrishi, PhD, FRCPath

dubert

Posts: 1

May 24, 2013

science does not progress when one promotes people who specialize in generating repetitious least-publishable units and/or in knowing best how to game the bureaucracy and how to please failed-ph.d. kingmakers at major journals, the mafiosos at NIH/NSF study sections, and their peers by citing them or giving them openings for further façade publications.

science progresses when important scientific breakthroughs are made.

therefore whatever contributes to increasing the probability of such breakthroughs is an important contribution to science (this includes onerous teaching beyond the textbook).

i propose to evaluate scientists and their output according to how many established theories (or "scientific" fads) they have refuted, how many seminal hypotheses and crucial new questions they have proposed, how many breakthrough new methods they have developed, etc.

5, 10, and 15 years after the ph.d., the scientist would write his/her own explicitly argued and heavily footnoted evaluation describing his vision and merits as breakthrough thinker and scientist by commenting explicitly on his established-theory refutations, novel hypotheses and questions proposed, breakthrough new methods developed, etc., and by contrasting everything to how things were before his work.

the factuality of the listed results and presented context and the relevance that the evaluated person attributes to the topics and results mentioned in the self-evaluation would then be critiqued by

a) a group of experts recommended and justified by the evaluated person and

b) a group of international experts chosen by a panel of national experts themselves chosen by the country's professional organization.

(this could be refined of course; the most important thing is to avoid both invidious and crony reviewing).

the two reviews would then be exchanged between groups and contradictions would be eliminated.

those who would come up empty-handed would start performing more and more work for others who have delivered breakthrough work in the past (and the technical training of such "support specialists" would be augmented, and everybody would get paid the same to avoid careerists).

these "specialists" would also carry out work for starting postdocs.

say, one would start working 25%, 50%, 75%, and 100% for others after coming up empty-handed at 5, 10, 15, and 20 years....

of course, after delivering an important breakthrough ("hard work" and a "steady output" do not qualify as such), one would regain all of the "lost" ground (quotation marks because it must be a nightmare to have to feign that one is a creative scientist when one is not, especially if one is paid the same either way).
