
The impact of impact factors

Updated - August 08, 2013 03:28 am IST

Science journals. Photo: V. Ganesan.


“Lay” readers (those who do not practice science as a profession) may or may not know that we scientists too have a rating system by which we judge ourselves, the equivalent of the ATP rankings of tennis players or the Nielsen ratings of TV channels. We use something called the impact factor, or IF. It was originally devised to help librarians decide which journals are read more often than others, and thus to set priorities in their subscription budgets. Over time, however, it has been transformed into a measure of an individual scientist’s productivity. This aberration has been increasingly criticized in the recent past by scientific societies, and efforts are on to correct it. How did this happen?

About 40 years ago, publishers of science journals and librarians came up with a formula to figure out which journals are read more often than others. One way to do this is to count how many times the articles a given journal (call it J) published in, say, the last couple of years are cited this year by scientists in their research publications. A paper of value is of course cited more often by colleagues who work in the same area than other papers in the same journal are. One can quantify this: suppose there were X such citations this year of the articles J published in those two years, and that J published a total of Y articles in that period. The impact factor of J is then calculated as X/Y, the average number of citations per article. The higher the IF of a journal, the more coveted it is.
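A minimal sketch of this arithmetic, in Python, with all numbers invented purely for illustration:

    # Impact factor of a hypothetical journal J, per the X/Y formula above.
    # Every figure here is made up for the sake of the example.
    citations_this_year = 600  # X: citations this year to articles J published in the last two years
    articles_published = 100   # Y: articles J published in those two years
    impact_factor = citations_this_year / articles_published
    print(impact_factor)       # 6.0 -- "journal J has an IF of 6"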

Note the interesting point here. Which of the Y papers was cited so many times? Was it mine? If so, I can pride myself that my work has been of some value. If not, then I am one of the “run-alongs”, one in the bandwagon. A journal’s IF says nothing about any individual article published in that journal; it is an average. Yet, over time, we scientists go around telling others: “I have my paper published in journal J with an IF of 6 or 30 (or whatever)”. Is it my IF or the journal’s?

And academies, juries for prizes, promotion boards and recruitment committees too have been using the IF as the metric of an individual scientist’s value. It is not that we did not realize this aberration all along. Looking at the metric is an easy way for selection committees to shirk work; they are too lazy to read the candidate’s actual papers, so the IF value is enough! This fallacy has been pointed out time and again. My colleague Dr J. Gowrishankar once showed how one can artificially inflate the number of citations. Another pointed out that a paper had been cited not because it was an advance in the field but because it was wrong, and people cited it to warn others!

A forceful criticism of this aberration has been made by Dr Mark Johnston, editor-in-chief of the journal Genetics, in its August 2013 issue. He says the blame lies with us, the scientists: “if we did not value so much the high IF journals, our mostly junior colleagues would not pay so much attention to IF”. And he quotes the famous satirical line from the Pogo comic strip: “we have met the enemy and he is us”!

Professor Johnston’s editorial follows what is known as the San Francisco Declaration on Research Assessment, brought out by a group of editors and publishers of scholarly journals in December 2012. Freely accessible at http://am.ascb.org/dora/, and worth reading by all, it makes a strong case for eliminating the use of journal-based metrics, particularly in funding, appointment and promotion decisions; for assessing research on its own merits rather than on the journal in which it is published; and for publishers to greatly reduce emphasis on the journal IF as a promotional tool. The declaration has been approved by 78 organizations and granting agencies around the world (none yet from India).

Turning to the Indian scene, Professor Subhash Lakhotia of BHU, writing in Current Science (10 August 2013), makes the further point that this unnatural preference for the IF has caused Indian science journals to lose steam: “Armed with the IF the ‘experts’ rapidly cut out ‘good’ from ‘poor’ or ‘bad’”. Lakhotia is right: even in prize-awarding committees and in promotions, nominees are asked to separate their publications into ‘Indian’ and ‘International’ journals, with this used as a criterion of judgment. Here too, the enemy is us.

How then does one measure or quantify a scientist’s productivity and the impact of his research? By using the number of times each of his publications has been cited by peers. Professor Jorge Hirsch of the University of California, San Diego quantifies this using what is termed the h-index, which is far superior to the IF, since it focuses on the paper and not the journal. A scientist has an h-index of h if h of his papers have been cited at least h times each. But that is another story.
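As a rough sketch of that definition (my illustration, not Hirsch’s original formulation), the h-index can be computed from a scientist’s citation counts as follows:

    # Illustrative h-index computation: the largest h such that the
    # scientist has h papers with at least h citations each.
    def h_index(citations):
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:   # the rank-th most-cited paper has >= rank citations
                h = rank
            else:
                break
        return h

    # Invented citation counts for five papers: the h-index is 3, since
    # three papers have at least 3 citations each, but not four with 4.
    print(h_index([10, 8, 5, 3, 1]))  # 3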

dbala@lvpei.org

