Hoisted from the Comments: Journal Impact Factors
| Peter Klein |
An old post of Nicolai’s on journal impact factors is still attracting attention. Two recent comments are reproduced here so they don’t get buried.
There is an interesting paper by Joel Baum on this, “Free Riding on Power Laws: Questioning the Validity of the Impact Factor as a Measure of Research Quality in Organization Studies,” Organization 18(4) (2011): 449-66. He does a nice analysis of citations and shows (what many of us suspected) that citations are highly skewed toward a small subset of articles, so an impact factor based on a mean citation rate is misleading. He concludes that “Impact Factor has little credibility as a proxy for the quality of either organization studies journals or the articles they publish, resulting in attributions of journal or article quality that are incorrect as much or more than half the time. The clear implication is that we need to cease our reliance on such non-scientific, quantitative characterisation to evaluate the quality of our work.”
To which Ram Mudambi responds:
This analysis was already done in a paper we wrote in 2005, finally now published in Scientometrics.
We have the further and stronger result that in many years, the top 10 percent of papers in A- journals like Research Policy outperform the top 10 percent of papers in A journals like AMJ.
So it is the paper that matters, NOT the journal in which it was published. Evaluating scholars on the basis of where they have published is pretty meaningless. Some years ago, we had a senior job candidate with EIGHTEEN real “A” publications; it turned out he had only 118 total citations on Google Scholar. So his work was pretty trivial, even though it appeared in top journals.
See also the good twin blog for further discussion.