Are Journal Impact Factors Reliable?
3 January 2008 at 10:20 am Peter G. Klein 2 comments
| Peter Klein |
Not really, according to the RePEc blog (via Newmark). Thomson Scientific (formerly ISI) uses an imprecise and inconsistent method to compute journal impact factors and, even worse, refuses to release the raw data so that the scores can be independently verified. Journals typically require authors to make their data public as a condition of publication; why, then, rely on rankings based on hidden data? Writes RePEc: “[A]ll of us should treat impact factors and citation data with considerable caution. Basing journal rankings, tenure, promotion, and raises on uncritical acceptance of [these] data is a poor idea.”
It would be nice to have more information about the magnitude and direction of the potential bias. Do these problems affect the rank ordering of journals, or simply the precision of the point estimates? Is there any research on this problem?
Entry filed under: - Klein -, Institutions.
2 Comments
1. measuring impact « orgtheory.net | 3 January 2008 at 12:22 pm
[…] Peter Klein and the RePEc blog, in this article a team of biologists take on Thomson’s venerable citation […]
2. More on Impact Factors « Voir Dire | 6 January 2008 at 1:54 am
[…] Hat tip to Peter Klein over at Organizations & Markets. […]