More on Journal Rankings
by Peter Klein
The HES (History of Economics Society) listserv is buzzing over ERIH, the European Science Foundation’s ranking of journals in the history and philosophy of science. Writes Deirdre McCloskey, for example:
Among [the] many bad effects [of ranking journals] is to encourage people to rank another person not by reading and considering (a sample of) her work but by counting how many Grade A journals she has contributed to. It takes scientific and scholarly judgment out of the hands of actual readers of the actual work and puts it into the hands of the median voter in a beauty contest. It leads to mediocrity in science, such as the practice of using t tests as the sole criterion of importance in statistical studies. The beauty contest is based on rumor, not reading. When reputation rankings include a dummy journal with a plausible sounding name the respondents claim familiarity with the journal and firmly rank it. Don’t we need to stop this corrupt practice, not encourage it?
Other commentators largely agree. David Colander notes that productivity rankings of economists rely on a proxy (journal articles) that “are only a small portion of economists’ total output (which includes teaching, other research, and service) (I estimate 20%) and that emphasis in one reduces emphasis in the others, so the probability of the rankings carrying through is exceedingly small, even if there is positive correlation with other activities.”