Measuring Research Productivity
16 January 2007 at 2:05 pm Peter G. Klein 5 comments
| Peter Klein |
The Chronicle of Higher Education reports rankings of 7,294 PhD programs in 104 disciplines at 354 US universities. The rankings are from the Faculty Scholarly Productivity (FSP) Index computed by Academic Analytics. Unlike the Business Week and U.S. News & World Report rankings, which are based largely on subjective peer evaluations, the FSP purports to be an objective performance index. (I’m reminded of Frank Knight’s reported interpretation of Kelvin’s dictum: “If you can’t measure, measure anyway.”)
A few surprises: In economics, Johns Hopkins, Duke, and UC-San Diego make the top ten, ahead of Chicago, Stanford, and Berkeley. Iowa ranks #5 in management, behind heavyweights Columbia, Wharton, MIT, and Northwestern (Cornell and Florida also make the top ten). The top ten in finance includes Emory, Temple, Boston College, and Houston. No surprises in sociology (at least to my untrained eye).
Thanks to Rich Vedder for the pointer. Rich interprets these data as evidence that the traditional, subjective rankings are biased toward the elite universities. On the other hand, quantitative metrics for intangibles like knowledge creation can’t possibly incorporate quality in a satisfactory way.
Entry filed under: - Klein -, Institutions, Teaching.
5 Comments
1. Chihmao Hsieh | 16 January 2007 at 2:18 pm
My oh-so-soon-to-be alma mater WUSTL is ranked 3rd on this list in “Business Administration.” I wonder how much it’d cost me to get Business Week and USNWR to talk to the Chronicle.
Rankings are so incredibly silly. Didn’t a researcher several years ago find that many of the most popular rankings are negatively autocorrelated, in order to sell magazines?
2. sozlog » Blog Archive » in-com-men-su-ra-bi-li-ty … | 19 January 2007 at 4:23 pm
[…] and organizations and markets discuss evaluation of research productivity and their effects on academia, based on the latest […]
3. Eric H | 26 January 2007 at 12:33 am
I wonder who is the most efficient in terms of (quantitative measure of output) / (dollar), where those dollars could be (a) the departmental budget, (b) tuition, or (c) research grants.
Also, I wonder if you would like to qualify, “On the other hand, quantitative metrics for intangibles like knowledge creation can’t possibly incorporate quality in a satisfactory way.” Really? I have a passing interest in the effectiveness of money spent by the occasional millionaire, especially someone like Michael Milken or Augie Nieto, who is stricken by a disease and decides to find its cure. They aren’t scientists, so what makes them or us think this is money well spent? On the other hand, they do know how to build organizations, raise money, and direct other kinds of research or find people who can.
I agree that it would be extremely difficult to measure their effectiveness, especially when you are looking for a creative breakthrough where there are no guarantees that an answer exists. When Edison denied that his research was a failure on the basis that he now knew of 999 ways not to make a light bulb, how could we, ex ante, know whether he was right or just holding off his creditors?
4. Peter Klein | 26 January 2007 at 10:08 am
Eric, I think your third paragraph answers your second. Certainly one can invest in innovation and will want some measure of returns. Not only individual philanthropists, but also corporate R&D departments face this problem all the time. But the available proxies are crude. Of course, in the private sector there is an ultimate, long-term measure of innovation performance: profit. Not so for universities and non-profits. Academic research doesn’t have to pass a market test (fortunately for most academic researchers!).
5. Eric H | 27 January 2007 at 11:49 am
“Eric, I think your third paragraph answers your second.”
Yes, but I’m not satisfied with it. 8~)
(Is there an emoticon for whining?)