Too Much Research

8 July 2010 at 2:50 pm

| Peter Klein |

Bill McKelvey is one of the signatories to a controversial Chronicle piece that ran last month, “We Must Stop the Avalanche of Low-Quality Research.” The five authors, from a variety of academic disciplines, argue that “the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs.” As evidence they point to increases in the numbers of journals, journal pages, and authors and decreases in average citation rates.

[I]nstead of contributing to knowledge in various disciplines, the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed. Even if read, many articles that are not cited by anyone would seem to contain little useful information. The avalanche of ignored research has a profoundly damaging effect on the enterprise as a whole. Not only does the uncited work itself require years of field and library or laboratory research. It also requires colleagues to read it and provide feedback, as well as reviewers to evaluate it formally for publication. Then, once it is published, it joins the multitudes of other, related publications that researchers must read and evaluate for relevance to their own work. Reviewer time and energy requirements multiply by the year. The impact strikes at the heart of academe.

I think this assessment is generally on target, in my own field at least. What percentage of the articles in your favorite scholarly journal do you read, let alone remember? How much of the research in your field really adds value? Of course, search tools make it easier to find relevant information, so I’m not sure the point about lit reviews is all that compelling. Still, it does seem increasingly difficult to sort wheat from chaff.

I’m less impressed with the authors’ proposed solutions: limiting the number of publications that can be considered for promotion and tenure, making greater use of impact factors, and enforcing tighter page restrictions. These strike me as superficial fixes. The main problem is the vast increase in the scale and scope of the “scientific” enterprise itself, almost all of it due to public funding. There are simply too many universities and institutes, too many research faculty, too many granting agencies, too much research money. It’s a self-perpetuating process, driven almost exclusively by supply-side considerations (who on earth “demands” the output of most English departments?). Some of you will be shocked by the claim that there’s “too much” research money, particularly in today’s austere climate. But I mean too much relative to some social optimum, not too much relative to what university professors want.

Why would we expect this kind of system to produce high-quality research? Perhaps it’s a miracle that any good work gets done at all.

13 Comments

  • 1. Joe Mahoney  |  8 July 2010 at 7:46 pm

Expecting that even a majority of papers provide something “new” greatly misses the point. To the extent that writing by all professors improves their activity of mind and contributes to classroom teaching for the young, that is justification enough for MORE writing. Publications are a monitoring device to make sure folks stay active and write. Originality by some is the icing on the cake. It is not the cake itself. Put differently, espoused rhetoric aside, a few elite university professors publishing pathbreaking work is NOT the only purpose that journal publications serve.

  • 2. srp  |  8 July 2010 at 9:50 pm

    “But I mean too much relative to some social optimum, not too much relative to what university professors want.”

    I’m having trouble wrapping my mind around this distinction.

  • 3. Yan Chen  |  8 July 2010 at 11:29 pm

If Joe’s comments on the purposes of journal publications are correct, restricting the number of articles, as suggested by the original authors, may be good for all of us. After all, we don’t need researchers to publish so many articles just to show that they read journal articles.

  • 4. Steve Phelan  |  8 July 2010 at 11:52 pm

    I agree with Joe that a major function of research is to ensure that academics maintain currency with new ideas. I call this keeping up with the field ‘scholarship’ and distinguish it from ‘discovery-oriented’ research.

The question is whether the investment in doing ‘research’ is the most efficient way to ensure scholarship. I suspect not. It’s a ‘folly of rewarding A while hoping for B’ situation. Research, and chasing A-grade pubs, is now the raison d’être of most of the top 200 business schools. Not teaching, not relevance to practice, just chasing the magical A pub that very few will ever read.

The problem is that a pub is a relatively simple measure: it is portable across campuses and has legitimacy in the eyes of other university fields such as science and engineering.

I actually miss the days when b-schools were populated with consultants and practitioners. I favor a blend of rigor and relevance. We should be moving closer to practice, not further away.

    My prediction is that research professorships in business schools will become increasingly rare in the next fifty years (without grant funding). The price of an MBA is soaring due to the high costs of faculty doing research with low teaching loads. The cost-benefit calculation is beginning to look pretty shaky.

    Just my $0.02 worth,
    Steve

  • 5. michael webster  |  9 July 2010 at 10:16 am

Perhaps we could work towards an institutional version of the Kurt Goedel solution: Goedel published fewer than 20 papers, all of them highly influential.

    See: http://plato.stanford.edu/entries/goedel/

  • 6. k  |  9 July 2010 at 2:42 pm

“[O]nly adds to the bulk of words and numbers to be reviewed”? Why? Who says that you must review every paper ever published? Someone said (sorry, I don’t have the quote) that the worst papers are the most quoted, because everyone is forced to begin with the wrong thesis in order to answer it.

  • 7. Gregory Rader  |  9 July 2010 at 3:08 pm

This is just one instance of a much larger issue, referred to as “filter failure” by Clay Shirky. As information becomes more accessible and the barriers to publishing (crowdsourcing, blogs, comments, social networking, etc.) decrease, the result is an explosion of content that our existing filtering methods simply can’t handle. Look at the comments on any controversial article on a mainstream website and you will find hundreds of people repeating the same points.

The solution, however, isn’t to discourage content production but to design better collaboration methods that reduce redundancy, and better filtering methods that remove or aggregate redundant content while elevating original content.

*This comment generalizes away from the case-specific point about public funding encouraging redundant content… I am no more a fan of pointlessly esoteric research than anyone else.

  • 8. David Hoopes  |  9 July 2010 at 4:14 pm

    I agree that a good deal of published research is not that good. Yet, opinions vary greatly as to what is good and what is not. I can think of many examples of work that is quite popular that does little for me. And, I can think of major streams of work that in my opinion are based on very shaky ground in terms of methods, theory, or both. It’s difficult to see how pursuing McKelvey and friends’ agenda really addresses the underlying issue of improving research quality.

For better or worse, the coin(s?) of the realm is publication volume and journal quality. At least in management, it is very easy to find severe problems with the “quality” journals. I’m not sure others share my view on the lack of quality research in quality journals. I don’t see any easy, obvious answers.

    It seems to me the academy has other more pressing problems.

  • 9. srp  |  9 July 2010 at 5:26 pm

    Beware of citation counts as a measure of research value. At least in management, most citations are pro forma hat tips rather than substantive critique or use as a building block. Often citations misstate the original source material. Heavily cited articles are often the winners of a “superstars” network competition where one of a bunch of similar works gets an early citation bump and then accumulates citations by positive feedback.

    The only impact that you can truly be sure of is the impact on your own research and thinking. How often do you refer in your mind, consciously or unconsciously, to a given paper? There are lots of obscure papers and books that I internally “cite” as I think about problems, but they would never find their way into a paper because they’re too far back in my background reasoning process to seem relevant.

  • 10. David Hoopes  |  9 July 2010 at 6:56 pm

    No argument here.

  • 11. Rafe  |  10 July 2010 at 5:55 pm

An article (Econlib?) a few years ago surveyed 12 months of articles in a journal, asking a series of questions: Is a problem clearly formulated that the work addresses? Is a theory or hypothesis clearly formulated for testing? Did the design actually permit the results to test the theory? And so on. About 12% of the articles cleared all the hurdles.

  • 12. YSK  |  11 July 2010 at 1:16 am

    I agree with what David says above. As the quip from advertising goes: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” Similarly, over 90% of published research is a waste, but which 90%?
    And who gets to decide? For example, look at this statement from the article:
    “In addition, as more and more journals are initiated, especially the many new “international” journals created to serve the rapidly increasing number of English-language articles produced by academics in China, India, and Eastern Europe, libraries struggle to pay the notoriously high subscription costs.”
    It does seem that these authors want only their beloved US journals to be in the accepted list!
As other comments at the Chronicle website pointed out, these authors seem to prefer a top-down solution, which I am sure is not popular with readers of this blog.

  • 13. c  |  13 July 2010 at 6:01 am

To respond to YSK’s comment, I think it’s difficult to judge which 90% is a waste, insofar as half of that 90% may end up being cited in the distant future rather than immediately.

    Can’t we just get journals to charge authors for submitting a manuscript? And then distribute those funds to reviewers for their efforts? Don’t some journals already do this?

    What I picture is a system not unlike professional sports and the “minor leagues.” Scholars of questionable reviewing ability start off their reviewing career at a low-tier journal, where they are required to review for free, for reputational reasons. If they exhibit strong reviewing skills, they are nominated to review for top-tier journals, where compensation is doled out. Ultimately, the idea is that only the most intrinsically motivated reviewers are allowed to review manuscripts for (and get paid by) top-tier journals, and that they are so deeply intrinsically motivated that they aren’t possibly crowded out by extrinsic incentives.

And I’d submit that any favoritism or bias in such a reviewing system already exists today under the current system.
