Posts filed under ‘Methods/Methodology/Theory of Science’
| Peter Klein |
An interesting piece in The Economist: “Economic history is dead; long live economic history?”
Last weekend, Britain’s Economic History Society hosted its annual three-day conference in Telford, attempting to show the subject was still alive and kicking. The economic historians present at the gathering were bullish about the future. Although the subject’s woes at MIT have been echoed across research universities in both America and Europe, since the financial crisis there has been something of a minor revival. One reason for this may be that, as we pointed out in 2013, it is widely believed amongst scholars, policy makers and the public that a better understanding of economic history would have helped to avoid the worst of the recent crisis.
However, renewed vigour can be most clearly seen in the debates economists are now having with each other.
These include the debates about the long-run relationship between debt and growth initiated by Reinhart and Rogoff, about the historical effectiveness of Keynesian monetary and fiscal policy, and about the role of global organizations like the IMF and World Bank in promoting international coordination.
I guess my view is closer to Andrew Smith’s, that while history should play a stronger role in economics (and management) research and teaching, it probably won’t, for a variety of professional and institutional reasons. Of course, there is a difference between, say, research in economic or business history and “papers published in journals specializing in economic or business history.” In the first half of the twentieth century, quantitative economics was treated as a specialized subfield; now virtually all mainstream economics is quantitative. (The same may happen to empirical sociology, to theorizing in strategic management, and in other areas.)
| Peter Klein |
Jeffrey Selingo raises an important point about the distinction between “public” and “private” universities, but I disagree with his analysis and recommendation. Selingo points out that the elite private universities have huge endowments and land holdings, the income from which, because of the universities’ nonprofit status, is untaxed. He calls this an implicit subsidy, worth billions of dollars according to this study. “Such benefits account for $41,000 in hidden taxpayer subsidies per student annually, on average, at the top 10 wealthiest private universities. That’s more than three times the direct appropriations public universities in the same states as those schools get.”
I agree that the distinction between public and private universities is blurry, but not for the reasons Selingo gives. First, a tax break is not a “subsidy.” Second, there are many ways to measure the “private-ness” of an organization — not only budget, but also ownership and governance. In terms of governance, most US public universities look like exercises in crony capitalism. The University of Missouri’s Board of Curators consists of a handful of powerful local operatives, all political appointees (and all but one lawyers) and friends of the current and previous governors. At some levels, there is faculty governance, as there is at nominally private universities. In terms of budget, we don’t need to invent hidden subsidies; we need only look at the explicit ones. If we include federal research funding, the top private universities get a much larger share of their total operating budgets from government sources than do the mid-tier public research universities. (I recently read that Johns Hopkins gets 90% of its research budget from federal agencies, mostly NIH and NSF.) And of course federal student aid is relevant too.
So, what does it mean to be a “private” university?
| Peter Klein |
Two of my favorite writers on the economic organization of science, Terence Kealey and Martin Ricketts, have produced a recent paper on science as a “contribution good.” A contribution good is like a club good in that it is non-rivalrous but at least partly excludable. Here, the excludability is soft and tacit, resulting not from fixed barriers like membership fees, but from the inherent cognitive difficulty in processing the information. To join the club, one must be able to understand the science. And, as with Mancur Olson’s famous model, consumption is tied to contribution — to make full use of the science, the user must first master the underlying material, which typically involves becoming a scientist, and hence contributing to the science itself.
Kealey and Ricketts provide a formal model of contribution goods and describe some conditions favoring their production. In their approach, the key issue isn’t free-riding, but critical mass (what they call the “visible college,” as distinguished from additional contributions from the “invisible college”).
The paper is in the July 2014 issue of Research Policy and appears to be open-access, at least for the moment.
Modelling science as a contribution good
Terence Kealey, Martin Ricketts
The non-rivalness of scientific knowledge has traditionally underpinned its status as a public good. In contrast we model science as a contribution game in which spillovers differentially benefit contributors over non-contributors. This turns the game of science from a prisoner’s dilemma into a game of ‘pure coordination’, and from a ‘public good’ into a ‘contribution good’. It redirects attention from the ‘free riding’ problem to the ‘critical mass’ problem. The ‘contribution good’ specification suggests several areas for further research in the new economics of science and provides a modified analytical framework for approaching public policy.
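The shift from prisoner's dilemma to pure coordination can be made concrete with a toy example. The payoff numbers below are invented for illustration and are not from the Kealey–Ricketts paper; the point is only the change in equilibrium structure when spillovers differentially benefit contributors.

```python
# Toy 2-player games illustrating the shift Kealey and Ricketts describe.
# Payoff numbers are invented for illustration; they are not from the paper.
# Strategies: C = contribute to the science, N = don't contribute.

# Classic public-good prisoner's dilemma: spillovers reach everyone equally,
# so N strictly dominates C and free-riding is the central problem.
pd_payoffs = {
    ("C", "C"): (3, 3),
    ("C", "N"): (0, 4),
    ("N", "C"): (4, 0),
    ("N", "N"): (1, 1),
}

# Contribution good: spillovers differentially benefit contributors (only
# those who master the material can use the science), so contributing pays
# off when others contribute too -- a pure coordination game whose central
# problem is critical mass, not free-riding.
cg_payoffs = {
    ("C", "C"): (4, 4),
    ("C", "N"): (1, 2),
    ("N", "C"): (2, 1),
    ("N", "N"): (2, 2),
}

def pure_nash_equilibria(payoffs, strategies=("C", "N")):
    """Return the pure-strategy Nash equilibria of a 2-player game."""
    equilibria = []
    for s1 in strategies:
        for s2 in strategies:
            u1, u2 = payoffs[(s1, s2)]
            if all(payoffs[(a, s2)][0] <= u1 for a in strategies) and \
               all(payoffs[(s1, b)][1] <= u2 for b in strategies):
                equilibria.append((s1, s2))
    return equilibria

print(pure_nash_equilibria(pd_payoffs))  # only ('N', 'N'): universal free-riding
print(pure_nash_equilibria(cg_payoffs))  # ('C', 'C') and ('N', 'N'): coordination
```

In the second game, mutual contribution is an equilibrium but so is mutual abstention, which is exactly why the policy question becomes how to assemble a critical mass of contributors (the “visible college”) rather than how to deter free-riders.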
| Peter Klein |
I sometimes worry that the blog format is being displaced by Facebook, Twitter, and similar platforms, but Patrick Dunleavy from the LSE Impact of Social Science Blog remains a fan of academic blogs, particularly focused group blogs like, ahem, O&M. Patrick argues that blogging (supported by academic tweeting) is “quick to do in real time”; “communicates bottom-line results and ‘take aways’ in clear language, yet with due regard to methods issues and quality of evidence”; helps “create multi-disciplinary understanding and the joining-up of previously siloed knowledge”; “creates a vastly enlarged foundation for the development of ‘bridging’ academics, with real inter-disciplinary competences”; and “can also support in a novel and stimulating way the traditional role of a university as an agent of ‘local integration’ across multiple disciplines.”
Patrick also usefully distinguishes between solo blogs, collaborative or group blogs (like O&M), and multi-author blogs (professionally edited and produced, purely academic). O&M is partly academic, partly personal, but we have largely the same objectives as those outlined in Patrick’s post.
See also our recent discussion of academics and social media.
| Peter Klein |
Should academic work be classified primarily by discipline, or by problem? Within disciplines, do we start with theory versus application, micro versus macro, historical versus contemporary, or something else? Of course, there may be no single “optimal” classification scheme, but how we think about organizing research in our field says something about how we view the nature, contributions, and problems in the field.
There’s a very interesting discussion of this subject in the History of Economics Playground blog, focusing on the evolution of the Journal of Economic Literature codes used by economists (parts 1, 2, and 3). I particularly liked Beatrice Cherrier’s analysis of the AEA’s decision to drop “theory” as a separate category. The Machlup–Hutchison–Rothbard exchange helps establish the context.
[T]he seemingly administrative task of devising new categories threw AEA officials, in particular AER editor Bernard Haley and former AER interim editor Fritz Machlup, into heated debates over the nature and relationships of theoretical and empirical work.
Machlup campaigned for a separate “Abstract Economic Theory” top category. At the time of the revision, he was engaged in methodological work, striving to find a third way between Terence Hutchison’s “ultraempiricism,” and the “extreme a priorism” of his former mentor, Ludwig Von Mises (see Blaug, ch.4). He believed it was possible to differentiate between “fundamental (heuristic) hypotheses, which are not independently testable,” and “specific (factual) assumptions, which are supposed to correspond to observed facts or conditions.” The former was found in Keynes’s General Theory, and the latter in his Treatise on Money, Machlup explained. He thus proposed that empirical analysis be classified independently, under two categories: “Quantitative Research Techniques” and “Social Accounting, Measurements, and Numerical Hypotheses” (e.g., census data, expenditure surveys, input-output matrices, etc.). On the contrary, Haley wanted every category to cover the theoretical and empirical work related to a given subject matter. In his view, separating them was impossible, even meaningless: “Is there any theory that is not abstract? And, for that matter, is there any economic theory worth its salt that is not applied,” he teased Machlup. Also, he wanted to avoid the idea that “class 1 is theory, the rest are applied … How about monetary theory, international trade theory, business cycle theory?” He accordingly designed the top category to encompass price theory, but also statistical demand analysis, as well as “both theoretical and empirical studies of, e.g., the consumption function [and] economic growth models of the Harrod-Domar variety,” among other subjects. He eschewed any “theory” heading, which he replaced with titles such as “Price system; National Income Analysis.” His scheme eventually prevailed, but “theory” was reinstated in the title of the contentious category.
| Peter Klein |
Joe Gillis: You’re Norma Desmond. You used to be in silent pictures. You used to be big.
Norma Desmond: I *am* big. It’s the *pictures* that got small.
— Sunset Boulevard (1950)
John List gave the keynote address at this weekend’s Southern Economic Association annual meeting. List is a pioneer in the use by economists of field experiments or randomized controlled trials, and his talk summarized some of his recent work and offered some general reflections on the field. It was a good talk, lively and engaging, and the crowd gave him a very enthusiastic response.
List opened and closed his talk with a well-known quote from Paul Samuelson’s textbook (e.g., this version from the 1985 edition, coauthored with William Nordhaus): “Economists . . . cannot perform the controlled experiments of chemists and biologists because they cannot easily control other important factors.” While professing appropriate respect for the achievements of Samuelson and Nordhaus, List shared the quote mainly to ridicule it. The rise of behavioral and experimental economics over the last few decades — in particular, the recent literature on field experiments or RCTs — shows that economists can and do perform experiments. Moreover, List argues, field experiments are even better than using laboratories, or conventional econometric methods with instrumental variables, propensity score matching, differences-in-differences, etc., because random assignment can do the identification. With a large enough sample, and careful experimental design, the researcher can identify causal relationships by comparing the effects of various interventions on treatment and control groups in the field, in a natural setting, not an artificial or simulated one.
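The identification logic List invokes can be sketched in a few lines. The simulation below is a schematic illustration of why random assignment works, not any specific design from List's work; the effect size, sample size, and confounder are all invented.

```python
import random
import statistics

# Schematic sketch: random assignment identifies a causal effect even in the
# presence of an unobserved confounder. All numbers here are invented.

random.seed(42)

TRUE_EFFECT = 2.0   # causal effect of the intervention on the outcome
N = 10_000          # "large enough sample"

treated_outcomes, control_outcomes = [], []
for _ in range(N):
    ability = random.gauss(0, 1)          # unobserved confounder
    treated = random.random() < 0.5       # coin-flip assignment, independent of ability
    outcome = ability + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 1)
    (treated_outcomes if treated else control_outcomes).append(outcome)

# Because assignment is independent of ability, the simple difference in
# group means is an unbiased estimate of the causal effect -- no instruments,
# matching, or other econometric machinery required.
estimate = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(f"estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```

With the confounder randomized away across groups, the estimate converges on the true effect as the sample grows; this is the sense in which "random assignment can do the identification."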
While I enjoyed List’s talk, I became increasingly frustrated as it progressed, and found myself — I can’t believe I’m writing these words — defending Samuelson and Nordhaus. Of course, not only neoclassical economists but economists of nearly every school, especially the Austrians, have explicitly denied that economics is an experimental science. “History can neither prove nor disprove any general statement in the manner in which the natural sciences accept or reject a hypothesis on the ground of laboratory experiments,” writes Mises (Human Action, p. 31). “Neither experimental verification nor experimental falsification of a general proposition are possible in this field.” The reason, Mises argues, is that history consists of non-repeatable events. “There are in [the social sciences] no such things as experimentally established facts. All experience in this field is, as must be repeated again and again, historical experience, that is, experience of complex phenomena” (Epistemological Problems of Economics, p. 69). To trace out relationships among such complex phenomena requires deductive theory.
Does experimental economics disprove this contention? Not really. List summarized two strands of his own work. The first deals with school achievement. List and his colleagues have partnered with a suburban Chicago school district to perform a series of randomized controlled trials on teacher and student performance. In one set of experiments, teachers were given various monetary incentives if their students improved their scores on standardized tests. The experiments revealed strong evidence for loss aversion: offering teachers year-end cash bonuses if their students improved had little effect on test scores, but giving teachers cash up front, and making them return it at the end of the year if their students did not improve, had a huge effect. Likewise, giving students $20 before a test, with the understanding that they have to give the money back if they don’t do well, leads to large improvements in test scores. Another set of randomized trials showed that responses to charitable fundraising letters are strongly affected by the structure of the “ask.”
To be sure, this is interesting stuff, and school achievement and fundraising effectiveness are important social problems. But I found myself asking, again and again, where’s the economics? The proposed mechanisms involve a little psychology, and some basic economic intuition along the lines of “people respond to incentives.” But that’s about it. I couldn’t see anything in the design and execution of these experiments that would require a PhD in economics, or sociology, or psychology, or even a basic college economics course. From the perspective of economic theory, the problems seem pretty trivial. I suspect that Samuelson and Nordhaus had in mind the “big questions” of economics and social science: Is capitalism more efficient than socialism? What causes business cycles? Is there a tradeoff between inflation and unemployment? What is the case for free trade? Should we go back to the gold standard? Why do nations go to war? It’s not clear to me how field experiments can shed light on these kinds of problems. Sure, we can use randomized controlled trials to find out why some people prefer red to blue, or what affects their self-reported happiness, or why we eat junk food instead of vegetables. But do you really need to invest 5-7 years getting a PhD in economics to do this sort of work? Is this the most valuable use of the best and brightest in the field?
My guess is that Samuelson and Nordhaus would reply to List: “We are big. It’s the economics that got small.”
See also: Identification versus Importance
Peter invited me to reply to [Warren Miller’s] comment, so I’ll try to offer a defense of formal economic modeling.
In answering Peter’s invitation, I’m at a bit of a disadvantage because I am definitely NOT an IO economist (perhaps because I actually CAN relax). Rather, I’m a strategy guy — far more interested in studying the private welfare of firms than the public welfare of economies (plus, it pays better and is more fun). So, I am in a much better position to comment on the benefits that the game-theoretic toolbox is currently starting to bring to the strategy field than on the benefits that it has brought to the economics discipline over the last four decades (i.e., since Akerlof’s 1970 Lemons paper really jump-started the trend).
Peter writes, “game theory was supposed to add transparency and ‘rigor’ to the analysis.” I have heard this argument many times (e.g., Adner et al, 2009 AMR), and I think it is a red herring, or at least a side show. Yes, formal modeling does add transparency and rigor, but that’s not its main benefit. If the only benefit of formal modeling were improved transparency and rigor, then I suspect it would never have achieved much influence at all. Formal modeling, like any research tool or method, is best judged according to the degree of insight — not the degree of precision — that it brings to the field.
I can’t think of any empirical researcher who has gained fame merely by finding techniques to reduce the amount of noise in the estimate of a regression parameter that has already been the subject of other previous studies. Only if that improved estimation technique generates results that are dramatically different from previous results (or from expected results) would the improved precision of the estimate matter much — i.e., only if the improved precision led to a valuable new insight. In that case, it would really be the insight that mattered, not the precision. The impact of empirical work is proportionate to its degree of new insight, not to its degree of precision. The excruciatingly unsophisticated empirical methods in Ned Bowman’s highly influential “Risk-Return Paradox” and “Risk-Seeking by Troubled Firms” papers provide a great example of this point.
The same general principle is true of theoretical work as well. I can’t think of any formal modeler who has gained fame merely by sharpening the precision of an existing verbal theory. Such minor contributions, if they get published at all, are barely noticed and quickly forgotten. A formal model only has real impact when it generates some valuable new insight. As with empirics, the insight is what really matters, not the precision.