Posts filed under ‘Methods/Methodology/Theory of Science’
| Peter Klein |
Scientific progress, like economic progress, largely consists of combining and recombining existing resources and knowledge. At least that’s the way I interpret a new paper from Santa Fe Institute researchers Hyejin Youn, Luis Bettencourt, Jose Lobo, and Deborah Strumsky, “Invention as a Combinatorial Process: Evidence from US Patents” (via Steve Fiore):
Invention has been commonly conceptualized as a search over a space of combinatorial possibilities. Despite the existence of a rich literature, spanning a variety of disciplines, elaborating on the recombinant nature of invention, we lack a formal and quantitative characterization of the combinatorial process underpinning inventive activity. Here, we use US patent records dating from 1790 to 2010 to formally characterize invention as a combinatorial process. To do this, we treat patented inventions as carriers of technologies and avail ourselves of the elaborate system of technology codes used by the United States Patent and Trademark Office to classify the technologies responsible for an invention’s novelty. We find that the combinatorial inventive process exhibits an invariant rate of ‘exploitation’ (refinements of existing combinations of technologies) and ‘exploration’ (the development of new technological combinations). This combinatorial dynamic contrasts sharply with the creation of new technological capabilities—the building blocks to be combined—that has significantly slowed down. We also find that, notwithstanding the very reduced rate at which new technologies are introduced, the generation of novel technological combinations engenders a practically infinite space of technological configurations.
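The "practically infinite space of technological configurations" is easy to see: even a small inventory of technology codes yields an enormous number of possible combinations, and the count explodes as the inventory grows. A minimal sketch of that arithmetic (this is my illustration of the combinatorial point, not the authors' method, and the code-inventory sizes are hypothetical):

```python
from math import comb

def n_combinations(n_codes: int, max_size: int) -> int:
    """Count the distinct technology combinations of size 1..max_size
    that can be drawn from an inventory of n_codes codes."""
    return sum(comb(n_codes, k) for k in range(1, max_size + 1))

# Illustrative only: even 100 codes combined four at a time
# already give millions of possible configurations, so a slowdown
# in new codes need not slow the growth of new combinations.
print(n_combinations(100, 4))
```

This is why the paper's finding is not paradoxical: the building blocks can arrive more slowly while the space of combinations remains effectively inexhaustible.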
Or, as the Santa Fe press release puts it, “Most new patents are combinations of existing ideas and pretty much always have been, even as the stream of fundamentally new core technologies has slowed.” See also the authors’ earlier paper, “Atypical Combinations and Scientific Impact.”
| Peter Klein |
Great illustration from the Mad Scientist Confectioner’s Club (via Fan Xia).
| Peter Klein |
An interesting piece in The Economist: “Economic history is dead; long live economic history?”
Last weekend, Britain’s Economic History Society hosted its annual three-day conference in Telford, attempting to show the subject was still alive and kicking. The economic historians present at the gathering were bullish about the future. Although the subject’s woes at MIT have been echoed across research universities in both America and Europe, since the financial crisis there has been something of a minor revival. One reason for this may be that, as we pointed out in 2013, it is widely believed amongst scholars, policy makers and the public that a better understanding of economic history would have helped to avoid the worst of the recent crisis.
However, renewed vigour can be most clearly seen in the debates economists are now having with each other.
These debates include those over the long-run relationship between debt and growth, initiated by Reinhart and Rogoff; over the historical effectiveness of Keynesian monetary and fiscal policy; and over the role of global organizations like the IMF and World Bank in promoting international coordination.

I guess my view is closer to Andrew Smith’s: while history should play a stronger role in economics (and management) research and teaching, it probably won’t, for a variety of professional and institutional reasons. Of course, there is a difference between, say, research in economic or business history and “papers published in journals specializing in economic or business history.” In the first half of the twentieth century, quantitative economics was treated as a specialized subfield; now virtually all mainstream economics is quantitative. (The same may happen to empirical sociology, to theorizing in strategic management, and to other areas.)
| Peter Klein |
Jeffrey Selingo raises an important point about the distinction between “public” and “private” universities, but I disagree with his analysis and recommendation. Selingo points out that the elite private universities have huge endowments and land holdings, the income from which, because of the universities’ nonprofit status, is untaxed. He calls this an implicit subsidy, worth billions of dollars according to this study. “Such benefits account for $41,000 in hidden taxpayer subsidies per student annually, on average, at the top 10 wealthiest private universities. That’s more than three times the direct appropriations public universities in the same states as those schools get.”
I agree that the distinction between public and private universities is blurry, but not for the reasons Selingo gives. First, a tax break is not a “subsidy.” Second, there are many ways to measure the “private-ness” of an organization — not only budget, but also ownership and governance. In terms of governance, most US public universities look like exercises in crony capitalism. The University of Missouri’s Board of Curators consists of a handful of powerful local operatives, all political appointees (and all but one of them lawyers) and friends of the current and previous governors. At some levels there is faculty governance, as there is at nominally private universities. In terms of budget, we don’t need to invent hidden subsidies; we need only look at the explicit ones. If we include federal research funding, the top private universities get a much larger share of their total operating budgets from government sources than do the mid-tier public research universities. (I recently read that Johns Hopkins gets 90% of its research budget from federal agencies, mostly NIH and NSF.) And of course federal student aid is relevant too.
So, what does it mean to be a “private” university?
| Peter Klein |
Two of my favorite writers on the economic organization of science, Terence Kealey and Martin Ricketts, have produced a recent paper on science as a “contribution good.” A contribution good is like a club good in that it is non-rivalrous but at least partly excludable. Here, the excludability is soft and tacit, resulting not from fixed barriers like membership fees, but from the inherent cognitive difficulty in processing the information. To join the club, one must be able to understand the science. And, as with Mancur Olson’s famous model, consumption is tied to contribution — to make full use of the science, the user must first master the underlying material, which typically involves becoming a scientist, and hence contributing to the science itself.
Kealey and Ricketts provide a formal model of contribution goods and describe some conditions favoring their production. In their approach, the key issue isn’t free-riding, but critical mass (what they call the “visible college,” as distinguished from additional contributions from the “invisible college”).
The paper is in the July 2014 issue of Research Policy and appears to be open-access, at least for the moment.
Modelling science as a contribution good
Terence Kealey, Martin Ricketts
The non-rivalness of scientific knowledge has traditionally underpinned its status as a public good. In contrast we model science as a contribution game in which spillovers differentially benefit contributors over non-contributors. This turns the game of science from a prisoner’s dilemma into a game of ‘pure coordination’, and from a ‘public good’ into a ‘contribution good’. It redirects attention from the ‘free riding’ problem to the ‘critical mass’ problem. The ‘contribution good’ specification suggests several areas for further research in the new economics of science and provides a modified analytical framework for approaching public policy.
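The game-theoretic shift in the abstract — spillovers that reach only contributors turn free-riding into coordination — can be sketched with stylized two-player payoffs. The numbers below are mine, purely illustrative, and not taken from Kealey and Ricketts’ model:

```python
# Stylized 2-player payoffs (illustrative numbers, not from the paper).
# Each player either contributes to science ("C") or abstains ("A").
# Public-good version: spillovers reach everyone, so abstaining
# while the other contributes is the best outcome (free-riding).
pd_payoffs = {
    ("C", "C"): (3, 3), ("C", "A"): (0, 4),
    ("A", "C"): (4, 0), ("A", "A"): (1, 1),
}
# Contribution-good version: spillovers benefit only contributors,
# so an abstainer forgoes the benefit of the other's contribution.
cg_payoffs = {
    ("C", "C"): (3, 3), ("C", "A"): (0, 1),
    ("A", "C"): (1, 0), ("A", "A"): (1, 1),
}

def best_reply(payoffs, other_action):
    """Player 1's payoff-maximizing action given player 2's action."""
    return max(["C", "A"], key=lambda a: payoffs[(a, other_action)][0])

# Prisoner's dilemma: abstaining dominates regardless of the other player.
assert best_reply(pd_payoffs, "C") == "A" and best_reply(pd_payoffs, "A") == "A"
# Contribution good: the best reply is to match the other player,
# giving two equilibria (all contribute, none contribute) -- pure
# coordination, where the binding problem is critical mass.
assert best_reply(cg_payoffs, "C") == "C" and best_reply(cg_payoffs, "A") == "A"
```

Under these stylized payoffs the policy question changes accordingly: not how to suppress free-riding, but how to get enough contributors past the threshold at which contributing becomes everyone’s best reply.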
| Peter Klein |
I sometimes worry that the blog format is being displaced by Facebook, Twitter, and similar platforms, but Patrick Dunleavy from the LSE Impact of Social Science Blog remains a fan of academic blogs, particularly focused group blogs like, ahem, O&M. Patrick argues that blogging (supported by academic tweeting) is “quick to do in real time”; “communicates bottom-line results and ‘take aways’ in clear language, yet with due regard to methods issues and quality of evidence”; helps “create multi-disciplinary understanding and the joining-up of previously siloed knowledge”; “creates a vastly enlarged foundation for the development of ‘bridging’ academics, with real inter-disciplinary competences”; and “can also support in a novel and stimulating way the traditional role of a university as an agent of ‘local integration’ across multiple disciplines.”
Patrick also usefully distinguishes between solo blogs, collaborative or group blogs (like O&M), and multi-author blogs (professionally edited and produced, purely academic). O&M is partly academic, partly personal, but we have largely the same objectives as those outlined in Patrick’s post.
See also our recent discussion of academics and social media.
| Peter Klein |
Should academic work be classified primarily by discipline, or by problem? Within disciplines, do we start with theory versus application, micro versus macro, historical versus contemporary, or something else? Of course, there may be no single “optimal” classification scheme, but how we think about organizing research in our field says something about how we view the nature, contributions, and problems in the field.
There’s a very interesting discussion of this subject in the History of Economics Playground blog, focusing on the evolution of the Journal of Economic Literature codes used by economists (parts 1, 2, and 3). I particularly liked Beatrice Cherrier’s analysis of the AEA’s decision to drop “theory” as a separate category. The Machlup–Hutchison–Rothbard exchange helps establish the context.
[T]he seemingly administrative task of devising new categories threw AEA officials, in particular AER editor Bernard Haley and former AER interim editor Fritz Machlup, into heated debates over the nature and relationships of theoretical and empirical work.
Machlup campaigned for a separate “Abstract Economic Theory” top category. At the time of the revision, he was engaged in methodological work, striving to find a third way between Terence Hutchison’s “ultraempiricism,” and the “extreme a priorism” of his former mentor, Ludwig Von Mises (see Blaug, ch.4). He believed it was possible to differentiate between “fundamental (heuristic) hypotheses, which are not independently testable,” and “specific (factual) assumptions, which are supposed to correspond to observed facts or conditions.” The former was found in Keynes’s General Theory, and the latter in his Treatise on Money, Machlup explained. He thus proposed that empirical analysis be classified independently, under two categories: “Quantitative Research Techniques” and “Social Accounting, Measurements, and Numerical Hypotheses” (e.g., census data, expenditure surveys, input-output matrices, etc.). On the contrary, Haley wanted every category to cover the theoretical and empirical work related to a given subject matter. In his view, separating them was impossible, even meaningless: “Is there any theory that is not abstract? And, for that matter, is there any economic theory worth its salt that is not applied,” he teased Machlup. Also, he wanted to avoid the idea that “class 1 is theory, the rest are applied … How about monetary theory, international trade theory, business cycle theory?” He accordingly designed the top category to encompass price theory, but also statistical demand analysis, as well as “both theoretical and empirical studies of, e.g., the consumption function [and] economic growth models of the Harrod-Domar variety,” among other subjects. He eschewed any “theory” heading, which he replaced with titles such as “Price system; National Income Analysis.” His scheme eventually prevailed, but “theory” was reinstated in the title of the contentious category.