Posts filed under ‘Myths and Realities’
| Peter Klein |
Jeffrey Selingo raises an important point about the distinction between “public” and “private” universities, but I disagree with his analysis and recommendation. Selingo points out that the elite private universities have huge endowments and land holdings, the income from which, because of the universities’ nonprofit status, is untaxed. He calls this an implicit subsidy, worth billions of dollars according to this study. “Such benefits account for $41,000 in hidden taxpayer subsidies per student annually, on average, at the top 10 wealthiest private universities. That’s more than three times the direct appropriations public universities in the same states as those schools get.”
I agree that the distinction between public and private universities is blurry, but not for the reasons Selingo gives. First, a tax break is not a “subsidy.” Second, there are many ways to measure the “private-ness” of an organization — not only budget, but also ownership and governance. In terms of governance, most US public universities look like crony capitalists. The University of Missouri’s Board of Curators consists of a handful of powerful local operatives, all political appointees (and all but one lawyers) and friends of the current and previous governors. At some levels, there is faculty governance, as there is at nominally private universities. In terms of budget, we don’t need to invent hidden subsidies; we need only look at the explicit ones. If we include federal research funding, the top private universities get a much larger share of their total operating budgets from government sources than do the mid-tier public research universities. (I recently read that Johns Hopkins gets 90% of its research budget from federal agencies, mostly NIH and NSF.) And of course federal student aid is relevant too.
So, what does it mean to be a “private” university?
| Peter Klein |
Two of my favorite writers on the economic organization of science, Terence Kealey and Martin Ricketts, have produced a recent paper on science as a “contribution good.” A contribution good is like a club good in that it is non-rivalrous but at least partly excludable. Here, the excludability is soft and tacit, resulting not from fixed barriers like membership fees, but from the inherent cognitive difficulty in processing the information. To join the club, one must be able to understand the science. And, as with Mancur Olson’s famous model, consumption is tied to contribution — to make full use of the science, the user must first master the underlying material, which typically involves becoming a scientist, and hence contributing to the science itself.
Kealey and Ricketts provide a formal model of contribution goods and describe some conditions favoring their production. In their approach, the key issue isn’t free-riding, but critical mass (what they call the “visible college,” as distinguished from additional contributions from the “invisible college”).
The paper is in the July 2014 issue of Research Policy and appears to be open-access, at least for the moment.
Modelling science as a contribution good
Terence Kealey, Martin Ricketts
The non-rivalness of scientific knowledge has traditionally underpinned its status as a public good. In contrast we model science as a contribution game in which spillovers differentially benefit contributors over non-contributors. This turns the game of science from a prisoner’s dilemma into a game of ‘pure coordination’, and from a ‘public good’ into a ‘contribution good’. It redirects attention from the ‘free riding’ problem to the ‘critical mass’ problem. The ‘contribution good’ specification suggests several areas for further research in the new economics of science and provides a modified analytical framework for approaching public policy.
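The shift the abstract describes — from prisoner’s dilemma to pure coordination — can be made concrete with a toy payoff table. This is a minimal sketch, not Kealey and Ricketts’ actual model; the payoff numbers are hypothetical, chosen only to illustrate how differential spillovers turn free-riding into a coordination (critical mass) problem.

```python
# payoff[my_action][other_action] -> my payoff; action 1 = contribute, 0 = don't.
# Hypothetical numbers for illustration only.

# Classic public-good case (prisoner's dilemma): spillovers reach everyone
# equally, so free-riding strictly dominates contributing.
public_good = {1: {1: 3, 0: 1}, 0: {1: 4, 0: 2}}

# Contribution-good case: spillovers mainly benefit those who have mastered
# the science, so contributing pays only when others also contribute.
contribution_good = {1: {1: 4, 0: 0}, 0: {1: 2, 0: 2}}

def pure_nash(payoff):
    """Pure-strategy Nash equilibria of a symmetric 2x2 game."""
    eqs = []
    for a in (0, 1):
        for b in (0, 1):
            a_is_best = payoff[a][b] >= payoff[1 - a][b]
            b_is_best = payoff[b][a] >= payoff[1 - b][a]
            if a_is_best and b_is_best:
                eqs.append((a, b))
    return eqs

print(pure_nash(public_good))        # [(0, 0)] -- universal free-riding
print(pure_nash(contribution_good))  # [(0, 0), (1, 1)] -- coordination game
```

In the second game both “nobody contributes” and “everybody contributes” are equilibria, so the policy question becomes how to reach the good equilibrium — the critical-mass problem — rather than how to suppress free-riding.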
| Peter Klein |
The old Keynesian idea that war is good for the economy is not taken seriously by anyone outside the New York Times op-ed page. But much of the discussion still focuses on macroeconomic effects (on aggregate demand, labor-force mobilization, etc.). The more important effects, as we’ve often discussed on these pages, are microeconomic — namely, resources are reallocated from higher-valued civilian and commercial uses to lower-valued military and governmental uses. There are huge distortions to capital, labor, and product markets, and even technological innovation — often seen as a benefit of wars, hot and cold — is hampered.
A new NBER paper by Zorina Khan looks carefully at the microeconomic effects of the US Civil War and finds substantial resource misallocation. Perhaps the most significant finding relates to entrepreneurial opportunity — individuals who would otherwise create significant economic value through establishing and running firms, developing new products and services, and otherwise improving the quality of life are instead motivated to pursue government military contracts (a point emphasized in the materials linked above). Here is the abstract (I don’t see an ungated version, but please share in the comments if you find one):
The Impact of War on Resource Allocation: ‘Creative Destruction’ and the American Civil War
B. Zorina Khan
NBER Working Paper No. 20944, February 2015
What is the effect of wars on industrialization, technology and commercial activity? In economic terms, such events as wars comprise a large exogenous shock to labor and capital markets, aggregate demand, the distribution of expenditures, and the rate and direction of technological innovation. In addition, if private individuals are extremely responsive to changes in incentives, wars can effect substantial changes in the allocation of resources, even within a decentralized structure with little federal control and a low rate of labor participation in the military. This paper examines war-time resource reallocation in terms of occupation, geographical mobility, and the commercialization of inventions during the American Civil War. The empirical evidence shows the war resulted in a significant temporary misallocation of resources, by reducing geographical mobility, and by creating incentives for individuals with high opportunity cost to switch into the market for military technologies, while decreasing financial returns to inventors. However, the end of armed conflict led to a rapid period of catching up, suggesting that the war did not lead to a permanent misallocation of inputs, and did not long inhibit the capacity for future technological progress.
| Peter Klein |
We’ve addressed the widely held, but largely mistaken, view of creative artists and entrepreneurs as auteurs, isolated and misunderstood, fighting the establishment and bucking the conventional wisdom. In the more typical case, the creative genius is part of a collaborative team and takes full advantage of the division of labor. After all, it is our ability to cooperate through voluntary exchange, in line with comparative advantage, that distinguishes us from the animals.
Christian Caryl’s New York Review of Books review of The Imitation Game makes a similar point about Alan Turing. The film’s portrayal of Turing (played by Benedict Cumberbatch) “conforms to the familiar stereotype of the otherworldly nerd: he’s the kind of guy who doesn’t even understand an invitation to lunch. This places him at odds not only with the other codebreakers in his unit, but also, equally predictably, positions him as a natural rebel.” In fact, Turing was funny and could be quite charming, and got along well with his colleagues and supervisors.
As Caryl points out, these distortions
point to a much broader and deeply regrettable pattern. [Director] Tyldum and [writer] Moore are determined to suggest maximum dramatic tension between their tragic outsider and a blinkered society. (“You will never understand the importance of what I am creating here,” [Turing] wails when Denniston’s minions try to destroy his machine.) But this not only fatally miscasts Turing as a character—it also completely destroys any coherent telling of what he and his colleagues were trying to do.
In reality, Turing was an entirely willing participant in a collective enterprise that featured a host of other outstanding intellects who happily coexisted to extraordinary effect. The actual Denniston, for example, was an experienced cryptanalyst and was among those who, in 1939, debriefed the three Polish experts who had already spent years figuring out how to attack the Enigma, the state-of-the-art cipher machine the German military used for virtually all of their communications. It was their work that provided the template for the machines Turing would later create to revolutionize the British signals intelligence effort. So Turing and his colleagues were encouraged in their work by a military leadership that actually had a pretty sound understanding of cryptological principles and operational security. . . .
The movie version, in short, represents a bizarre departure from the historical record. In fact, Bletchley Park—and not only Turing’s legendary Hut 8—was doing productive work from the very beginning of the war. Within a few years its motley assortment of codebreakers, linguists, stenographers, and communications experts were operating on a near-industrial scale. By the end of the war there were some 9,000 people working on the project, processing thousands of intercepts per day.
The rebel outsider makes for good storytelling, but in most human endeavors, including science, art, and entrepreneurship, it is well-organized groups, not auteurs, that make the biggest breakthroughs.
| Peter Klein |
Via Michael Strong, a thoughtful review and critique of Western-style economic development programs and their focus on one-size-fits-all, “big idea” approaches. Writing in the New Republic, Michael Hobbes takes on not only Bono and Jeff Sachs and USAID and the usual suspects, but even the randomized-controlled-trials crowd, or “randomistas,” like Duflo and Banerjee. Instead of searching for the big idea, thinking that “once we identify the correct one, we can simply unfurl it on the entire developing world like a picnic blanket,” we should support local, incremental, experimental attempts to improve social and economic well-being — a Hayekian bottom-up approach.
We all understand that every ecosystem, each forest floor or coral reef, is the result of millions of interactions between its constituent parts, a balance of all the aggregated adaptations of plants and animals to their climate and each other. Adding a non-native species, or removing one that has always been there, changes these relationships in ways that are too intertwined and complicated to predict. . . .
[I]nternational development is just such an invasive species. Why Dertu doesn’t have a vaccination clinic, why Kenyan schoolkids can’t read, it’s a combination of culture, politics, history, laws, infrastructure, individuals—all of a society’s component parts, their harmony and their discord, working as one organism. Introducing something foreign into that system—millions in donor cash, dozens of trained personnel and equipment, U.N. Land Rovers—causes it to adapt in ways you can’t predict.
| Peter Klein |
In the opportunity-discovery perspective, profits result from the discovery and exploitation of disequilibrium “gaps” in the market. To earn profits an entrepreneur needs superior foresight or perception, but not risk capital or other productive assets. Capital is freely available from capitalists, who supply funds as requested by entrepreneurs but otherwise play a relatively minor, passive role. Residual decision and control rights are second-order phenomena, because the essence of entrepreneurship is alertness, not investing resources under uncertainty.
By contrast, the judgment-based view places capital, ownership, and uncertainty front and center. The essence of entrepreneurship is not ideation or imagination or creativity, but the constant combining and recombining of productive assets under uncertainty, in pursuit of profits. The entrepreneur is thus also a capitalist, and the capitalist is an entrepreneur. We can even imagine the alert individual — the entrepreneur of discovery theory — as a sort of consultant, bringing ideas to the entrepreneur-capitalist, who decides whether or not to act.
A scene from Fargo nicely illustrates the distinction. Protagonist Jerry Lundegaard thinks he’s found (“discovered”) a sure-fire profit opportunity; he just needs capital, which he hopes to get from his wealthy father-in-law Wade. Jerry sees himself as running the show and earning the profits. Wade, however, has other ideas — he thinks he’s making the investment and, if it pays off, pocketing the profits, paying Jerry a finder’s fee for bringing him the idea.
So, I ask you, who is the entrepreneur, Jerry or Wade?
Peter invited me to reply to [Warren Miller’s] comment, so I’ll try to offer a defense of formal economic modeling.
In answering Peter’s invitation, I’m at a bit of a disadvantage because I am definitely NOT an IO economist (perhaps because I actually CAN relax). Rather, I’m a strategy guy — far more interested in studying the private welfare of firms than the public welfare of economies (plus, it pays better and is more fun). So, I am in a much better position to comment on the benefits that the game-theoretic toolbox is starting to bring to the strategy field than on the benefits that it has brought to the economics discipline over the last four decades (i.e., since Akerlof’s 1970 Lemons paper really jump-started the trend).
Peter writes, “game theory was supposed to add transparency and ‘rigor’ to the analysis.” I have heard this argument many times (e.g., Adner et al, 2009 AMR), and I think it is a red herring, or at least a sideshow. Yes, formal modeling does add transparency and rigor, but that’s not its main benefit. If the only benefit of formal modeling were improved transparency and rigor, then I suspect that it would never have achieved much influence at all. Formal modeling, like any research tool or method, is best judged according to the degree of insight — not the degree of precision — that it brings to the field.
I can’t think of any empirical researcher who has gained fame merely by finding techniques to reduce the amount of noise in the estimate of a regression parameter that has already been the subject of other previous studies. Only if that improved estimation technique generates results that are dramatically different from previous results (or from expected results) would the improved precision of the estimate matter much — i.e., only if the improved precision led to a valuable new insight. In that case, it would really be the insight that mattered, not the precision. The impact of empirical work is proportionate to its degree of new insight, not to its degree of precision. The excruciatingly unsophisticated empirical methods in Ned Bowman’s highly influential “Risk-Return Paradox” and “Risk-Seeking by Troubled Firms” papers provide a great example of this point.
The same general principle is true of theoretical work as well. I can’t think of any formal modeler who has gained fame merely by sharpening the precision of an existing verbal theory. Such minor contributions, if they get published at all, are barely noticed and quickly forgotten. A formal model only has real impact when it generates some valuable new insight. As with empirics, the insight is what really matters, not the precision.