Posts filed under ‘Myths and Realities’

Is Terrorism a Disease?

| Peter Klein |


US Defense Secretary Ash Carter is making the rounds with a speech about ISIL being a “cancer” that must be cured with aggressive treatment. “[L]ike all cancers, you can’t cure the disease just by cutting out the tumor. You have to eliminate it wherever it has spread, and stop it from coming back. . . . . [We have] three military objectives: One, destroy the ISIL parent tumor in Iraq and Syria by collapsing its two power centers in Mosul, Iraq and Raqqah, Syria. . . . Two, combat the emerging metastases of the ISIL tumor worldwide wherever it appears. . . .” Terrorism, in other words, is a cancer metastasizing from the underlying tumor of Islamic fundamentalism.

This language may rally the troops, but it is particularly unhelpful for understanding the nature, antecedents, and consequences of terrorism, or for crafting a remedy. As Robert Pape, Alan Krueger, and other social scientists have shown, terrorism is a tactic, a form of purposeful human action, and should be understood as such, not as a mindless, undirected biological phenomenon.

Edith Penrose warned more than sixty years ago about the limits of biological analogies in understanding social issues. “The chief danger of carrying sweeping analogies very far is that the problems they are designed to illuminate become framed in such a special way that significant matters are frequently inadvertently obscured. Biological analogies contribute little either to the theory of price or to the theory of growth and development of firms and in general tend to confuse the nature of the important issues.” I have written before about the problem of treating gun violence as a disease, rather than a legal, social, and criminological issue. To understand why people shoot guns, on purpose or accidentally, we need to focus on their preferences, beliefs, and actions. (This does not imply some kind of straw-man “rationality,” by the way.) Likewise, if we want to reduce terrorist acts, we should treat terrorism as a military tactic, designed to achieve specific ends, rather than a disease or epidemic whose “growth” we have to stop.

Update (27 Jan): From David Levine I learn of another example of a medical researcher trying to address a social science problem without apparently understanding the concept of selection bias (he samples on the dependent variable).
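Sampling on the dependent variable is easy to demonstrate with a simulation (a minimal sketch with invented numbers, not the researcher's actual data): if you keep only the high-outcome cases, the estimated relationship is biased toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
y = 2.0 * x + rng.standard_normal(n)  # true slope = 2

def ols_slope(x, y):
    """Slope of a simple OLS regression of y on x."""
    return np.polyfit(x, y, 1)[0]

slope_full = ols_slope(x, y)

# "Sampling on the dependent variable": keep only high-outcome cases
keep = y > 1.0
slope_selected = ols_slope(x[keep], y[keep])

print(f"full sample:     {slope_full:.2f}")      # close to the true slope of 2
print(f"selected sample: {slope_selected:.2f}")  # attenuated toward zero
```

The selected-sample slope is well below the true value, even though nothing about the underlying relationship has changed — only the sampling rule.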

26 January 2016 at 12:06 pm Leave a comment

SMACK-down of Evidence-Based Medicine

| Peter Klein |

As a skeptic of the evidence-based management movement (championed by Pfeffer, Sutton, et al.) I was amused by a recent spoof article in the Journal of Evaluation in Clinical Practice, “Maternal Kisses Are Not Effective in Alleviating Minor Childhood Injuries (Boo-Boos): A Randomized, Controlled, and Blinded Study,” authored by the Study of Maternal and Child Kissing (SMACK) Working Group. Maternal kisses were associated with a positive and statistically significant increase in the Toddler Discomfort Index (TDI):

Maternal kissing of boo-boos confers no benefit on children with minor traumatic injuries compared to both no intervention and sham kissing. In fact, children in the maternal kissing group were significantly more distressed at 5 minutes than were children in the no intervention group. The practice of maternal kissing of boo-boos is not supported by the evidence and we recommend a moratorium on the practice.

The actual author, Mark Tonelli, is a prominent critic of evidence-based medicine, a movement described by the journal’s editor as “collapsing” and, in a recent British Medical Journal editorial, as a “movement in crisis.” Most of the criticisms of evidence-based medicine will sound familiar to Austrian economists: overreliance on statistically significant, but clinically irrelevant, findings in large samples; failure to appreciate context and interpretation; lack of attention to underlying mechanisms rather than unexplained correlations; and a general disdain for tacit knowledge and understanding.
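The first of those criticisms — significance without clinical relevance — is mechanical: with a large enough sample, even a trivially small difference produces an enormous z-statistic. A toy simulation (the effect size and sample size here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000  # a very large "trial"
# Two groups whose true means differ by 0.02 standard deviations --
# far below any plausible threshold of clinical relevance
control = rng.normal(0.00, 1.0, n)
treated = rng.normal(0.02, 1.0, n)

diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
z = diff / se  # two-sample z-statistic

print(f"difference:  {diff:.3f} sd")  # tiny effect
print(f"z-statistic: {z:.1f}")        # "highly significant" nonetheless
```

A p-value near zero attached to an effect of two hundredths of a standard deviation is exactly the pattern the critics have in mind.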

My guess is that evidence-based management, which is modeled after evidence-based medicine, is in for a similarly rocky ride. Teppo had some interesting orgtheory posts on this a few years ago (e.g., here and here). Evidence-based management has been criticized, as you might expect, by critical theorists and other postmodernists who don’t like the concept of “evidence” per se, but the real problems are more mundane: what counts as evidence, and what conclusions can legitimately be drawn from this evidence, are far from obvious in most cases. Particularly in entrepreneurial settings, as we’ve written often on these pages, intuition, Verstehen, or judgment may be more reliable guides than quantitative, analytical reasoning.

Update: Thanks to Ivan Zupic for pointing me to a review and critique of EBM in the current issue of AMLE. 

2 January 2016 at 5:06 pm Leave a comment

Incentives, Ideology, and Climate Change

| Peter Klein |

We’ve written before on the institutions of scientific research, which, like other human activities, involves expenditures of scarce resources, has benefits and costs that can be evaluated on the margin, and is affected by the preferences, beliefs, and incentives of scientific personnel (1, 2, 3). This sounds trite, but the view persists, especially among mainstream journalists, that science is fundamentally different, that scientists are disinterested truth-seekers immune from institutional and organizational constraints. This is the default assumption about scientists working within the general consensus of their discipline. By contrast, critics of the consensus position, whether inside or outside the core discipline, are presumed to be motivated by ideology or private interest.

You don’t need to be Thomas Kuhn, Imre Lakatos, or any modern historian or philosopher of science to find this asymmetry puzzling. But it is the usual assumption in particular areas, most notably climate science. A good example is this recent New York Times piece by Justin Gillis, “Short Answers to Hard Questions About Climate Change.” In response to the question, “Why do people question climate change?” Gillis gives us ideology and private interests.

Most of the attacks on climate science are coming from libertarians and other political conservatives who do not like the policies that have been proposed to fight global warming. Instead of negotiating over those policies and trying to make them more subject to free-market principles, they have taken the approach of blocking them by trying to undermine the science.

This ideological position has been propped up by money from fossil-fuel interests, which have paid to create organizations, fund conferences and the like. The scientific arguments made by these groups usually involve cherry-picking data, such as focusing on short-term blips in the temperature record or in sea ice, while ignoring the long-term trends.

Ignore the saucy rhetoric (critics of the consensus view don’t just question the theory or evidence, they “attack climate science”), and note that for Gillis, opposition to the mainstream view is a puzzle to be explained, and the most likely candidates are ideology and special interests. Honest disagreement is ruled out (though earlier in the piece he recognizes the vast uncertainties involved in climate research). Why so many scientists, private and public organizations, firms, etc. support the mainstream position is not, in Gillis’s opinion, worth exploring. It’s Because Science. The fact that billions of dollars are flowing into climate research — a flow that would slow to a trickle if policymakers believed that man-made carbon emissions are not contributing to global warming — apparently has no effect on scientific practice. The fact that many climate-change proponents are, in general, ideologically predisposed to policies that impose greater government control over markets, that reduce industrial activity, that favor particular technologies and products over others is, again, irrelevant.

Of course, I’m not claiming that climate scientists in or outside the mainstream consensus are fanatics or money-grubbers. I’m saying you can’t have it both ways. If ideology and private interests are relevant on one side of a debate, they’re relevant on the other side as well. Perhaps the ideology and private interests of New York Times writers blind them to this simple point.

2 December 2015 at 5:45 pm 1 comment

The Internet as Collective Invention

| Dick Langlois |

Further to Peter’s post on government science funding: I just received, hot off the (physical) press, a copy of Shane Greenstein’s new book How the Internet Became Commercial (Princeton, 2015). Among the myths that Greenstein — now apparently at Harvard Business School — debunks is the idea that the internet was in any sense a product of government industrial policy. Although government had many varied and uncoordinated influences on the development of the technology, the emergence of the Internet was ultimately an example of what the economic historian Robert Allen called collective invention. It was very much a spontaneous process. And it was not fundamentally different from other episodes of technological change in history.

27 October 2015 at 1:58 pm Leave a comment

Science, Technology, and Public Funding

| Peter Klein |

This piece by Matt Ridley builds on Terence Kealey’s critique of government science funding, and also echoes Nathan Rosenberg’s critique of the linear model of science and technology. We have pointed out similarly that arguments for public science funding are usually not very scientific.

When you examine the history of innovation, you find, again and again, that scientific breakthroughs are the effect, not the cause, of technological change. It is no accident that astronomy blossomed in the wake of the age of exploration. The steam engine owed almost nothing to the science of thermodynamics, but the science of thermodynamics owed almost everything to the steam engine. The discovery of the structure of DNA depended heavily on X-ray crystallography of biological molecules, a technique developed in the wool industry to try to improve textiles.

Technological advances are driven by practical men who tinkered until they had better machines; abstract scientific rumination is the last thing they do. As Adam Smith, looking around the factories of 18th-century Scotland, reported in “The Wealth of Nations”: “A great part of the machines made use of in manufactures…were originally the inventions of common workmen,” and many improvements had been made “by the ingenuity of the makers of the machines.”

It follows that there is less need for government to fund science: Industry will do this itself. Having made innovations, it will then pay for research into the principles behind them. Having invented the steam engine, it will pay for thermodynamics.

I have argued repeatedly against the “laundry list” rationale for public funding, the listing of technologies and products that came out of government programs, as if that were justification for these programs. Ridley agrees:

Given that government has funded science munificently from its huge tax take, it would be odd if it had not found out something. This tells us nothing about what would have been discovered by alternative funding arrangements.

And we can never know what discoveries were not made because government funding crowded out philanthropic and commercial funding, which might have had different priorities.

24 October 2015 at 3:28 pm Leave a comment

Deaton’s Critique of Randomized Controlled Trials

| Peter Klein |

Because we’ve been somewhat skeptical of randomized-controlled trials — not the technique itself, but the way it is over-hyped by its proponents — you may enjoy Angus Deaton’s critique of RCTs in development economics. I learned of Deaton’s arguments from this excellent piece by Chris Blattman in Foreign Policy. Here is the key paper, Deaton’s 2008 Keynes Lecture at the British Academy.

Instruments of Development: Randomization in the Tropics, and the Search for the Elusive Keys to Economic Development

Angus Deaton

There is currently much debate about the effectiveness of foreign aid and about what kind of projects can engender economic development. There is skepticism about the ability of econometric analysis to resolve these issues, or of development agencies to learn from their own experience. In response, there is movement in development economics towards the use of randomized controlled trials (RCTs) to accumulate credible knowledge of what works, without over-reliance on questionable theory or statistical methods. When RCTs are not possible, this movement advocates quasi-randomization through instrumental variable (IV) techniques or natural experiments. I argue that many of these applications are unlikely to recover quantities that are useful for policy or understanding: two key issues are the misunderstanding of exogeneity, and the handling of heterogeneity. I illustrate from the literature on aid and growth. Actual randomization faces similar problems as quasi-randomization, notwithstanding rhetoric to the contrary. I argue that experiments have no special ability to produce more credible knowledge than other methods, and that actual experiments are frequently subject to practical problems that undermine any claims to statistical or epistemic superiority. I illustrate using prominent experiments in development. As with IV methods, RCT-based evaluation of projects is unlikely to lead to scientific progress in the understanding of economic development. I welcome recent trends in development experimentation away from the evaluation of projects and towards the evaluation of theoretical mechanisms.
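Deaton's point about heterogeneity can be sketched in a few lines of simulation (the types, shares, and effect sizes below are invented for illustration): when treatment effects vary across people, an instrument recovers the average effect among "compliers" only, which need not equal the population-average effect a policymaker cares about.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Two latent types with different treatment effects
complier = rng.random(n) < 0.3            # 30% respond to the instrument
effect = np.where(complier, 2.0, 0.5)     # heterogeneous treatment effects

z = rng.integers(0, 2, n)                 # randomized instrument (encouragement)
d = (z == 1) & complier                   # only compliers take up the treatment
y = effect * d + rng.standard_normal(n)   # outcome

# Wald/IV estimate: reduced form divided by first stage
iv = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
ate = effect.mean()                       # what the policymaker wants

print(f"IV estimate (compliers' effect): {iv:.2f}")   # near 2.0
print(f"population average effect:       {ate:.2f}")  # near 0.95
```

The instrument here is perfectly randomized, yet the IV estimate is more than twice the population-average effect — not because anything went wrong, but because the estimand is the compliers' effect, not the average effect.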

Blattman says Deaton has a new paper that presents a more nuanced critique, but it is apparently not online. I’ll share more when I have it.

12 October 2015 at 9:40 pm 2 comments

I Agree with Larry Summers

| Peter Klein |

Justin Fox reports on a recent high-powered behavioral economics conference featuring Raj Chetty, David Laibson, Antoinette Schoar, Maya Shankar, and other important contributors to this growing research stream. But he refers also to the “Summers critique,” the idea that key findings in behavioral economics research sound like recycled wisdom from business practitioners.

Summers [in 2012] told a story about a college acquaintance who as a cruel prank signed up another classmate for 60 different subscriptions of the Book-of-the-Month-Club ilk. The way these clubs worked is that once you signed up, you got a book in the mail every month and were charged for it unless you (a) sent the book back within a certain period of time or (b) went through the hassle of extricating yourself from the club altogether. Customers had to opt out in order to not keep buying books, so they bought more books than they otherwise would have. Book marketers, Summers said, had figured out the power of defaults long before economists had.

More generally, Fox asks, “Have behavioral economists really discovered anything new, or have they simply replaced some wrong-headed notions of post-World War II economics with insights that people in business have understood for decades and maybe even centuries?”

I took exactly the Summers line in a 2010 post, observing that behavioral economics “often seems to restate common, obvious, well-known ideas as if they are really novel insights (e.g., that preferences aren’t stable and predictable over time). More novel propositions are questionable at best.” I used a Dan Ariely column on compensation policy as an example:

He claims as a unique insight of behavioral economics that when people are evaluated according to quantitative measures of performance, they tend to focus on the measures, not the underlying behavior being measured. Well, duh. This is pretty much a staple of introductory lectures on agency theory (and features prominently in Steve Kerr’s classic 1975 article). Ariely goes on to suggest that CEOs should be rewarded not on the basis of a single measure of performance, but multiple measures. Double-duh. Holmström (1979) called this the “informativeness principle” and it’s in all the standard textbooks on contract design and compensation structure (e.g., Milgrom and Roberts, Brickley et al.). (Of course, agency theory also recognizes that gathering information is costly, and that additional metrics are valuable, on the margin, only if the benefits exceed the costs, a point unmentioned by Ariely.)
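The informativeness principle has simple statistical content, which a back-of-the-envelope sketch makes clear (the noise variances below are invented): if a second signal carries any independent information about effort, weighting the two signals by their precisions yields a lower-variance performance measure than either signal alone.

```python
# Two noisy performance signals: signal_i = effort + noise with variance v_i.
# The variance-minimizing (GLS) weights are proportional to the precisions
# 1/v_i, and the combined variance is the reciprocal of the summed precisions.
v1, v2 = 1.0, 4.0  # illustrative noise variances; signal 2 is much noisier

w1 = (1 / v1) / (1 / v1 + 1 / v2)
w2 = 1 - w1
combined_var = 1 / (1 / v1 + 1 / v2)

print(f"weights: {w1:.2f}, {w2:.2f}")            # most weight on the cleaner signal
print(f"combined variance: {combined_var:.2f}")  # below the variance of either signal
```

Even a very noisy second measure (here four times noisier) reduces the variance of the combined estimate, which is why multi-measure compensation is a textbook result rather than a behavioral novelty.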

Maybe Larry and I should hang out.

22 September 2015 at 12:03 pm 7 comments

More on the Linear Model of Science and Technology

| Peter Klein |

As Joel Mokyr notes, one of Nathan Rosenberg’s important contributions was to debunk the “linear model” in which basic science begets applied science, which begets useful technology.

Technology in his view is not the mechanical “application of science” to production; it is a field of knowledge by itself, quite different in its incentives, its modes of transmission, and its culture. It is affected by science, but in turn provides “pure research” with its instruments and much of its agenda. In many cases, [Rosenberg] noted, scientists were confronted by the fact that things they had previously declared to be impossible were actually carried out by engineers and mechanics and had to admit somewhat sheepishly that they were possible after all.

The same issue is raised in Margaret Jacob’s book The First Knowledge Economy: Human Capital and the European Economy, 1750-1850 (Cambridge University Press, 2014), which “argues persuasively for the critical importance of knowledge in Europe’s economic transformation during the period from 1750 to 1850, first in Britain and then in selected parts of northern and western Europe.” In other words, as noted by Erik Hornung:

She especially focusses on the marriage between theoretical sciences and applied mechanical knowledge which helped creating many technological innovations during the Industrial Revolution. She, thus, aims at rectifying the prevalent hypothesis that technological progress resulted from tinkering of skilled but science-ignorant engineers. An impressive set of new archival sources supports her argument that English engineers were, indeed, well aware of and heavily influenced by recent advances in natural sciences.

14 September 2015 at 5:03 pm Leave a comment

Entrepreneurial Opportunity: The Oxygen or Phlogiston of Entrepreneurship Research?

| Peter Klein |

Don’t miss this PDW at the upcoming Academy of Management conference in Vancouver. From organizer Per Davidsson:

I just wanted to bring your attention to a PDW I am organizing for the upcoming AoM meeting, where we will engage in frank and in-depth discussions about the problems and merits of the popular notion of “entrepreneurial opportunity”. We have been fortunate to gather a collection of very strong scholars and independent thinkers as presenters and discussants in this PDW: Richard J. Arend, Dimo Dimov, Denis Grégoire, Peter G. Klein, Moren Lévesque, Saras Sarasvathy, and Matthew Wood. This illustrious group of colleagues will make sure the deliberations do not focus on a “beauty contest” between “discovery” and “creation” views but instead reach beyond the limitations of both.

I encourage you to join us for this session, and to make absolutely sure I won’t send you to the wrong place at the right time I have copied the details straight from the online program:

Title: Entrepreneurial Opportunity: The Oxygen or Phlogiston of Entrepreneurship Research? (session #365)

Date & Time: Saturday, August 08, 2015, 12:30:00 PM – 3:00:00 PM

Hotel & Room: Vancouver Convention Centre, Room 012

Further elaboration follows below. Heartily welcome!


7 July 2015 at 10:51 am 3 comments

Are “Private” Universities Really Private?

| Peter Klein |

Jeffrey Selingo raises an important point about the distinction between “public” and “private” universities, but I disagree with his analysis and recommendation. Selingo points out that the elite private universities have huge endowments and land holdings, the income from which, because of the universities’ nonprofit status, is untaxed. He calls this an implicit subsidy, worth billions of dollars according to this study. “Such benefits account for $41,000 in hidden taxpayer subsidies per student annually, on average, at the top 10 wealthiest private universities. That’s more than three times the direct appropriations public universities in the same states as those schools get.”

I agree that the distinction between public and private universities is blurry, but not for the reasons Selingo gives. First, a tax break is not a “subsidy.” Second, there are many ways to measure the “private-ness” of an organization — not only budget, but also ownership and governance. In terms of governance, most US public universities look like exercises in crony capitalism. The University of Missouri’s Board of Curators consists of a handful of powerful local operatives, all political appointees (and all but one of them lawyers) and friends of the current and previous governors. At some levels, there is faculty governance, as there is at nominally private universities. In terms of budget, we don’t need to invent hidden subsidies; we need only look at the explicit ones. If we include federal research funding, the top private universities get a much larger share of their total operating budgets from government sources than do the mid-tier public research universities. (I recently read that Johns Hopkins gets 90% of its research budget from federal agencies, mostly NIH and NSF.) And of course federal student aid is relevant too.

So, what does it mean to be a “private” university?

10 April 2015 at 9:01 am 6 comments

Kealey and Ricketts on Science as a Contribution Good

| Peter Klein |

Two of my favorite writers on the economic organization of science, Terence Kealey and Martin Ricketts, have produced a recent paper on science as a “contribution good.” A contribution good is like a club good in that it is non-rivalrous but at least partly excludable. Here, the excludability is soft and tacit, resulting not from fixed barriers like membership fees, but from the inherent cognitive difficulty in processing the information. To join the club, one must be able to understand the science. And, as with Mancur Olson’s famous model, consumption is tied to contribution — to make full use of the science, the user must first master the underlying material, which typically involves becoming a scientist, and hence contributing to the science itself.

Kealey and Ricketts provide a formal model of contribution goods and describe some conditions favoring their production. In their approach, the key issue isn’t free-riding, but critical mass (what they call the “visible college,” as distinguished from additional contributions from the “invisible college”).

The paper is in the July 2014 issue of Research Policy and appears to be open-access, at least for the moment.

Modelling science as a contribution good
Terence Kealey, Martin Ricketts

The non-rivalness of scientific knowledge has traditionally underpinned its status as a public good. In contrast we model science as a contribution game in which spillovers differentially benefit contributors over non-contributors. This turns the game of science from a prisoner’s dilemma into a game of ‘pure coordination’, and from a ‘public good’ into a ‘contribution good’. It redirects attention from the ‘free riding’ problem to the ‘critical mass’ problem. The ‘contribution good’ specification suggests several areas for further research in the new economics of science and provides a modified analytical framework for approaching public policy.
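The game-theoretic shift the abstract describes can be made concrete with toy payoffs (the numbers are invented, not Kealey and Ricketts's): when spillovers reach everyone, contributing is dominated and we get a prisoner's dilemma; when spillovers reach contributors only, there are two pure equilibria — everyone contributes or no one does — so the central problem becomes critical mass rather than free-riding.

```python
# Pure-strategy Nash equilibria of symmetric 2x2 games.
# Strategies: 0 = contribute, 1 = don't. payoff[a][b] = row player's payoff
# when the row player picks a and the column player picks b.

def pure_nash(payoff):
    eq = []
    for a in (0, 1):
        for b in (0, 1):
            # By symmetry, the column player's payoff at (a, b) is payoff[b][a]
            row_ok = payoff[a][b] >= payoff[1 - a][b]
            col_ok = payoff[b][a] >= payoff[1 - b][a]
            if row_ok and col_ok:
                eq.append((a, b))
    return eq

# Public good: each contribution costs 4 and benefits BOTH players by 3
public_good = [[2, -1],   # contribute, vs. other (contributes, doesn't)
               [3,  0]]   # don't,      vs. other (contributes, doesn't)

# Contribution good: own work is worth 2 to its producer at cost 3, and the
# spillover (worth 4) can be absorbed only by someone who also contributes
contribution_good = [[3, -1],
                     [0,  0]]

print(pure_nash(public_good))        # defection only: the prisoner's dilemma
print(pure_nash(contribution_good))  # two equilibria: pure coordination
```

In the first game universal defection is the unique equilibrium; in the second, "all contribute" and "none contribute" are both equilibria, so getting science going is a matter of assembling a critical mass — the "visible college" in Kealey and Ricketts's terms.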

9 April 2015 at 9:23 am 2 comments

Microeconomics of War

| Peter Klein |

The old Keynesian idea that war is good for the economy is not taken seriously by anyone outside the New York Times op-ed page. But much of the discussion still focuses on macroeconomic effects (on aggregate demand, labor-force mobilization, etc.). The more important effects, as we’ve often discussed on these pages, are microeconomic — namely, resources are reallocated from higher-valued civilian and commercial uses to lower-valued military and governmental uses. There are huge distortions to capital, labor, and product markets, and even technological innovation — often seen as a benefit of wars, hot and cold — is hampered.

A new NBER paper by Zorina Khan looks carefully at the microeconomic effects of the US Civil War and finds substantial resource misallocation. Perhaps the most significant finding relates to entrepreneurial opportunity — individuals who would otherwise create significant economic value through establishing and running firms, developing new products and services, and otherwise improving the quality of life are instead motivated to pursue government military contracts (a point emphasized in the materials linked above). Here is the abstract (I don’t see an ungated version, but please share in the comments if you find one):

The Impact of War on Resource Allocation: ‘Creative Destruction’ and the American Civil War
B. Zorina Khan
NBER Working Paper No. 20944, February 2015

What is the effect of wars on industrialization, technology and commercial activity? In economic terms, such events as wars comprise a large exogenous shock to labor and capital markets, aggregate demand, the distribution of expenditures, and the rate and direction of technological innovation. In addition, if private individuals are extremely responsive to changes in incentives, wars can effect substantial changes in the allocation of resources, even within a decentralized structure with little federal control and a low rate of labor participation in the military. This paper examines war-time resource reallocation in terms of occupation, geographical mobility, and the commercialization of inventions during the American Civil War. The empirical evidence shows the war resulted in a significant temporary misallocation of resources, by reducing geographical mobility, and by creating incentives for individuals with high opportunity cost to switch into the market for military technologies, while decreasing financial returns to inventors. However, the end of armed conflict led to a rapid period of catching up, suggesting that the war did not lead to a permanent misallocation of inputs, and did not long inhibit the capacity for future technological progress.

10 February 2015 at 11:16 am 1 comment

Team Science and the Creative Genius

| Peter Klein |

We’ve addressed the widely held, but largely mistaken, view of creative artists and entrepreneurs as auteurs, isolated and misunderstood, fighting the establishment and bucking the conventional wisdom. In the more typical case, the creative genius is part of a collaborative team and takes full advantage of the division of labor. After all, it is our ability to cooperate through voluntary exchange, in line with comparative advantage, that distinguishes us from the animals.

Christian Caryl’s New York Review of Books review of The Imitation Game makes a similar point about Alan Turing. The film’s portrayal of Turing (played by Benedict Cumberbatch) “conforms to the familiar stereotype of the otherworldly nerd: he’s the kind of guy who doesn’t even understand an invitation to lunch. This places him at odds not only with the other codebreakers in his unit, but also, equally predictably, positions him as a natural rebel.” In fact, Turing was funny and could be quite charming, and got along well with his colleagues and supervisors.

As Caryl points out, these distortions

point to a much broader and deeply regrettable pattern. [Director] Tyldum and [writer] Moore are determined to suggest maximum dramatic tension between their tragic outsider and a blinkered society. (“You will never understand the importance of what I am creating here,” [Turing] wails when Denniston’s minions try to destroy his machine.) But this not only fatally miscasts Turing as a character—it also completely destroys any coherent telling of what he and his colleagues were trying to do.

In reality, Turing was an entirely willing participant in a collective enterprise that featured a host of other outstanding intellects who happily coexisted to extraordinary effect. The actual Denniston, for example, was an experienced cryptanalyst and was among those who, in 1939, debriefed the three Polish experts who had already spent years figuring out how to attack the Enigma, the state-of-the-art cipher machine the German military used for virtually all of their communications. It was their work that provided the template for the machines Turing would later create to revolutionize the British signals intelligence effort. So Turing and his colleagues were encouraged in their work by a military leadership that actually had a pretty sound understanding of cryptological principles and operational security. . . .

The movie version, in short, represents a bizarre departure from the historical record. In fact, Bletchley Park—and not only Turing’s legendary Hut 8—was doing productive work from the very beginning of the war. Within a few years its motley assortment of codebreakers, linguists, stenographers, and communications experts were operating on a near-industrial scale. By the end of the war there were some 9,000 people working on the project, processing thousands of intercepts per day.

The rebel outsider makes for good storytelling, but in most human endeavors, including science, art, and entrepreneurship, it is well-organized groups, not auteurs, who make the biggest breakthroughs.

10 January 2015 at 1:26 pm 1 comment

Bottom-up Approaches to Economic Development

| Peter Klein |

Via Michael Strong, a thoughtful review and critique of Western-style economic development programs and their focus on one-size-fits-all, “big idea” approaches. Writing in the New Republic, Michael Hobbes takes on not only Bono and Jeff Sachs and USAID and the usual suspects, but even the randomized-controlled-trials crowd, or “randomistas,” like Duflo and Banerjee. Instead of searching for the big idea, thinking that “once we identify the correct one, we can simply unfurl it on the entire developing world like a picnic blanket,” we should support local, incremental, experimental attempts to improve social and economic well-being — a Hayekian bottom-up approach.

We all understand that every ecosystem, each forest floor or coral reef, is the result of millions of interactions between its constituent parts, a balance of all the aggregated adaptations of plants and animals to their climate and each other. Adding a non-native species, or removing one that has always been there, changes these relationships in ways that are too intertwined and complicated to predict. . . .

[I]nternational development is just such an invasive species. Why Dertu doesn’t have a vaccination clinic, why Kenyan schoolkids can’t read, it’s a combination of culture, politics, history, laws, infrastructure, individuals—all of a society’s component parts, their harmony and their discord, working as one organism. Introducing something foreign into that system—millions in donor cash, dozens of trained personnel and equipment, U.N. Land Rovers—causes it to adapt in ways you can’t predict.

19 November 2014 at 11:25 am 2 comments

Opportunity Discovery or Judgment? Fargo Edition

| Peter Klein |

In the opportunity-discovery perspective, profits result from the discovery and exploitation of disequilibrium “gaps” in the market. To earn profits an entrepreneur needs superior foresight or perception, but not risk capital or other productive assets. Capital is freely available from capitalists, who supply funds as requested by entrepreneurs but otherwise play a relatively minor, passive role. Residual decision and control rights are second-order phenomena, because the essence of entrepreneurship is alertness, not investing resources under uncertainty.

By contrast, the judgment-based view places capital, ownership, and uncertainty front and center. The essence of entrepreneurship is not ideation or imagination or creativity, but the constant combining and recombining of productive assets under uncertainty, in pursuit of profits. The entrepreneur is thus also a capitalist, and the capitalist is an entrepreneur. We can even imagine the alert individual — the entrepreneur of discovery theory — as a sort of consultant, bringing ideas to the entrepreneur-capitalist, who decides whether or not to act.

A scene from Fargo nicely illustrates the distinction. Protagonist Jerry Lundegaard thinks he’s found (“discovered”) a sure-fire profit opportunity; he just needs capital, which he hopes to get from his wealthy father-in-law Wade. Jerry sees himself as running the show and earning the profits. Wade, however, has other ideas — he thinks he’s making the investment and, if it pays off, pocketing the profits, paying Jerry a finder’s fee for bringing him the idea.

So, I ask you, who is the entrepreneur, Jerry or Wade?

11 November 2014 at 4:53 pm 15 comments

Rich Makadok on Formal Modeling and Firm Strategy

[A guest post from Rich Makadok, lifted from the comments section of the Tirole post below.]

Peter invited me to reply to [Warren Miller’s] comment, so I’ll try to offer a defense of formal economic modeling.

In answering Peter’s invitation, I’m at a bit of a disadvantage because I am definitely NOT an IO economist (perhaps because I actually CAN relax). Rather, I’m a strategy guy — far more interested in studying the private welfare of firms than the public welfare of economies (plus, it pays better and is more fun). So, I am in a much better position to comment on the benefits that the game-theoretic toolbox is currently starting to bring to the strategy field than on the benefits that it has brought to the economics discipline over the last four decades (i.e., since Akerlof’s 1970 Lemons paper really jump-started the trend).

Peter writes, “game theory was supposed to add transparency and ‘rigor’ to the analysis.” I have heard this argument many times (e.g., Adner et al., 2009 AMR), and I think it is a red herring, or at least a side show. Yes, formal modeling does add transparency and rigor, but that’s not its main benefit. If the only benefit of formal modeling were improved transparency and rigor, then I suspect it would never have achieved much influence at all. Formal modeling, like any research tool or method, is best judged according to the degree of insight — not the degree of precision — that it brings to the field.

I can’t think of any empirical researcher who has gained fame merely by finding techniques to reduce the amount of noise in the estimate of a regression parameter that has already been the subject of other previous studies. Only if that improved estimation technique generates results that are dramatically different from previous results (or from expected results) would the improved precision of the estimate matter much — i.e., only if the improved precision led to a valuable new insight. In that case, it would really be the insight that mattered, not the precision. The impact of empirical work is proportionate to its degree of new insight, not to its degree of precision. The excruciatingly unsophisticated empirical methods in Ned Bowman’s highly influential “Risk-Return Paradox” and “Risk-Seeking by Troubled Firms” papers provide a great example of this point.

The same general principle is true of theoretical work as well. I can’t think of any formal modeler who has gained fame merely by sharpening the precision of an existing verbal theory. Such minor contributions, if they get published at all, are barely noticed and quickly forgotten. A formal model only has real impact when it generates some valuable new insight. As with empirics, the insight is what really matters, not the precision.

14 October 2014 at 11:25 am 18 comments

Tirole

| Peter Klein |

As a second-year economics PhD student I took the field sequence in industrial organization. The primary text in the fall course was Jean Tirole’s Theory of Industrial Organization, then just a year old. I found it a difficult book — a detailed overview of the “new,” game-theoretic IO, featuring straightforward explanations, numerous insights, and useful observations, but shot through with brash, unsubstantiated assumptions and written in an extremely terse, almost smug style that rubbed me the wrong way. After all, game theory was supposed to add transparency and “rigor” to the analysis, bringing to light the hidden assumptions of the old-fashioned, verbal models, but Tirole combined math and ad hoc verbal asides in equal measure. (Sample statement: “The Coase theorem (1960) asserts that an optimal allocation of resources can always be achieved through market forces, irrespective of the legal liability assignment, if information is perfect and transactions are costless.” And then: “We conclude that the Coase theorem is unlikely to apply here and that selective government intervention may be desirable.”) Well, that’s the way formal theorists write, and if you know the code and read wisely, you can gain insight into how these economists think about things. But is it the best way to learn about real markets and real competition? Tirole takes it as self-evident that MIT-style theory is a huge advance over the earlier IO literature, which he characterizes as “the old oral tradition of behavioral stories.” He does not, to my knowledge, deal with the “new learning” of the 1960s and 1970s, associated mainly with Chicago economists (but also Austrian and public choice economists), which emphasized informational and incentive problems of regulators as well as firms.

Tirole is one of the most important economists in modern theoretical IO, public economics, regulation, and corporate finance, and it’s no surprise that the Nobel committee honored him with today’s prize. The Nobel PR team struggled to summarize his contributions for the nonspecialist reader (settling on the silly phrase that his work shows how to “tame” big firms), but you can find decent summaries in the usual places (e.g., WSJ, NYT, Economist) and sympathetic, even hagiographic treatments in the blogosphere (Cowen, Gans). By all accounts Tirole is a nice guy and an excellent teacher, as well as the first French economics laureate since Maurice Allais, so bully for him.

I do think Tirole-style IO is an improvement over the old structure-conduct-performance paradigm, which focused on simple correlations rather than causal explanations, eschewed comparative institutional analysis, and modeled regulators as omniscient, benevolent dictators. The newer approach starts with agency theory and information theory — e.g., modeling regulators as imperfectly informed principals and regulated firms as agents whose actions might differ from those preferred by their principals — and thus draws attention to underlying mechanisms, differences in incentives and information, dynamic interaction, and so on. However, the newer approach ultimately rests on the old market structure / market power analysis, in which monopoly is defined as the short-term ability to set price above marginal cost, consumer welfare is measured as the area under the static demand curve, and so on. It’s neoclassical monopoly and competition theory on steroids, and hence sidesteps the interesting objections raised by the Austrians and UCLA price theorists. In other words, the new IO focuses on more complex interactions while still eschewing comparative institutional analysis and modeling regulators as benevolent, albeit imperfectly informed, “social planners.”
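For readers who want the static benchmark spelled out, the apparatus described above can be sketched in standard textbook notation (the symbols here are illustrative, not Tirole’s own):

```latex
% Market power: the Lerner index, the markup of price p over marginal
% cost c, which a profit-maximizing monopolist sets equal to the
% inverse of the absolute demand elasticity.
\[
  \mathcal{L} \;=\; \frac{p - c}{p} \;=\; \frac{1}{|\varepsilon|},
  \qquad \varepsilon \;=\; \frac{dq}{dp}\,\frac{p}{q}
\]
% Consumer welfare: the area under the static demand curve D(p)
% above the price actually paid.
\[
  CS(p) \;=\; \int_{p}^{\infty} D(s)\, ds
\]
% Deadweight loss: the surplus lost between the monopoly quantity q_m
% and the competitive quantity q_c, where P(q) is inverse demand.
\[
  DWL \;=\; \int_{q_m}^{q_c} \bigl( P(q) - c \bigr)\, dq
\]
```

Roughly, the Austrian and UCLA objection is that these areas take the demand curve and cost data as given, leaving no room for the rivalrous discovery and institutional comparison the post says the new IO sidesteps.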

As a student I found Tirole’s analysis extremely abstract, with little attention to how these theories might work in practice. Even Tirole’s later book with Jean-Jacques Laffont, A Theory of Incentives in Procurement and Regulation, is not very applied. But evidently Tirole has played a large personal and professional role in training and advising European regulatory bodies, so his work seems to have had a substantial impact on policy. (See, however, Sam Peltzman’s unflattering review of the 1989 Handbook of Industrial Organization, which complains that game-theoretic IO seems more about solving clever puzzles than understanding real markets.)

13 October 2014 at 2:31 pm 17 comments

SBA Loans Reduce Economic Growth

| Peter Klein |

That’s the conclusion of a new NBER paper by Andy Young, Matthew Higgins, Don Lacombe, and Briana Sell, “The Direct and Indirect Effects of Small Business Administration Lending on Growth: Evidence from U.S. County-Level Data” (ungated version here). “We find evidence that a county’s SBA lending per capita is associated with direct negative effects on its income growth. We also find evidence of indirect negative effects on the growth rates of neighboring counties. Overall, a 10% increase in SBA loans per capita is associated with a cumulative decrease in income growth rates of about 2%.” As the authors point out, SBA loans represent funds that also have alternative uses, and SBA-sponsored clients may not be the most worthy recipients (in terms of generating economic growth).

The results are largely robust and, perhaps more importantly, we never find any evidence of positive growth effects associated with SBA lending. Even when the estimated effects are statistically insignificant, the point estimates are always negative. Our findings suggest that SBA lending to small businesses comes at the cost of loans that would have otherwise been made to more profitable and/or innovative firms. Furthermore, SBA lending in a given county results in negative spillover effects on income growth in neighboring counties. Given the popularity of pro-small business policies, our findings should give reason for policymakers and their constituents to reevaluate their priors.

6 October 2014 at 11:37 am 1 comment

More Skepticism of Behavioral Social Science

| Peter Klein |

Here at O&M we have been somewhat skeptical of the behavioral social science literature. Sure, in laboratory experiments, people often behave in ways inconsistent with “rational” behavior (as defined by neoclassical economics). Yes, people seem to use various rules of thumb in making complex decisions. And yet, it’s not clear that the huge literature on such biases and heuristics tells us much we don’t already know.

An interesting essay by Steven Poole argues the behavioralists’ claims are overstated, mainly by relying on a narrow, superficial notion of rationality as the benchmark case. Contemporary psychology suggests that people interpret the questions posed in laboratory experiments in a nuanced, contextual manner in which their seemingly “irrational” answers are actually reasonable.

There are many other good reasons to give ‘wrong’ answers to questions that are designed to reveal cognitive biases. The cognitive psychologist Jonathan St B T Evans was one of the first to propose a ‘dual-process’ picture of reasoning in the 1980s, but he resists talk of ‘System 1’ and ‘System 2’ as though they are entirely discrete, and argues against the automatic inference from bias to irrationality. . . . In general, Evans concludes that a ‘strictly logical’ answer will be less ‘adaptive to everyday needs’ for most people in many such cases of deductive reasoning. ‘A related finding,’ he continues, ‘is that, even though people may be told to assume the premises of arguments are true, they are reluctant to draw conclusions if they personally do not believe the premises. In real life, of course, it makes perfect sense to base your reasoning only on information that you believe to be true.’ In any contest between what ‘makes perfect sense’ in normal life and what is defined as ‘rational’ by economists or logicians, you might think it rational, according to a more generous meaning of that term, to prefer the former. Evans concludes: ‘It is far from clear that such biases should be regarded as evidence of irrationality.’

Poole also argues strongly against the liberal-paternalist “nudges” advocated by Cass Sunstein and Richard Thaler, noting that “there is something troubling about the way in which [nudging] is able to marginalise political discussion.” Moreover, “nudge politics is at odds with public reason itself: its viability depends precisely on the public not overcoming their biases.” Poole concludes:

[T]here is less reason than many think to doubt humans’ ability to be reasonable. The dissenting critiques of the cognitive-bias literature argue that people are not, in fact, as individually irrational as the present cultural climate assumes. And proponents of debiasing argue that we can each become more rational with practice. But even if we each acted as irrationally as often as the most pessimistic picture implies, that would be no cause to flatten democratic deliberation into the weighted engineering of consumer choices, as nudge politics seeks to do. On the contrary, public reason is our best hope for survival. Even a reasoned argument to the effect that human rationality is fatally compromised is itself an exercise in rationality. Albeit rather a perverse, and – we might suppose – ultimately self-defeating one.

Worth a read. Even climate-change skepticism gets a nod, in a form consistent with some reflections here.

5 October 2014 at 11:08 pm 1 comment

“Why Managers Still Matter”

| Nicolai Foss |

Here is a recent MIT Sloan Management Review piece by Peter and me, “Why Managers Still Matter.” We pick up on a number of themes of our 2012 book Organizing Entrepreneurial Judgment. A brief excerpt:

“Wikifying” the modern business has become a call to arms for some management scholars and pundits. As Tim Kastelle, a leading scholar on innovation management at the University of Queensland Business School in Australia, wrote: “It’s time to start reimagining management. Making everyone a chief is a good place to start.”

Companies, some of which operate in very traditional market sectors, have been crowing for years about their systems for “managing without managers” and how market forces and well-designed incentives can help decentralize management and motivate employees to take the initiative. . . .

From our perspective, the view that executive authority is increasingly passé is wrong. Indeed, we have found that it is essential in situations where (1) decisions are time-sensitive; (2) key knowledge is concentrated within the management team; and (3) there is need for internal coordination. . . . Such conditions are hallmarks of our networked, knowledge-intensive and hypercompetitive economy.

29 September 2014 at 10:02 am 8 comments

Authors

Nicolai J. Foss
Peter G. Klein
Richard Langlois
Lasse B. Lien

Our Recent Books

Nicolai J. Foss and Peter G. Klein, Organizing Entrepreneurial Judgment: A New Approach to the Firm (Cambridge University Press, 2012).
Peter G. Klein and Michael E. Sykuta, eds., The Elgar Companion to Transaction Cost Economics (Edward Elgar, 2010).
Peter G. Klein, The Capitalist and the Entrepreneur: Essays on Organizations and Markets (Mises Institute, 2010).
Richard N. Langlois, The Dynamics of Industrial Capitalism: Schumpeter, Chandler, and the New Economy (Routledge, 2007).
Nicolai J. Foss, Strategy, Economic Organization, and the Knowledge Economy: The Coordination of Firms and Resources (Oxford University Press, 2005).
Raghu Garud, Arun Kumaraswamy, and Richard N. Langlois, eds., Managing in the Modular Age: Architectures, Networks and Organizations (Blackwell, 2003).
Nicolai J. Foss and Peter G. Klein, eds., Entrepreneurship and the Firm: Austrian Perspectives on Economic Organization (Elgar, 2002).
Nicolai J. Foss and Volker Mahnke, eds., Competence, Governance, and Entrepreneurship: Advances in Economic Strategy Research (Oxford, 2000).
Nicolai J. Foss and Paul L. Robertson, eds., Resources, Technology, and Strategy: Explorations in the Resource-based Perspective (Routledge, 2000).