Posts filed under ‘Myths and Realities’
| Peter Klein |
US Defense Secretary Ash Carter is making the rounds with a speech about ISIL being a “cancer” that must be cured with aggressive treatment. “[L]ike all cancers, you can’t cure the disease just by cutting out the tumor. You have to eliminate it wherever it has spread, and stop it from coming back. . . . . [We have] three military objectives: One, destroy the ISIL parent tumor in Iraq and Syria by collapsing its two power centers in Mosul, Iraq and Raqqah, Syria. . . . Two, combat the emerging metastases of the ISIL tumor worldwide wherever it appears. . . .” Terrorism, in other words, is a cancer metastasizing from the underlying tumor of Islamic fundamentalism.
This language may rally the troops, but it is particularly unhelpful for understanding the nature, antecedents, and consequences of terrorism, or its remedy. As Robert Pape, David Card, and other social scientists have shown, terrorism is a tactic, a form of purposeful human action, and should be understood as such, not as a mindless, undirected biological phenomenon.
Edith Penrose warned more than sixty years ago about the limits of biological analogies in understanding social issues. “The chief danger of carrying sweeping analogies very far is that the problems they are designed to illuminate become framed in such a special way that significant matters are frequently inadvertently obscured. Biological analogies contribute little either to the theory of price or to the theory of growth and development of firms and in general tend to confuse the nature of the important issues.” I have written before about the problem of treating gun violence as a disease, rather than a legal, social, and criminological issue. To understand why people shoot guns, on purpose or accidentally, we need to focus on their preferences, beliefs, and actions. (This does not imply some kind of straw-man “rationality,” by the way.) Likewise, if we want to reduce terrorist acts, we should treat terrorism as a military tactic, designed to achieve specific ends, rather than a disease or epidemic whose “growth” we have to stop.
Update (27 Jan): From David Levine I learn of another example of a medical researcher trying to address a social science problem without apparently understanding the concept of selection bias (he samples on the dependent variable).
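Sampling on the dependent variable is easy to demonstrate with a toy simulation (the numbers below are invented purely for illustration). If a researcher studies only the "successes" — cases where the outcome cleared some threshold — the estimated relationship between cause and outcome is biased, here attenuated toward zero:

```python
import random

random.seed(0)

def simulate(n=10_000):
    """Draw (x, y) pairs with a true slope of 2: y = 2x + noise."""
    pairs = []
    for _ in range(n):
        x = random.gauss(0, 1)
        y = 2 * x + random.gauss(0, 1)
        pairs.append((x, y))
    return pairs

def ols_slope(pairs):
    """Ordinary least-squares slope of y on x."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    var = sum((x - mx) ** 2 for x, _ in pairs)
    return cov / var

data = simulate()
full_slope = ols_slope(data)                   # close to the true value, 2
selected = [(x, y) for x, y in data if y > 1]  # keep only the "successes"
selected_slope = ols_slope(selected)           # attenuated toward zero
```

Conditioning the sample on the outcome truncates its variation, so the researcher who studies only high-y cases systematically understates the effect of x.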
| Peter Klein |
As a skeptic of the evidence-based management movement (championed by Pfeffer, Sutton, et al.) I was amused by a recent spoof article in the Journal of Evaluation in Clinical Practice, “Maternal Kisses Are Not Effective in Alleviating Minor Childhood Injuries (Boo-Boos): A Randomized, Controlled, and Blinded Study,” authored by the Study of Maternal and Child Kissing (SMACK) Working Group. Maternal kisses were associated with a positive and statistically significant increase in the Toddler Discomfort Index (TDI):
Maternal kissing of boo-boos confers no benefit on children with minor traumatic injuries compared to both no intervention and sham kissing. In fact, children in the maternal kissing group were significantly more distressed at 5 minutes than were children in the no intervention group. The practice of maternal kissing of boo-boos is not supported by the evidence and we recommend a moratorium on the practice.
The actual author, Mark Tonelli, is a prominent critic of evidence-based medicine, which the journal’s editor describes as a “collapsing” movement and which a recent British Medical Journal editorial calls a “movement in crisis.” Most of the criticisms of evidence-based medicine will sound familiar to Austrian economists: overreliance on statistically significant, but clinically irrelevant, findings in large samples; failure to appreciate context and interpretation; lack of attention to underlying mechanisms rather than unexplained correlations; and a general disdain for tacit knowledge and understanding.
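The first of these criticisms — statistical significance without clinical relevance — is easy to make concrete with a simulation (the sample sizes and effect size below are made up for illustration). With a large enough sample, a trivially small true effect produces an impressive test statistic:

```python
import math
import random

random.seed(1)

# A treatment whose true effect is trivially small: 0.01 standard deviations.
n = 500_000                                 # very large sample per arm
control = [random.gauss(0.00, 1.0) for _ in range(n)]
treated = [random.gauss(0.01, 1.0) for _ in range(n)]

diff = sum(treated) / n - sum(control) / n  # estimated effect, about 0.01
se = math.sqrt(2.0 / n)                     # standard error (unit-variance arms)
z = diff / se                               # comfortably above 1.96
```

The z-statistic clears any conventional significance threshold, yet the effect itself is a hundredth of a standard deviation — statistically "real" and practically meaningless.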
My guess is that evidence-based management, which is modeled after evidence-based medicine, is in for a similarly rocky ride. Teppo had some interesting orgtheory posts on this a few years ago (e.g., here and here). Evidence-based management has been criticized, as you might expect, by critical theorists and other postmodernists who don’t like the concept of “evidence” per se, but the real problems are more mundane: what counts as evidence, and what conclusions can legitimately be drawn from this evidence, are far from obvious in most cases. Particularly in entrepreneurial settings, as we’ve written often on these pages, intuition, Verstehen, or judgment may be more reliable guides than quantitative, analytical reasoning.
Update: Thanks to Ivan Zupic for pointing me to a review and critique of EBM in the current issue of AMLE.
| Peter Klein |
We’ve written before on the institutions of scientific research, which, like other human activities, involves expenditures of scarce resources, has benefits and costs that can be evaluated on the margin, and is affected by the preferences, beliefs, and incentives of scientific personnel (1, 2, 3). This sounds trite, but the view persists, especially among mainstream journalists, that science is fundamentally different, that scientists are disinterested truth-seekers immune from institutional and organizational constraints. This is the default assumption about scientists working within the general consensus of their discipline. By contrast, critics of the consensus position, whether inside or outside the core discipline, are presumed to be motivated by ideology or private interest.
You don’t need to be Thomas Kuhn, Imre Lakatos, or any modern historian or philosopher of science to find this asymmetry puzzling. But it is the usual assumption in particular areas, most notably climate science. A good example is this recent New York Times piece by Justin Gillis, “Short Answers to Hard Questions About Climate Change.” In response to the question, “Why do people question climate change?” Gillis gives us ideology and private interests.
Most of the attacks on climate science are coming from libertarians and other political conservatives who do not like the policies that have been proposed to fight global warming. Instead of negotiating over those policies and trying to make them more subject to free-market principles, they have taken the approach of blocking them by trying to undermine the science.
This ideological position has been propped up by money from fossil-fuel interests, which have paid to create organizations, fund conferences and the like. The scientific arguments made by these groups usually involve cherry-picking data, such as focusing on short-term blips in the temperature record or in sea ice, while ignoring the long-term trends.
Ignore the saucy rhetoric (critics of the consensus view don’t just question the theory or evidence, they “attack climate science”), and note that for Gillis, opposition to the mainstream view is a puzzle to be explained, and the most likely candidates are ideology and special interests. Honest disagreement is ruled out (though earlier in the piece he recognizes the vast uncertainties involved in climate research). Why so many scientists, private and public organizations, firms, etc. support the mainstream position is not, in Gillis’s opinion, worth exploring. It’s Because Science. The fact that billions of dollars are flowing into climate research — a flow that would slow to a trickle if policymakers believed that man-made carbon emissions are not contributing to global warming — apparently has no effect on scientific practice. The fact that many climate-change proponents are, in general, ideologically predisposed to policies that impose greater government control over markets, that reduce industrial activity, that favor particular technologies and products over others is, again, irrelevant.
Of course, I’m not claiming that climate scientists in or outside the mainstream consensus are fanatics or money-grubbers. I’m saying you can’t have it both ways. If ideology and private interests are relevant on one side of a debate, they’re relevant on the other side as well. Perhaps the ideology and private interests of New York Times writers blind them to this simple point.
| Dick Langlois |
Further to Peter’s post on government science funding: I just received, hot off the (physical) press, a copy of Shane Greenstein’s new book How the Internet Became Commercial (Princeton, 2015). Among the myths that Greenstein — now apparently at Harvard Business School — debunks is the idea that the Internet was in any sense a product of government industrial policy. Although government had many varied and uncoordinated influences on the development of the technology, the emergence of the Internet was ultimately an example of what the economic historian Robert Allen called collective invention. It was very much a spontaneous process. And it was not fundamentally different from other episodes of technological change in history.
| Peter Klein |
This piece by Matt Ridley builds on Terence Kealey’s critique of government science funding, and also echoes Nathan Rosenberg’s critique of the linear model of science and technology. We have pointed out similarly that arguments for public science funding are usually not very scientific.
When you examine the history of innovation, you find, again and again, that scientific breakthroughs are the effect, not the cause, of technological change. It is no accident that astronomy blossomed in the wake of the age of exploration. The steam engine owed almost nothing to the science of thermodynamics, but the science of thermodynamics owed almost everything to the steam engine. The discovery of the structure of DNA depended heavily on X-ray crystallography of biological molecules, a technique developed in the wool industry to try to improve textiles.
Technological advances are driven by practical men who tinkered until they had better machines; abstract scientific rumination is the last thing they do. As Adam Smith, looking around the factories of 18th-century Scotland, reported in “The Wealth of Nations”: “A great part of the machines made use of in manufactures…were originally the inventions of common workmen,” and many improvements had been made “by the ingenuity of the makers of the machines.”
It follows that there is less need for government to fund science: Industry will do this itself. Having made innovations, it will then pay for research into the principles behind them. Having invented the steam engine, it will pay for thermodynamics.
I have argued repeatedly against the “laundry list” rationale for public funding, the listing of technologies and products that came out of government programs, as if that were justification for these programs. Ridley agrees:
Given that government has funded science munificently from its huge tax take, it would be odd if it had not found out something. This tells us nothing about what would have been discovered by alternative funding arrangements.
And we can never know what discoveries were not made because government funding crowded out philanthropic and commercial funding, which might have had different priorities.
| Peter Klein |
Because we’ve been somewhat skeptical of randomized-controlled trials — not the technique itself, but the way it is over-hyped by its proponents — you may enjoy Angus Deaton’s critique of RCTs in development economics. I learned of Deaton’s arguments from this excellent piece by Chris Blattman in Foreign Policy. Here is the key paper, Deaton’s 2008 Keynes Lecture at the British Academy.
Instruments of Development: Randomization in the Tropics, and the Search for the Elusive Keys to Economic Development
There is currently much debate about the effectiveness of foreign aid and about what kind of projects can engender economic development. There is skepticism about the ability of econometric analysis to resolve these issues, or of development agencies to learn from their own experience. In response, there is movement in development economics towards the use of randomized controlled trials (RCTs) to accumulate credible knowledge of what works, without over-reliance on questionable theory or statistical methods. When RCTs are not possible, this movement advocates quasi-randomization through instrumental variable (IV) techniques or natural experiments. I argue that many of these applications are unlikely to recover quantities that are useful for policy or understanding: two key issues are the misunderstanding of exogeneity, and the handling of heterogeneity. I illustrate from the literature on aid and growth. Actual randomization faces similar problems as quasi-randomization, notwithstanding rhetoric to the contrary. I argue that experiments have no special ability to produce more credible knowledge than other methods, and that actual experiments are frequently subject to practical problems that undermine any claims to statistical or epistemic superiority. I illustrate using prominent experiments in development. As with IV methods, RCT-based evaluation of projects is unlikely to lead to scientific progress in the understanding of economic development. I welcome recent trends in development experimentation away from the evaluation of projects and towards the evaluation of theoretical mechanisms.
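Deaton's point about the handling of heterogeneity can be made concrete with a toy simulation (the subgroup labels and effect sizes below are invented for illustration). A randomized trial cleanly recovers the average treatment effect, but that average can conceal the fact that the same intervention helps one subgroup and harms another:

```python
import random

random.seed(2)

# Two equal-sized latent subgroups: the treatment helps type "A" (+2.0)
# and harms type "B" (-1.0). The average treatment effect is +0.5.
TRUE_EFFECT = {"A": 2.0, "B": -1.0}

rows = []
for i in range(100_000):
    group = "A" if i % 2 == 0 else "B"
    treated = random.random() < 0.5          # randomized assignment
    y = random.gauss(0, 1) + (TRUE_EFFECT[group] if treated else 0.0)
    rows.append((group, treated, y))

def diff_in_means(rows):
    """Simple experimental estimator: treated mean minus control mean."""
    t = [y for _, d, y in rows if d]
    c = [y for _, d, y in rows if not d]
    return sum(t) / len(t) - sum(c) / len(c)

ate = diff_in_means(rows)                                   # about +0.5
effect_a = diff_in_means([r for r in rows if r[0] == "A"])  # about +2.0
effect_b = diff_in_means([r for r in rows if r[0] == "B"])  # about -1.0
```

A policymaker who sees only the positive average effect might scale up a program that actively hurts half the population — which is one reason Deaton prefers evaluating theoretical mechanisms over evaluating projects.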
Blattman says Deaton has a new paper that presents a more nuanced critique, but it is apparently not online. I’ll share more when I have it.
| Peter Klein |
Justin Fox reports on a recent high-powered behavioral economics conference featuring Raj Chetty, David Laibson, Antoinette Schoar, Maya Shankar, and other important contributors to this growing research stream. But he refers also to the “Summers critique,” the idea that key findings in behavioral economics research sound like recycled wisdom from business practitioners.
Summers [in 2012] told a story about a college acquaintance who as a cruel prank signed up another classmate for 60 different subscriptions of the Book-of-the-Month-Club ilk. The way these clubs worked is that once you signed up, you got a book in the mail every month and were charged for it unless you (a) sent the book back within a certain period of time or (b) went through the hassle of extricating yourself from the club altogether. Customers had to opt out in order to not keep buying books, so they bought more books than they otherwise would have. Book marketers, Summers said, had figured out the power of defaults long before economists had.
More generally, Fox asks, “Have behavioral economists really discovered anything new, or have they simply replaced some wrong-headed notions of post-World War II economics with insights that people in business have understood for decades and maybe even centuries?”
I took exactly the Summers line in a 2010 post, observing that behavioral economics “often seems to restate common, obvious, well-known ideas as if they are really novel insights (e.g., that preferences aren’t stable and predictable over time). More novel propositions are questionable at best.” I used a Dan Ariely column on compensation policy as an example:
He claims as a unique insight of behavioral economics that when people are evaluated according to quantitative measures of performance, they tend to focus on the measures, not the underlying behavior being measured. Well, duh. This is pretty much a staple of introductory lectures on agency theory (and features prominently in Steve Kerr’s classic 1975 article). Ariely goes on to suggest that CEOs should be rewarded not on the basis of a single measure of performance, but multiple measures. Double-duh. Holmström (1979) called this the “informativeness principle” and it’s in all the standard textbooks on contract design and compensation structure (e.g., Milgrom and Roberts, Brickley et al., etc.). (Of course, agency theory also recognizes that gathering information is costly, and that additional metrics are valuable, on the margin, only if the benefits exceed the costs, a point unmentioned by Ariely.)
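The logic behind the informativeness principle can be sketched numerically. Assuming, purely for illustration, two independent unbiased performance signals with noise variances 1 and 4, precision weighting shows why even a noisier second measure improves the estimate of effort when it costs nothing to observe:

```python
# Minimal sketch of the informativeness principle with made-up variances.
# Each signal equals true effort plus independent noise; the optimal linear
# combination weights each signal in proportion to its precision (1/variance).

v1, v2 = 1.0, 4.0                  # noise variance of each performance measure
w1 = (1 / v1) / (1 / v1 + 1 / v2)  # precision weighting: w1 = 0.8
w2 = 1 - w1                        # w2 = 0.2

# Variance of the combined estimator: 1 / (1/v1 + 1/v2) = 0.8,
# lower than relying on even the better single measure (variance 1.0).
combined_var = w1 ** 2 * v1 + w2 ** 2 * v2
```

The textbook qualification follows immediately: if observing the second signal costs more than the variance reduction is worth, the extra metric should be dropped — the marginal benefit/cost comparison Ariely omits.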
Maybe Larry and I should hang out.