Posts filed under ‘Methods/Methodology/Theory of Science’
| Peter Klein |
Because we’ve been somewhat skeptical of randomized-controlled trials — not the technique itself, but the way it is over-hyped by its proponents — you may enjoy Angus Deaton’s critique of RCTs in development economics. I learned of Deaton’s arguments from this excellent piece by Chris Blattman in Foreign Policy. Here is the key paper, Deaton’s 2008 Keynes Lecture at the British Academy.
Instruments of Development: Randomization in the Tropics, and the Search for the Elusive Keys to Economic Development
There is currently much debate about the effectiveness of foreign aid and about what kind of projects can engender economic development. There is skepticism about the ability of econometric analysis to resolve these issues, or of development agencies to learn from their own experience. In response, there is movement in development economics towards the use of randomized controlled trials (RCTs) to accumulate credible knowledge of what works, without over-reliance on questionable theory or statistical methods. When RCTs are not possible, this movement advocates quasi-randomization through instrumental variable (IV) techniques or natural experiments. I argue that many of these applications are unlikely to recover quantities that are useful for policy or understanding: two key issues are the misunderstanding of exogeneity, and the handling of heterogeneity. I illustrate from the literature on aid and growth. Actual randomization faces similar problems as quasi-randomization, notwithstanding rhetoric to the contrary. I argue that experiments have no special ability to produce more credible knowledge than other methods, and that actual experiments are frequently subject to practical problems that undermine any claims to statistical or epistemic superiority. I illustrate using prominent experiments in development. As with IV methods, RCT-based evaluation of projects is unlikely to lead to scientific progress in the understanding of economic development. I welcome recent trends in development experimentation away from the evaluation of projects and towards the evaluation of theoretical mechanisms.
Blattman says Deaton has a new paper that presents a more nuanced critique, but it is apparently not online. I’ll share more when I have it.
| Peter Klein |
Justin Fox reports on a recent high-powered behavioral economics conference featuring Raj Chetty, David Laibson, Antoinette Schoar, Maya Shankar, and other important contributors to this growing research stream. But he refers also to the “Summers critique,” the idea that key findings in behavioral economics research sound like recycled wisdom from business practitioners.
Summers [in 2012] told a story about a college acquaintance who as a cruel prank signed up another classmate for 60 different subscriptions of the Book-of-the-Month-Club ilk. The way these clubs worked is that once you signed up, you got a book in the mail every month and were charged for it unless you (a) sent the book back within a certain period of time or (b) went through the hassle of extricating yourself from the club altogether. Customers had to opt out in order to not keep buying books, so they bought more books than they otherwise would have. Book marketers, Summers said, had figured out the power of defaults long before economists had.
More generally, Fox asks, “Have behavioral economists really discovered anything new, or have they simply replaced some wrong-headed notions of post-World War II economics with insights that people in business have understood for decades and maybe even centuries?”
I took exactly the Summers line in a 2010 post, observing that behavioral economics “often seems to restate common, obvious, well-known ideas as if they are really novel insights (e.g., that preferences aren’t stable and predictable over time). More novel propositions are questionable at best.” I used a Dan Ariely column on compensation policy as an example:
He claims as a unique insight of behavioral economics that when people are evaluated according to quantitative measures of performance, they tend to focus on the measures, not the underlying behavior being measured. Well, duh. This is pretty much a staple of introductory lectures on agency theory (and features prominently in Steve Kerr’s classic 1975 article). Ariely goes on to suggest that CEOs should be rewarded not on the basis of a single measure of performance, but on multiple measures. Double-duh. Holmström (1979) called this the “informativeness principle,” and it’s in all the standard textbooks on contract design and compensation structure (e.g., Milgrom and Roberts, Brickley et al.). (Of course, agency theory also recognizes that gathering information is costly, and that additional metrics are valuable, on the margin, only if the benefits exceed the costs, a point unmentioned by Ariely.)
Maybe Larry and I should hang out.
| Peter Klein |
This is actually Richard Epstein writing about Henry Manne, but Richard nicely captures the essence of Henry’s thinking:
The combination of law and economics is a major discipline in … modern law schools, but I do not think that it was always presented to Henry’s liking. In his view, the student’s purpose was to show the power of markets to overcome key problems of information and coordination, not to run a set of exhaustive empirical studies to show that corporate boards would function better if they increased their number of independent directors by 5 percent.
Other Manne items on O&M are here. As I noted in another post, Manne was an expert in specific technical areas of law (most obviously, insider trading and corporate takeovers) but very much a generalist in his overall outlook. As Manne once recalled about a 1962 seminar led by Armen Alchian, “All of a sudden, everything that I had done intellectually for thirteen years came together, with this one idea of Alchian’s about the real nature of property rights and the Misesian notion of people making choices, with every choice being a tradeoff.” In other words, a simple and powerful theoretical framework goes a long way in analyzing a broad range of issues — much different from today’s emphasis on behavioral quirks, clever experiments, and similar minutiae.
| Peter Klein |
Thanks to Andrew for the pointer to this weekend’s Reading-UNCTAD International Business Conference featuring Mark Casson, Tim Devinney, Marcus Larsen, and many others. Mark’s talk (not yet online) focused on the need for methodological individualism in international business research. “Firms don’t take decisions, individuals do. When you say that a firm pursued an international strategy, you really mean that the CEO persuaded the individuals on the board to go along with his or her strategy.” As Andrew summarizes:
Casson spoke at great length about the need for research that focuses on named individuals, is based on the extensive study of primary sources in archives, takes social and political context into account, and which looks at case studies of entrepreneurs in different time periods. In effect, he was calling for the re-integration of Business History into International Business research.
And a renewed emphasis on entrepreneurship, not as a standalone subject dealing with startups or self-employment, but as central to the study of organizations — a theme heartily endorsed on this blog.
| Peter Klein |
Quick, what do the following articles have in common?
- Maslow, Abraham. 1943. “A theory of human motivation.” Psychological Review 50(4): 370-396.
- Forrester, Jay W. 1958. “Industrial dynamics: a major breakthrough for decision makers.” Harvard Business Review 36(4): 37-66.
- Fisher, Irving. 1933. “The debt-deflation theory of great depressions.” Econometrica 1(4): 337-357.
- Fornell, Claes, and David F. Larcker. 1981. “Evaluating structural equation models with unobservable variables and measurement error.” Journal of Marketing Research 18(1): 39-50.
- Wechsler, Herbert. 1959. “Toward neutral principles of constitutional law.” Harvard Law Review 73(1): 1-35.
- Ellsberg, Daniel. 1961. “Risk, ambiguity, and the Savage axioms.” Quarterly Journal of Economics 75(4): 643-669.
All are designated as “sleeping beauties,” papers that lie dormant for years after publication, then suddenly become highly influential. The term was coined by Anthony van Raan, but sleeping beauties were thought to be rare. A new paper in PNAS by Qing Ke, Emilio Ferrara, Filippo Radicchi, and Alessandro Flammini finds, by contrast, that sleeping beauties are fairly common. Formally, “The beauty coefficient value B for a given paper is based on the comparison between its citation history and a reference line that is determined only by its publication year, the maximum number of citations received in a year (within a multiyear observation period), and the year when such maximum is achieved.” The authors take a large sample of papers from the American Physical Society and Web of Science and identify, describe, and analyze some prominent sleeping beauties. They focus mostly on the physical sciences, but include a few social science datasets in an online appendix, where several papers, including those listed above, turn up. (Most of the sleeping beauties in their social science sample are either experimental psychology papers or statistical and methodological papers, not contributions to core social science theory or application.) I assume the social science papers also come from Web of Science, which may not cover journals like Economica (hence no Coase 1937), so the list above is not fully representative.
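For the curious, here is a rough sketch in Python of how a beauty coefficient along the lines of the quoted description might be computed: the reference line runs from the publication-year citation count to the peak-year count, and B accumulates the (normalized) gap between that line and the actual citation history. The function name and the toy citation histories are mine; anyone replicating the result should consult the paper’s formal definition.

```python
def beauty_coefficient(citations):
    """Sketch of a 'beauty coefficient' B in the spirit of Ke et al. (PNAS 2015).

    citations[t] is the number of citations received t years after publication.
    The reference line runs from (0, c0) to (tm, cm), where tm is the year of
    peak citations; B sums the normalized gap between that line and the actual
    citation history up to the peak.
    """
    c0 = citations[0]
    tm = max(range(len(citations)), key=lambda t: citations[t])
    if tm == 0:
        return 0.0  # peak in the publication year: no dormancy at all
    cm = citations[tm]
    slope = (cm - c0) / tm
    return sum((slope * t + c0 - citations[t]) / max(1, citations[t])
               for t in range(tm + 1))

# A 'sleeping beauty' profile: near-dormant for years, then a sudden spike.
sleeper = [1, 0, 1, 0, 1, 2, 1, 2, 40]
# A steadily cited paper tracks its own reference line, so B is small.
steady = [5, 10, 15, 20, 25, 30, 35, 40, 45]
```

On these toy histories the sleeper scores far higher than the steadily cited paper, which is the intended behavior: B rewards a long gap between early neglect and eventual peak impact.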
Anyway, this should provoke some interesting discussion about the diffusion of knowledge. The presence of sleeping beauties could simply mean that some discoveries are difficult to understand and take a while to be appreciated, but could also reflect bandwagon effects, faddish citation practices, and other phenomena that cast doubt on the Whig theory of science.
| Peter Klein |
Scientific progress, like economic progress, largely consists of combining and recombining existing resources and knowledge. At least that’s the way I interpret a new paper from Santa Fe Institute researchers Hyejin Youn, Luis Bettencourt, Jose Lobo, and Deborah Strumsky, “Invention as a Combinatorial Process: Evidence from US Patents” (via Steve Fiore):
Invention has been commonly conceptualized as a search over a space of combinatorial possibilities. Despite the existence of a rich literature, spanning a variety of disciplines, elaborating on the recombinant nature of invention, we lack a formal and quantitative characterization of the combinatorial process underpinning inventive activity. Here, we use US patent records dating from 1790 to 2010 to formally characterize invention as a combinatorial process. To do this, we treat patented inventions as carriers of technologies and avail ourselves of the elaborate system of technology codes used by the United States Patent and Trademark Office to classify the technologies responsible for an invention’s novelty. We find that the combinatorial inventive process exhibits an invariant rate of ‘exploitation’ (refinements of existing combinations of technologies) and ‘exploration’ (the development of new technological combinations). This combinatorial dynamic contrasts sharply with the creation of new technological capabilities—the building blocks to be combined—that has significantly slowed down. We also find that, notwithstanding the very reduced rate at which new technologies are introduced, the generation of novel technological combinations engenders a practically infinite space of technological configurations.
Or, as the Santa Fe press release puts it, “Most new patents are combinations of existing ideas and pretty much always have been, even as the stream of fundamentally new core technologies has slowed.” See also the authors’ earlier paper, “Atypical Combinations and Scientific Impact.”
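To make the exploitation/exploration distinction concrete, here is a toy sketch (the data structures, names, and example codes are mine, not the authors’): each patent is modeled as a set of technology codes, and a patent counts as an “exploration” if its exact combination of codes has never appeared before, an “exploitation” if it refines an already-seen combination.

```python
def classify_inventions(patents):
    """Toy illustration of the exploitation/exploration distinction.

    Each patent is a collection of technology codes. A patent whose
    combination of codes has appeared before is labeled 'exploitation'
    (a refinement of an existing combination); a never-before-seen
    combination is labeled 'exploration'.
    """
    seen = set()
    labels = []
    for codes in patents:
        combo = frozenset(codes)  # order of codes doesn't matter
        labels.append("exploitation" if combo in seen else "exploration")
        seen.add(combo)
    return labels

patents = [("A", "B"), ("A", "C"), ("A", "B"), ("B", "C"), ("A", "C")]
# → ['exploration', 'exploration', 'exploitation', 'exploration', 'exploitation']
```

The paper’s finding, in these terms, is that the mix of the two labels has stayed roughly invariant over two centuries of patenting, even as the rate at which genuinely new codes (the building blocks themselves) appear has slowed.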
| Peter Klein |
Great illustration from the Mad Scientist Confectioner’s Club (via Fan Xia).