Posts filed under ‘Methods/Methodology/Theory of Science’
| Peter Klein |
As a skeptic of the evidence-based management movement (championed by Pfeffer, Sutton, et al.), I was amused by a recent spoof article in the Journal of Evaluation in Clinical Practice, “Maternal Kisses Are Not Effective in Alleviating Minor Childhood Injuries (Boo-Boos): A Randomized, Controlled, and Blinded Study,” authored by the Study of Maternal and Child Kissing (SMACK) Working Group. Maternal kisses were associated with a positive and statistically significant increase in the Toddler Discomfort Index (TDI):
Maternal kissing of boo-boos confers no benefit on children with minor traumatic injuries compared to both no intervention and sham kissing. In fact, children in the maternal kissing group were significantly more distressed at 5 minutes than were children in the no intervention group. The practice of maternal kissing of boo-boos is not supported by the evidence and we recommend a moratorium on the practice.
The actual author, Mark Tonelli, is a prominent critic of evidence-based medicine, a movement described by the journal’s editor as “collapsing” and, in a recent British Medical Journal editorial, as being “in crisis.” Most of the criticisms of evidence-based medicine will sound familiar to Austrian economists: overreliance on statistically significant, but clinically irrelevant, findings in large samples; failure to appreciate context and interpretation; lack of attention to underlying mechanisms rather than unexplained correlations; and a general disdain for tacit knowledge and understanding.
My guess is that evidence-based management, which is modeled after evidence-based medicine, is in for a similarly rocky ride. Teppo had some interesting orgtheory posts on this a few years ago (e.g., here and here). Evidence-based management has been criticized, as you might expect, by critical theorists and other postmodernists who don’t like the concept of “evidence” per se, but the real problems are more mundane: what counts as evidence, and what conclusions can legitimately be drawn from that evidence, are far from obvious in most cases. Particularly in entrepreneurial settings, as we’ve written often on these pages, intuition, Verstehen, or judgment may be more reliable guides than quantitative, analytical reasoning.
Update: Thanks to Ivan Zupic for pointing me to a review and critique of EBM in the current issue of AMLE.
| Peter Klein |
Pierre Azoulay has written a number of important and interesting papers on the economics and sociology of science: How does teamwork affect science? What are the relationships among scientists and students, collaborators, and rivals? A new paper with Christian Fons-Rosen and Joshua S. Graff Zivin uses the unexpected death of a “star” scientist as an exogenous shock to identify the impact of the star on her field. The main result — that stars matter — is perhaps not surprising, but the magnitude of the effect is remarkable.
Consistent with previous research, the flow of articles by collaborators into affected fields decreases precipitously after the death of a star scientist (relative to control fields). In contrast, we find that the flow of articles by non-collaborators increases by 8% on average. These additional contributions are disproportionately likely to be highly cited. They are also more likely to be authored by scientists who were not previously active in the deceased superstar’s field. Overall, these results suggest that outsiders are reluctant to challenge leadership within a field when the star is alive and that a number of barriers may constrain entry even after she is gone.
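For readers who want to see the shape of the comparison, here is a minimal sketch of the difference-in-differences logic behind this kind of treated-versus-control-field design. Everything below is simulated and illustrative: the publication counts, the 8% post-death lift, and all variable names are my assumptions, not the authors’ data or code.

```python
# A minimal sketch (my simulation, not the authors' data or code) of the
# difference-in-differences logic behind the star-scientist design:
# "treated" fields lose a star at a known date; "control" fields do not.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_fields, n_years, death_year = 200, 10, 5

rows = []
for f in range(n_fields):
    treated = f < n_fields // 2            # half the fields lose their star
    base = rng.normal(50, 5)               # field-specific publication level
    for t in range(n_years):
        post = t >= death_year
        # Assumed effect, loosely echoing the paper's 8% figure: output
        # rises after the star's death in treated fields.
        lift = 0.08 * base if (treated and post) else 0.0
        rows.append(dict(field=f, pubs=base + lift + rng.normal(0, 2),
                         treated=int(treated), post=int(post)))

df = pd.DataFrame(rows)
# The coefficient on treated:post is the post-death change in treated
# fields relative to controls, with standard errors clustered by field.
fit = smf.ols("pubs ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["field"]})
print(fit.summary().tables[1])
```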
Update: Here is a non-technical summary on Vox.com.
| Peter Klein |
The AOM’s Entrepreneurship Division listserv has been featuring an interesting discussion of the incentives facing junior (and senior) scholars for doing “high-risk” research. To be sure, most early-career scholars focus on making incremental contributions to well-established research programs; after securing tenure, the argument goes, they can be bolder and more experimental. The problem is that, in many academic fields, junior scholars have the greatest capacity for novelty and creativity (in mathematics, for example, you may be past your prime at 35). I’m not sure this is true in the social sciences, which may place too much emphasis on clever technique over mature reasoning. But certainly many academics worry that publish-or-perish pressures make it difficult for junior scholars to take chances, to the detriment of scientific progress.
I really liked Jeff McMullen‘s comments on the problem, reproduced here with permission:
Dean Shepherd and I wrote a paper several years ago that grappled with some of these issues, especially what “risky research” means to tenure-track researchers. Here’s the reference:
McMullen, J. S., & Shepherd, D. A. (2006). Encouraging Consensus‐Challenging Research in Universities. Journal of Management Studies, 43(8), 1643-1669.
I wanted to write that paper because I was starting off my career and wanted to do consensus-challenging research, but I also wanted to understand the consequences of employing such a career strategy. Much of what Dean and I discovered in that research has only intensified over the years as competitive pressures have made institutional incentives that much more uniform.
The challenge for me personally, however, is not the incentives and institutional pressures; instead, it is having the moral courage to conduct research that I believe is important and valuable even though I know the academy may not value it, at least not yet. Will I be able to meet the high productivity bar of my colleagues whose research or approach is more mainstream? Some of us are drawn to topics that are mainstream (count your blessings, you lucky dogs), but some of us just have to let our freak flags fly. What is the cost of doing research we care about, and do we have the courage to pay this price?
Like other innovations, consensus-challenging research is uncertain. Just as routine must be the norm for innovation to mean anything, incremental, consensus-building research has to be the norm for any notion of uncertain, consensus-challenging research to make sense. Sometimes uncertainty bearing pays off economically, but more often it does not. Therefore, uncertain payoffs are likely to be motivated by incentives that are not economic, e.g., intrinsic motivation such as intellectual curiosity or the feeling that we have said something original, if that’s even possible. Perhaps this is how it should be.
So, the real question for me is, and has been through much of my career: how much is it worth to me, in terms of institutional status, job security, promotion, or raises, to forgo incremental publications and the accolades that come with them in order to write papers I care about? What is the optimal blend that lets me stay employed yet truly care deeply about what I write? Can I live with the socio-emotional costs of not being as productive as my colleagues?
For the most part, I have been blessed to be surrounded by colleagues who have valued me and what I do, but I also sought to work for institutions and with colleagues who I believed valued what I valued or at least had that capacity.
Can the system be better? Absolutely, it could be more forgiving. We could lower the institutional costs of innovative research. But, the system only has as much power as you and I choose to give it over our hearts and minds. Great leaders throughout history ranging from Jesus to Gandhi to King to Mandela have confronted a similar choice between compliance and civil disobedience and have had the moral courage to choose civil disobedience despite consequences that dwarf what you and I face. Changing the system starts first with having the moral courage to make peace with the worst possible outcome and yet still having the conviction to advance what we believe in.
So, let us ask what we might change “out there” to make science more inclusive, but let us not forget to ask what we need to change in ourselves. As with the entrepreneurs we study, meaningful work has a price, and it may only be meaningful because it does.
| Peter Klein |
We’ve written before on the institutions of scientific research (1, 2, 3). Like other human activities, scientific research involves expenditures of scarce resources, has benefits and costs that can be evaluated on the margin, and is affected by the preferences, beliefs, and incentives of scientific personnel. This sounds trite, but the view persists, especially among mainstream journalists, that science is fundamentally different: that scientists are disinterested truth-seekers immune from institutional and organizational constraints. This is the default assumption about scientists working within the general consensus of their discipline. By contrast, critics of the consensus position, whether inside or outside the core discipline, are presumed to be motivated by ideology or private interest.
You don’t need to be Thomas Kuhn, Imre Lakatos, or any modern historian or philosopher of science to find this asymmetry puzzling. But it is the usual assumption in particular areas, most notably climate science. A good example is this recent New York Times piece by Justin Gillis, “Short Answers to Hard Questions About Climate Change.” In response to the question, “Why do people question climate change?” Gillis gives us ideology and private interests.
Most of the attacks on climate science are coming from libertarians and other political conservatives who do not like the policies that have been proposed to fight global warming. Instead of negotiating over those policies and trying to make them more subject to free-market principles, they have taken the approach of blocking them by trying to undermine the science.
This ideological position has been propped up by money from fossil-fuel interests, which have paid to create organizations, fund conferences and the like. The scientific arguments made by these groups usually involve cherry-picking data, such as focusing on short-term blips in the temperature record or in sea ice, while ignoring the long-term trends.
Ignore the saucy rhetoric (critics of the consensus view don’t just question the theory or evidence, they “attack climate science”), and note that for Gillis, opposition to the mainstream view is a puzzle to be explained, and the most likely candidates are ideology and special interests. Honest disagreement is ruled out (though earlier in the piece he acknowledges the vast uncertainties involved in climate research). Why so many scientists, private and public organizations, firms, etc. support the mainstream position is not, in Gillis’s view, worth exploring. It’s Because Science. The fact that billions of dollars are flowing into climate research — a flow that would slow to a trickle if policymakers believed that man-made carbon emissions are not contributing to global warming — apparently has no effect on scientific practice. The fact that many climate-change proponents are, in general, ideologically predisposed to policies that impose greater government control over markets, that reduce industrial activity, and that favor particular technologies and products over others is, again, irrelevant.
Of course, I’m not claiming that climate scientists in or outside the mainstream consensus are fanatics or money-grubbers. I’m saying you can’t have it both ways. If ideology and private interests are relevant on one side of a debate, they’re relevant on the other side as well. Perhaps the ideology and private interests of New York Times writers blind them to this simple point.
| Peter Klein |
Barry Weingast remembers Doug North at EH.Net (also at the SIOE blog):
His first book, The Economic Growth of the United States, 1790-1860 (1960), helped foster the revolution that came to be known as the “new economic history,” the application of frontier economics to the study of problems of the past. He and Bob Fogel were awarded the Nobel Prize in Economics (1993) largely for their leadership in this new research program.
But Doug understood that the neoclassical economics on which he was raised was inadequate to address the problems he sought to answer, namely, why are a few countries rich while most remain poor, some in dire poverty? Much of his best work addressed this question.
Read the whole thing here.
| Peter Klein |
We’ve been somewhat skeptical of randomized controlled trials — not the technique itself, but the way it is over-hyped by its proponents — so you may enjoy Angus Deaton’s critique of RCTs in development economics. I learned of Deaton’s arguments from this excellent piece by Chris Blattman in Foreign Policy. Here is the key paper, Deaton’s 2008 Keynes Lecture at the British Academy.
Instruments of Development: Randomization in the Tropics, and the Search for the Elusive Keys to Economic Development
There is currently much debate about the effectiveness of foreign aid and about what kind of projects can engender economic development. There is skepticism about the ability of econometric analysis to resolve these issues, or of development agencies to learn from their own experience. In response, there is movement in development economics towards the use of randomized controlled trials (RCTs) to accumulate credible knowledge of what works, without over-reliance on questionable theory or statistical methods. When RCTs are not possible, this movement advocates quasi-randomization through instrumental variable (IV) techniques or natural experiments. I argue that many of these applications are unlikely to recover quantities that are useful for policy or understanding: two key issues are the misunderstanding of exogeneity, and the handling of heterogeneity. I illustrate from the literature on aid and growth. Actual randomization faces similar problems as quasi-randomization, notwithstanding rhetoric to the contrary. I argue that experiments have no special ability to produce more credible knowledge than other methods, and that actual experiments are frequently subject to practical problems that undermine any claims to statistical or epistemic superiority. I illustrate using prominent experiments in development. As with IV methods, RCT-based evaluation of projects is unlikely to lead to scientific progress in the understanding of economic development. I welcome recent trends in development experimentation away from the evaluation of projects and towards the evaluation of theoretical mechanisms.
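To make the heterogeneity point concrete, here is a small simulation (mine, not Deaton’s) in which a randomized trial cleanly recovers the average treatment effect even though the treatment helps one subgroup and harms the other. The subgroup sizes and effect sizes are assumed purely for illustration.

```python
# A small simulation (mine, not Deaton's) of his heterogeneity point: a
# randomized trial recovers the average treatment effect even when the
# treatment helps one subgroup and harms another. All numbers are assumed.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
group_a = rng.random(n) < 0.5              # two equal-sized subgroups
effect = np.where(group_a, 2.0, -1.0)      # +2 for group A, -1 for group B

treat = rng.random(n) < 0.5                # random assignment
outcome = 10 + effect * treat + rng.normal(0, 1, n)

ate_hat = outcome[treat].mean() - outcome[~treat].mean()
print(f"Estimated average effect: {ate_hat:.2f} (true average: 0.50)")

# The average masks sharply different subgroup responses, which is the
# kind of policy-relevant variation the average alone cannot reveal.
for name, mask in [("Group A", group_a), ("Group B", ~group_a)]:
    sub = outcome[mask & treat].mean() - outcome[mask & ~treat].mean()
    print(f"{name} effect: {sub:.2f}")
```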
Blattman says Deaton has a new paper that presents a more nuanced critique, but it is apparently not online. I’ll share more when I have it.
| Peter Klein |
Justin Fox reports on a recent high-powered behavioral economics conference featuring Raj Chetty, David Laibson, Antoinette Schoar, Maya Shankar, and other important contributors to this growing research stream. But he also refers to the “Summers critique”: the idea that key findings in behavioral economics research sound like recycled wisdom from business practitioners.
Summers [in 2012] told a story about a college acquaintance who as a cruel prank signed up another classmate for 60 different subscriptions of the Book-of-the-Month-Club ilk. The way these clubs worked is that once you signed up, you got a book in the mail every month and were charged for it unless you (a) sent the book back within a certain period of time or (b) went through the hassle of extricating yourself from the club altogether. Customers had to opt out in order to not keep buying books, so they bought more books than they otherwise would have. Book marketers, Summers said, had figured out the power of defaults long before economists had.
More generally, Fox asks, “Have behavioral economists really discovered anything new, or have they simply replaced some wrong-headed notions of post-World War II economics with insights that people in business have understood for decades and maybe even centuries?”
I took exactly the Summers line in a 2010 post, observing that behavioral economics “often seems to restate common, obvious, well-known ideas as if they are really novel insights (e.g., that preferences aren’t stable and predictable over time). More novel propositions are questionable at best.” I used a Dan Ariely column on compensation policy as an example:
He claims as a unique insight of behavioral economics that when people are evaluated according to quantitative measures of performance, they tend to focus on the measures, not the underlying behavior being measured. Well, duh. This is pretty much a staple of introductory lectures on agency theory (and features prominently in Steve Kerr’s classic 1975 article). Ariely goes on to suggest that CEOs should be rewarded not on the basis of a single measure of performance, but multiple measures. Double-duh. Holmström (1979) called this the “informativeness principle,” and it’s in all the standard textbooks on contract design and compensation structure (e.g., Milgrom and Roberts; Brickley et al.). (Of course, agency theory also recognizes that gathering information is costly, and that additional metrics are valuable, on the margin, only if the benefits exceed the costs, a point unmentioned by Ariely.)
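For the marginal-cost point, here is a back-of-the-envelope sketch, not Holmström’s model itself: combining two independent noisy measures of performance with inverse-variance weights lowers the noise in the composite measure, and the second measure is worth collecting only if the value of that variance reduction exceeds its measurement cost. All numbers below are assumed.

```python
# A back-of-the-envelope sketch of the informativeness principle, not
# Holmström's 1979 model itself. Two independent noisy signals of the same
# performance, combined with inverse-variance weights, give a less noisy
# composite measure. All variances and cost figures below are assumed.
var1, var2 = 4.0, 9.0                      # noise variance of each signal

w1 = (1 / var1) / (1 / var1 + 1 / var2)    # precision weighting
w2 = 1 - w1
combined_var = w1**2 * var1 + w2**2 * var2  # equals 1/(1/var1 + 1/var2)

print(f"Signal 1 alone: variance {var1:.2f}")
print(f"Optimal two-signal composite: variance {combined_var:.2f}")

# The marginal logic the column skips: the second metric is worth
# collecting only if the value of the noise reduction exceeds its cost.
value_per_unit_variance = 1.0              # assumed shadow price of noise
measurement_cost = 2.0                     # assumed cost of second metric
benefit = value_per_unit_variance * (var1 - combined_var)
verdict = "worth adding" if benefit > measurement_cost else "not worth adding"
print(f"Benefit {benefit:.2f} vs. cost {measurement_cost:.2f}: {verdict}")
```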
Maybe Larry and I should hang out.