Archive for November, 2008
| Peter Klein |
Remember, it’s not how much you pay, but how. Today’s WSJ profile of Robert Rubin provides some interesting numbers. Citigroup losses over the last year: $20 billion. US government bailout money going to Citigroup in the last month: $45 billion. Rubin’s compensation since becoming senior counselor and a director at Citigroup in 1999: $115 million. Naturally, Rubin says Citi’s near bankruptcy has nothing to do with his leadership. Critics say he encouraged the firm to increase its risk taking in 2004 and 2005. Ah well, another former Golden Boy brought down to earth. Thank goodness something positive is coming out of this mess.
Consider this today’s friendly reminder that corporate welfare is a bipartisan scam.
Update: See also Larry Ribstein.
| Nicolai Foss |
We have blogged extensively on “micro-foundations” here on O&M (particularly in those Golden Days of O&M when the present blogger was more active). The micro-foundations theme seems to be gaining a lot of ground in management recently. About two years ago I had a paper on the subject rejected by a leading management journal because the reviewers and the editor argued that the micro-foundations theme was essentially non-controversial and scholars handled it in a pragmatic manner — i.e., there was no need to raise it as an issue. That same editor, I hear, is now actively talking about the pressing need for micro-foundations in management ;-) This reflects an increasingly widespread discourse on the subject. At the recent Strategic Management Society Conference in Köln micro-foundations were explicitly discussed in several of the PDWs, in David Teece’s keynote speech, in the keynote panel that I participated in, and in lots of paper sessions.
The work Teppo Felin and I have done on the micro-foundations issue (e.g., here) has been particularly taken up with the lack of clear micro-foundations in the dominant capabilities view of the firm and strategy. Back in 2005 Teppo and I organized a two-day conference in Copenhagen on the subject. Koen Heimeriks, then an Assistant Professor at Copenhagen Business School, is organizing something like a follow-up conference at the Rotterdam School of Management, where he is now an Assistant Professor. Keynote speakers are Sid Winter, Maurizio Zollo, and yours truly. Here is the homepage for Koen’s conference. Koen is a very good and meticulous conference organizer, and the subject is inherently interesting, so please submit a paper!
| Peter Klein |
- Jennifer Arlen and Eric Talley, “Experimental Law and Economics”
This chapter provides a framework for assessing the contributions of experiments in Law and Economics. We identify criteria for determining the validity of an experiment and find that these criteria depend upon both the purpose of the experiment and the theory of behavior implicated by the experiment. While all experiments must satisfy the standard experimental desiderata of control, falsifiability of theory, internal consistency, external consistency and replicability, the question of whether an experiment also must be “contextually attentive” — in the sense of matching the real world choice being studied — depends on the underlying theory of decision-making being tested or implicated by the experiment.
- Matthew J. Holian, “Optimal Decentralization in Corporations and Federations”
Oates’ Theorem and the M-form Hypothesis are both organizational theories of decentralization, though they deal with different types of organizations. This brief note describes how the two theories complement one another, through both verbal description and mathematical models. The result is a simple but comprehensive account of the delegation problem.
- Abhijit V. Banerjee, Esther Duflo, “The Experimental Approach to Development Economics”
Randomized experiments have become a popular tool in development economics research, and have been the subject of a number of criticisms. This paper reviews the recent literature, and discusses the strengths and limitations of this approach in theory and in practice. We argue that the main virtue of randomized experiments is that, due to the close collaboration between researchers and implementers, they allow the estimation of parameters that it would not otherwise be possible to evaluate. We discuss the concerns that have been raised regarding experiments, and generally conclude that while they are real, they are often not specific to experiments. We conclude by discussing the relationship between theory and experiments.
| Peter Klein |
At a recent workshop the subject of econometric identification came up. Identification is of course the major issue of our day among mainstream empirical economists. Some have described the dissertation process as the “search for a good instrument.” Instrumental-variables estimators have their critics, of course, but these critics are in the minority.
One of the workshop participants, a regular attendee at NBER events, summarized the consensus view among the elites of the profession with the following diagram:
A research problem can be important, and it can be well identified. The ideal problem is one in quadrant B, both important and identified. However, a paper on a problem in quadrant C is much more likely to be published in a top journal than one on a problem in quadrant A.
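The premium on the “search for a good instrument” can be made concrete with a minimal simulation (all numbers hypothetical, not from the post): when a regressor is correlated with the error term, OLS is biased, while a valid instrument, one correlated with the regressor but not with the error, identifies the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: y = 2.0 * x + u,
# but x is endogenous because it is partly driven by u.
u = rng.normal(size=n)
z = rng.normal(size=n)                      # instrument: moves x, independent of u
x = 0.8 * z + 0.5 * u + rng.normal(size=n)  # endogenous regressor
y = 2.0 * x + u

# OLS slope is biased upward, since cov(x, u) > 0.
beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Simple IV (Wald) estimator: cov(z, y) / cov(z, x).
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"true beta: 2.00, OLS: {beta_ols:.2f}, IV: {beta_iv:.2f}")
```

With these made-up parameters the OLS estimate lands near 2.26 rather than 2.0, while the IV estimate recovers the true slope. Whether such a clean instrument exists for the question one actually cares about is, of course, exactly the quadrant problem above.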
What does this say about the economics profession?
| Peter Klein |
The line of questioning he endures is hilariously naive and idiotic. We think we have a Keynesian problem now; it’s clear that these people really believe that policy makers can manipulate the economy like a machine, trading off unemployment for inflation and back again, with no trouble.
John Cochran suggests another Hayek appearance from 1975, this one a lecture at the University of Colorado (provided by Fred Glahe). Here are a few more from the Mises.org audio archive. And see also the new book.
| Dick Langlois |
First let me apologize for being out of circulation for so long. I’ve been inundated with teaching and committee work this semester, but I hope to get back in the swing of things as the year winds down.
The New York Times had an interesting article the other day on a company called Super Micro Computer, a public family-run company in San Jose that puts together leading-edge servers and other hardware for clients that include eBay and Yahoo. The company sells high performance and speed, both the speed of the computer and the speed of the company in designing and delivering its products.
Whereas rivals long ago sent key design work to Asia to take advantage of cheaper, plentiful labor, Super Micro still relies on hundreds of expensive engineers working at its San Jose headquarters. These workers are charged with grabbing the latest and greatest components from suppliers and coming up with new designs months ahead of lumbering heavyweights like Hewlett-Packard and Dell.
Clayton Christensen and his coauthors have argued that a premium on high performance calls for vertical integration and systemic integration in order to fine-tune and customize systems, whereas a premium on cost reduction leads to modularity, standardization, and vertical disintegration. The Super Micro case seems to call this conclusion into question. On the one hand, the company emphasizes design and produces customized units. On the other hand, however, the company is really just a systems integrator — not a vertically integrated company — whose advantage lies in discovering and making use of the innovation of others. In Carliss Baldwin’s phrase, the company “leverages modularity” along the performance margin in much the same way that Dell does (or at least once did) along the cost margin. My conjecture is that, the more inherently modular (whatever that means) the product is, the more systemic integration can be squeezed into a single independent stage of production (systems integration) and the less necessary is genuine vertical integration — even when performance is what matters.
| Peter Klein |
Obama has named Christy Romer, one of my old professors, to head the Council of Economic Advisers. She’s smart, organized, a great communicator; I expect her to be highly effective. She is a moderate Keynesian, of the New Keynesian variety, best known for her revisionist work challenging the postwar Keynesian consensus view that activist monetary and fiscal policy has lessened the severity of the business cycle compared to the bad old laissez-faire days. See, for example, “Is the Stabilization of the Postwar Economy a Figment of the Data?” AER, June 1986, and “Remeasuring Business Cycles,” JEH, September 1994. A recent Journal of Economic Perspectives piece and her entry on business cycles in the Concise Encyclopedia of Economics summarize this work. Here is more. (Of course, doubts about the effectiveness of Keynesian stabilization policy have not dampened most macroeconomists’ enthusiasm for, well, Keynesian stabilization policy.)
I met Christy in my first year of graduate school when she co-taught, with Barry Eichengreen, my course in US economic history. I subsequently served for two semesters as her head TA in the large economics principles course (600 students, 16 TAs, one head TA, one professor — quite an operation). Berkeley had in those days a system in which the dissertation adviser (in my case, Oliver Williamson) does not serve on the dissertation proposal committee, and Christy kindly chaired the proposal committee for me, even though the topic (conglomerate diversification) was not in her general area. She is a great teacher and a great manager, careful, patient, and fair. Naturally my top choice for CEA chair would have been someone with views just like, um, mine, but of the realistic candidates Christy is an excellent choice.