Posts filed under ‘Methods/Methodology/Theory of Science’
| Nicolai Foss |
As readers of this blog will know, probably to a nauseating extent, microfoundations have been central in much (macro) management theory over the last decade. Several articles, special issues, and conferences have been dedicated to microfoundations, most recently a Strategic Management Society Special Conference at the Copenhagen Business School. Some, uhm, highly spirited exchanges have taken place (e.g., AoM 2013), with proponents of those foundations being accused of economics imperialism and whatnot, and critics of microfoundations receiving push-back for endorsing defunct Durkheimian collectivism (an obviously justified criticism). Here is a recent civilized exchange on the subject between Professor Rodolphe Durand, HEC Paris, and myself. Complete with heavy Euro accents of different origins.
| Peter Klein |
Carl Menger’s methodology has been described as essentialist. Rather than building artificial models that mimic some attributes or outcomes of an economic process, Menger sought to understand the essential characteristics of phenomena like value, price, and exchange. As Menger explained to his contemporary Léon Walras, Menger and his colleagues “do not simply study quantitative relationships but also the nature [or essence] of economic phenomena.” Abstract models that miss these essential features — even if useful for prediction — do not give the insight needed to understand how economies work, what entrepreneurs do, how government intervention affects outcomes, and so on.
I was reminded of the contrast between Menger and Walras when reading about Henri Matisse and Pablo Picasso, the great twentieth-century pioneers of abstract art. Both painters sought to go beyond traditional, representational forms of visual art, but they tackled the problem in different ways. As Jack D. Flam writes in his 2003 book Matisse and Picasso: The Story of Their Rivalry and Friendship:
Picasso characterized the arbitrariness of representation in his Cubist paintings as resulting from his desire for “a greater plasticity.” Rendering an object as a square or a cube, he said, was not a negation, for “reality was no longer in the object. Reality was in the painting. When the Cubist painter said to himself, ‘I will paint a bowl,’ he set out to do it with the full realization that a bowl in a painting has nothing to do with a bowl in real life.” Matisse, too, was making a distinction between real things and painted things, and fully understood that the two could not be confused. But for Matisse, a painting should evoke the essence of the things it was representing, rather than substitute a completely new and different reality for them. In contrast to Picasso’s monochromatic, geometric, and difficult-to-read pictures, Matisse’s paintings were brightly colored, based on organic rhythms, and clearly legible. For all their expressive distortions, they did not have to be “read” in terms of some special language or code.
Menger’s essentialism is concisely described in Larry White’s monograph The Methodology of the Austrian School Economists and treated more fully in Menger’s 1883 book Investigations Into the Method of the Social Sciences. For more on economics and art, see Paul Cantor’s insightful lecture series, “Commerce and Culture” (here and here).
[An earlier version of this post appeared at Circle Bastiat.]
| Nicolai Foss |
Here is a new paper by the major Stanford finance scholar Paul Pfleiderer on what he calls “chameleon models” and their misuse in finance and economics. Lots of catchy concepts, e.g., “theoretical cherry picking” and “bookshelf models,” and a fine critical discussion of Friedmanite instrumentalism. The essence of the paper is this:
Chameleons arise and are often nurtured by the following dynamic. First a bookshelf model is constructed that involves terms and elements that seem to have some relation to the real world and assumptions that are not so unrealistic that they would be dismissed out of hand. The intention of the author, let’s call him or her “Q,” in developing the model may be to say something about the real world, or the goal may simply be to explore the implications of making a certain set of assumptions. Once Q’s model and results become known, references are made to it, with statements such as “Q shows that X.” This should be taken as a shorthand way of saying “Q shows that under a certain set of assumptions it follows (deductively) that X,” but some people start taking X as a plausible statement about the real world. If someone skeptical about X challenges the assumptions made by Q, some will say that a model shouldn’t be judged by the realism of its assumptions, since all models have assumptions that are unrealistic. Another rejoinder made by those supporting X as something plausibly applying to the real world might be that the truth or falsity of X is an empirical matter and until the appropriate empirical tests or analyses have been conducted and have rejected X, X must be taken seriously. In other words, X is innocent until proven guilty. Now these statements may not be made in quite the stark manner that I have made them here, but the underlying notion still prevails that because there is a model for X, because questioning the assumptions behind X is not appropriate, and because the testable implications of the model supporting X have not been empirically rejected, we must take X seriously. Q’s model (with X as a result) becomes a chameleon that avoids the real world filters.
| Nicolai Foss |
After about a decade of methodological discussion (involving some preaching on both sides of the debate), the micro-foundations project in macro-management research is now beginning to take off in the “doing” dimension. Specifically, scholars are building micro-foundational theory and wrestling with the empirical challenges of the micro-foundations project. The theoretical and empirical challenges largely derive from the inherently multi-level nature of the project. Theory-building cannot simply consist of moving, say, individual-level organizational behavior insights to the organizational level, but must be genuinely multi-level, which raises tricky issues of aggregation and downward causation. Data sampling will necessarily have to take place at a minimum of two levels. This is complicated and usually expensive. Access to good micro-level data is particularly troublesome. (One of the advantages of living in a socialist country like Denmark is that the Big Nanny literally looks after her children: we have register data that is incredibly detailed regarding human capital dimensions — i.e., not just gender, age, education, etc., but also complete job history, school and university grades, criminal record, household income, history of medication, etc. — and this for each and every employee in the Danish economy.)
One of the areas in which the micro-foundations project is being realized in the theoretical and empirical dimensions is what is increasingly often referred to as “strategic human capital.” This is an emerging field (it has its own interest group at the Strategic Management Society) that overlaps considerably with “strategic human resource management,” and which links strategic management, traditional SHRM and HR, and human capital theory. The February special issue of the Journal of Management, expertly edited by Patrick Wright, Russ Coff, and Thomas Moliterno, three key drivers in the SHRM/SHC field, contains ten fine papers on SHC. The introductory essay by the editors nicely lays out the main challenges and issues. Many of the challenges are quite “low-practical” — e.g., people trained in strategy focus a lot on endogeneity, whereas HR and OB people focus a lot on construct validity issues that strategy folks care less about. Yet such differences may be quite decisive — as the editors learned while handling the review process! The editors also deal with key issues, such as what the important dimensions of human capital are for the purposes of the SHC field, how human capital can be characterized at different analytical levels, and what the antecedents and consequences of human capital are. I look forward to sinking my teeth into the research articles in the coming week.
| Peter Klein |
Researchers: You rate products and sellers on Amazon and eBay, you describe your travel experiences on TripAdvisor, and your students judge you on ratemyprofessors.com. I don’t know of any systematic evidence on this, but consultants and journalists seem to think that companies are better off letting customers rate (and rant) online, even if this makes it more difficult to manage the brand.
A new venture called SciRev is encouraging researchers to rate their experiences with particular journals: how quickly papers are turned around, how good the referee reports are, and how responsive the editorial office is. It’s not quite open-source peer review, because the specific papers and authors are anonymous (as far as I can tell), but it represents an interesting experiment in opening up the publishing process, at least in terms of author feedback on journals.
SciRev is a website made by researchers for researchers. The information provided by you and your fellow authors is freely available to the entire research community. In this way we aim to make the scientific review process more transparent. Efficient journals get credits for their efforts to improve their review process and the way they handle manuscripts. Less efficient journals are stimulated to put energy in organizing things better. Researchers can search for a journal with a speedy review procedure and have their papers published earlier. Editors get the opportunity to compare their journal’s performance with that of others and to provide information about their journal at our website.
SciRev aims to help science by making the peer review process more efficient. This process is one of the weakest links in the process of scientific knowledge production. Valuable papers may spend several months to over a year at reviewers’ desks and editorial offices before a decision is taken. This is a serious time loss in a process that in other respects has become much more efficient in recent decades. SciRev helps speed up this process by making it more transparent. Researchers get the possibility to share their review experiences with their colleagues, who can therefore make a better-informed choice of a journal to submit their work to. Journals that manage to set up a more efficient review process and which handle manuscripts better are rewarded for this and may attract more and better manuscripts.
There are only a few reviews at this point, so not much information to consume, but I like the concept. And I may be submitting some reviews of my own… (Thanks to Bronwyn Hall for the tip.)
| Peter Klein |
We’ve previously discussed attempts to blame the accounting scandals of the early 2000s on the teaching of transaction cost economics and agency theory. By describing the hazards of opportunistic behavior and shirking, professors were allegedly encouraging students to be opportunistic and to shirk. Then we were told that business schools teach “a particular brand of free-market ideology” — the view that “the market always ‘gets prices right’” and that “[a]n individual’s worth can be reduced to one’s worth in the market” — and that this ideology was partly responsible for the financial crisis. (My initial reaction: Where do I sign up for these courses?!)
The Guardian now reports on a movement in the UK to address “the crisis in economics teaching, which critics say has remained largely unchanged since the 2008 financial crash despite the failure of many in the profession to spot the looming credit crunch and worst recession for 100 years.” If you think this refers to a movement to discredit orthodox Keynesianism, which dominates monetary theory and practice in all countries, and its view that discretionary fiscal and (especially) monetary policy are needed to steer the economy on a smooth course, with particular attention to asset markets where prices must be rising at all times, you’d be wrong. No, the reformers are calling for “economics courses to embrace the teachings of Marx and Keynes to undermine the dominance of neoclassical free-market theories.” To their credit, the reformers appear also to want more attention to economic history and the history of economic thought, which is all to the good. But the reformers’ basic premise seems to be that mainstream economics is too friendly toward the free market, and that this has left students unprepared to understand the “post-2008” world.
To a non-Keynesian and non-Marxian like me, these arguments seem to come from a bizarro world where the sky is green, water runs uphill, and Janet Yellen is seven feet tall. It’s true that most economists reject economy-wide central planning, but the vast majority endorse some version of Keynesian economic policy complete with activist fiscal and monetary interventions, substantial regulation of markets (especially financial markets), fiat money under the control of a central bank, social policy to encourage home ownership, and all the rest. We’ve pointed many times on this blog to research on the social and political views of economists, who lean “left” by a ratio of about 2.5 to 1 — yes, nothing like the sociologists’ zillion to 1, but hardly evidence for a rigid, free-market orthodoxy. I note that the reformers described in the Guardian piece never, ever offer any kind of empirical evidence on the views of economists, the content of economics courses, or the influence of economics courses on economic policy. They simply assert that they don’t like this or that economic theory or pedagogy, which somehow contributed to this or that economic problem. They seem blissfully unaware of the possibility that their own policy preferences might actually be favored in the textbooks and classrooms, and might have just a teeny bit to do with bad economic policies.
I’m reminded of Sheldon Richman’s pithy summary: “No matter how much the government controls the economic system, any problem will be blamed on whatever small zone of freedom that remains.”
| Dick Langlois |
Surprisingly, the following passage is not from O’Driscoll and Rizzo (1985). It is the abstract of a new paper by Brian Arthur called “Complexity Economics: A Different Framework for Economic Thought.”
This paper provides a logical framework for complexity economics. Complexity economics builds from the proposition that the economy is not necessarily in equilibrium: economic agents (firms, consumers, investors) constantly change their actions and strategies in response to the outcome they mutually create. This further changes the outcome, which requires them to adjust afresh. Agents thus live in a world where their beliefs and strategies are constantly being “tested” for survival within an outcome or “ecology” these beliefs and strategies together create. Economics has largely avoided this nonequilibrium view in the past, but if we allow it, we see patterns or phenomena not visible to equilibrium analysis. These emerge probabilistically, last for some time and dissipate, and they correspond to complex structures in other fields. We also see the economy not as something given and existing but forming from a constantly developing set of technological innovations, institutions, and arrangements that draw forth further innovations, institutions and arrangements. Complexity economics sees the economy as in motion, perpetually “computing” itself — perpetually constructing itself anew. Where equilibrium economics emphasizes order, determinacy, deduction, and stasis, complexity economics emphasizes contingency, indeterminacy, sense-making, and openness to change. In this framework time, in the sense of real historical time, becomes important, and a solution is no longer necessarily a set of mathematical conditions but a pattern, a set of emergent phenomena, a set of changes that may induce further changes, a set of existing entities creating novel entities. Equilibrium economics is a special case of nonequilibrium and hence complexity economics, therefore complexity economics is economics done in a more general way. It shows us an economy perpetually inventing itself, creating novel structures and possibilities for exploitation, and perpetually open to response.
Arthur does acknowledge that people like Marshall, Veblen, Schumpeter, Hayek, and Shackle have had much to say about exactly these issues. “But the thinking was largely history-specific, particular, case-based, and intuitive—in a word, literary—and therefore held to be beyond the reach of generalizable reasoning, so in time what had come to be called political economy became pushed to the side, acknowledged as practical and useful but not always respected.” So what Arthur has in mind is a mathematical theory, no doubt a form of what Roger Koppl – who is cited obscurely in a footnote – calls BRACE Economics.