Posts filed under ‘Methods/Methodology/Theory of Science’

Essentialism in Economics and Art

| Peter Klein |

Carl Menger’s methodology has been described as essentialist. Rather than building artificial models that mimic some attributes or outcomes of an economic process, Menger sought to understand the essential characteristics of phenomena like value, price, and exchange. As Menger explained to his contemporary Léon Walras, Menger and his colleagues “do not simply study quantitative relationships but also the nature [or essence] of economic phenomena.” Abstract models that miss these essential features — even if useful for prediction — do not give the insight needed to understand how economies work, what entrepreneurs do, how government intervention affects outcomes, and so on.

I was reminded of the contrast between Menger and Walras when reading about Henri Matisse and Pablo Picasso, the great twentieth-century pioneers of abstract art. Both painters sought to go beyond traditional, representational forms of visual art, but they tackled the problem in different ways. As Jack D. Flam writes in his 2003 book Matisse and Picasso: The Story of Their Rivalry and Friendship:

Picasso characterized the arbitrariness of representation in his Cubist paintings as resulting from his desire for “a greater plasticity.” Rendering an object as a square or a cube, he said, was not a negation, for “reality was no longer in the object. Reality was in the painting. When the Cubist painter said to himself, ‘I will paint a bowl,’ he set out to do it with the full realization that a bowl in a painting has nothing to do with a bowl in real life.” Matisse, too, was making a distinction between real things and painted things, and fully understood that the two could not be confused. But for Matisse, a painting should evoke the essence of the things it was representing, rather than substitute a completely new and different reality for them. In contrast to Picasso’s monochromatic, geometric, and difficult-to-read pictures, Matisse’s paintings were brightly colored, based on organic rhythms, and clearly legible. For all their expressive distortions, they did not have to be “read” in terms of some special language or code.

Menger’s essentialism is concisely described in Larry White’s monograph The Methodology of the Austrian School Economists and treated more fully in Menger’s 1883 book Investigations Into the Method of the Social Sciences. For more on economics and art, see Paul Cantor’s insightful lecture series, “Commerce and Culture” (here and here).

[An earlier version of this post appeared at Circle Bastiat.]

30 April 2014 at 8:50 am 5 comments

Chameleon Models and Their Dangers

| Nicolai Foss |

Here is a new paper by the prominent Stanford finance scholar Paul Pfleiderer on what he calls “chameleon models” and their misuse in finance and economics. Lots of catchy concepts, e.g., “theoretical cherry picking” and “bookshelf models,” and a fine critical discussion of Friedmanite instrumentalism. The essence of the paper is this:

Chameleons arise and are often nurtured by the following dynamic. First a bookshelf model is constructed that involves terms and elements that seem to have some relation to the real world and assumptions that are not so unrealistic that they would be dismissed out of hand. The intention of the author, let’s call him or her “Q,” in developing the model may be to say something about the real world or the goal may simply be to explore the implications of making a certain set of assumptions. Once Q’s model and results become known, references are made to it, with statements such as “Q shows that X.” This should be taken as a shorthand way of saying “Q shows that under a certain set of assumptions it follows (deductively) that X,” but some people start taking X as a plausible statement about the real world. If someone skeptical about X challenges the assumptions made by Q, some will say that a model shouldn’t be judged by the realism of its assumptions, since all models have assumptions that are unrealistic. Another rejoinder made by those supporting X as something plausibly applying to the real world might be that the truth or falsity of X is an empirical matter and until the appropriate empirical tests or analyses have been conducted and have rejected X, X must be taken seriously. In other words, X is innocent until proven guilty. Now these statements may not be made in quite the stark manner that I have made them here, but the underlying notion still prevails that because there is a model for X, because questioning the assumptions behind X is not appropriate, and because the testable implications of the model supporting X have not been empirically rejected, we must take X seriously. Q’s model (with X as a result) becomes a chameleon that avoids the real world filters.

29 March 2014 at 5:26 am Leave a comment

Micro-foundations Happening: Strategic Human Capital

| Nicolai Foss |

After about a decade of methodological discussion (involving some preaching on both sides of the debate), the micro-foundations project in macro-management research is now beginning to take off in the “doing” dimension. Specifically, scholars are building micro-foundational theory and wrestling with the empirical challenges of the micro-foundations project. The theoretical and empirical challenges largely derive from the project’s inherently multi-level nature. Theory-building cannot consist simply of moving, say, individual-level organizational behavior insights to the organizational level; it must be genuinely multi-level, which raises tricky issues of aggregation and downward causation. Data sampling will necessarily have to take place at at least two levels. This is complicated and usually expensive. Access to good micro-level data is particularly troublesome. (One of the advantages of living in a socialist country like Denmark is that the Big Nanny literally looks after her children: we have register data that is incredibly detailed on human capital dimensions, covering not just gender, age, and education but also complete job history, school and university grades, criminal record, household income, and history of medication, for each and every employee in the Danish economy.)
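To make the two-level point concrete, here is a minimal sketch (in Python, with made-up firm/employee data and hypothetical variable names, not actual register data) of estimating an individual-level effect while letting a random intercept absorb firm-level variation:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_emp = 50, 20
firm = np.repeat(np.arange(n_firms), n_emp)        # level 2: firm membership
firm_effect = rng.normal(0, 1, n_firms)[firm]      # unobserved firm-level variation
human_capital = rng.normal(0, 1, firm.size)        # level 1: individual predictor
performance = 2.0 + 0.5 * human_capital + firm_effect + rng.normal(0, 1, firm.size)

df = pd.DataFrame({"firm": firm, "human_capital": human_capital, "performance": performance})

# A random intercept for firm captures the organization-level component; the fixed
# effect of human_capital is the individual-level relationship of interest.
fit = smf.mixedlm("performance ~ human_capital", df, groups=df["firm"]).fit()
print(fit.summary())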

One of the areas in which the micro-foundations project is being realized in both the theoretical and the empirical dimensions is what is increasingly often referred to as “strategic human capital.” This is an emerging field (it has its own interest group at the Strategic Management Society) that overlaps considerably with “strategic human resource management,” and which links strategic management, traditional SHRM and HR, and human capital theory. The February special issue of the Journal of Management, expertly edited by Patrick Wright, Russ Coff and Thomas Moliterno, three key drivers in the SHRM/SHC field, contains ten fine papers on SHC. The introductory essay by the editors nicely lays out the main challenges and issues. Many of the challenges are quite practical — e.g., people trained in strategy focus a lot on endogeneity, while HR and OB people focus a lot on construct validity issues that strategy folks care less about. Yet such differences may be quite decisive, as the editors learned while handling the review process! The editors also address key issues, such as what the important dimensions of human capital are for the purposes of the SHC field, how human capital can be characterized at different analytical levels, and what the antecedents and consequences of human capital are. I look forward to sinking my teeth into the research articles in the coming week.

16 February 2014 at 5:19 am 1 comment

Rate My Journals

| Peter Klein |

Researchers: You rate products and sellers on Amazon and eBay, you describe your travel experiences on TripAdvisor, and your students judge you on ratemyprofessors.com. I don’t know of any systematic evidence on this, but consultants and journalists seem to think that companies are better off letting customers rate (and rant) online, even if this makes it more difficult to manage the brand.

A new venture called SciRev is encouraging researchers to rate their experiences with particular journals: how long papers take to be turned around, how good the referee reports are, how responsive the editorial office is. It’s not quite open-source peer review, because the specific papers and authors are anonymous (as far as I can tell), but it represents an interesting experiment in opening up the publishing process, at least in terms of author feedback on journals.

SciRev is a website made by researchers for researchers. The information provided by you and your fellow authors is freely available to the entire research community. In this way we aim to make the scientific review process more transparent. Efficient journals get credit for their efforts to improve their review process and the way they handle manuscripts. Less efficient journals are stimulated to put energy into organizing things better. Researchers can search for a journal with a speedy review procedure and have their papers published earlier. Editors get the opportunity to compare their journal’s performance with that of others and to provide information about their journal at our website.

SciRev aims to help science by making the peer review process more efficient. This process is one of the weakest links in the process of scientific knowledge production. Valuable papers may spend several months to over a year at reviewers’ desks and editorial offices before a decision is taken. This is a serious time loss in a process that in other respects has become much more efficient in recent decades. SciRev helps speed up this process by making it more transparent. Researchers get the possibility to share their review experiences with their colleagues, who therefore can make a better-informed choice of a journal to submit their work to. Journals that manage to set up a more efficient review process and which handle manuscripts better are rewarded for this and may attract more and better manuscripts.

There are only a few reviews at this point, so not much information to consume, but I like the concept. And I may be submitting some reviews of my own… (Thanks to Bronwyn Hall for the tip.)

16 December 2013 at 12:42 pm Leave a comment

Solution to the Economic Crisis? More Keynes and Marx

| Peter Klein |

We’ve previously discussed attempts to blame the accounting scandals of the early 2000s on the teaching of transaction cost economics and agency theory. By describing the hazards of opportunistic behavior and shirking, professors were allegedly encouraging students to be opportunistic and to shirk. Then we were told that business schools teach “a particular brand of free-market ideology” — the view that “the market always ‘gets prices right’” and that “[a]n individual’s worth can be reduced to one’s worth in the market” — and that this ideology was partly responsible for the financial crisis. (My initial reaction: Where do I sign up for these courses?!)

The Guardian now reports on a movement in the UK to address “the crisis in economics teaching, which critics say has remained largely unchanged since the 2008 financial crash despite the failure of many in the profession to spot the looming credit crunch and worst recession for 100 years.” If you think this refers to a movement to discredit orthodox Keynesianism, which dominates monetary theory and practice in all countries, and its view that discretionary fiscal and (especially) monetary policy are needed to steer the economy on a smooth course, with particular attention to asset markets where prices must be rising at all times, you’d be wrong. No, the reformers are calling for “economics courses to embrace the teachings of Marx and Keynes to undermine the dominance of neoclassical free-market theories.” To their credit, the reformers also appear to want more attention to economic history and the history of economic thought, which is all to the good. But the reformers’ basic premise seems to be that mainstream economics is too friendly toward the free market, and that this has left students unprepared to understand the “post-2008” world.

To a non-Keynesian and non-Marxian like me, these arguments seem to come from a bizarro world where the sky is green, water runs uphill, and Janet Yellen is seven feet tall. It’s true that most economists reject economy-wide central planning, but the vast majority endorse some version of Keynesian economic policy complete with activist fiscal and monetary interventions, substantial regulation of markets (especially financial markets), fiat money under the control of a central bank, social policy to encourage home ownership, and all the rest. We’ve pointed many times on this blog to research on the social and political views of economists, who lean “left” by a ratio of about 2.5 to 1 — yes, nothing like the sociologists’ zillion to 1, but hardly evidence for a rigid, free-market orthodoxy. I note that the reformers described in the Guardian piece never, ever offer any kind of empirical evidence on the views of economists, the content of economics courses, or the influence of economics courses on economic policy. They simply assert that they don’t like this or that economic theory or pedagogy, which somehow contributed to this or that economic problem. They seem blissfully unaware of the possibility that their own policy preferences might actually be favored in the textbooks and classrooms, and might have just a teeny bit to do with bad economic policies.

I’m reminded of Sheldon Richman’s pithy summary: “No matter how much the government controls the economic system, any problem will be blamed on whatever small zone of freedom that remains.”

11 November 2013 at 10:24 am 4 comments

Bulletin: Brian Arthur Has Just Invented Austrian Economics

| Dick Langlois |

Surprisingly, the following passage is not from O’Driscoll and Rizzo (1985). It is the abstract of a new paper by Brian Arthur called “Complexity Economics: A Different Framework for Economic Thought.”

This paper provides a logical framework for complexity economics. Complexity economics builds from the proposition that the economy is not necessarily in equilibrium: economic agents (firms, consumers, investors) constantly change their actions and strategies in response to the outcome they mutually create. This further changes the outcome, which requires them to adjust afresh. Agents thus live in a world where their beliefs and strategies are constantly being “tested” for survival within an outcome or “ecology” these beliefs and strategies together create. Economics has largely avoided this nonequilibrium view in the past, but if we allow it, we see patterns or phenomena not visible to equilibrium analysis. These emerge probabilistically, last for some time and dissipate, and they correspond to complex structures in other fields. We also see the economy not as something given and existing but forming from a constantly developing set of technological innovations, institutions, and arrangements that draw forth further innovations, institutions and arrangements. Complexity economics sees the economy as in motion, perpetually “computing” itself— perpetually constructing itself anew. Where equilibrium economics emphasizes order, determinacy, deduction, and stasis, complexity economics emphasizes contingency, indeterminacy, sense-making, and openness to change. In this framework time, in the sense of real historical time, becomes important, and a solution is no longer necessarily a set of mathematical conditions but a pattern, a set of emergent phenomena, a set of changes that may induce further changes, a set of existing entities creating novel entities. Equilibrium economics is a special case of nonequilibrium and hence complexity economics, therefore complexity economics is economics done in a more general way. It shows us an economy perpetually inventing itself, creating novel structures and possibilities for exploitation, and perpetually open to response.

Arthur does acknowledge that people like Marshall, Veblen, Schumpeter, Hayek, and Shackle have had much to say about exactly these issues. “But the thinking was largely history-specific, particular, case-based, and intuitive—in a word, literary—and therefore held to be beyond the reach of generalizable reasoning, so in time what had come to be called political economy became pushed to the side, acknowledged as practical and useful but not always respected.” So what Arthur has in mind is a mathematical theory, no doubt a form of what Roger Koppl – who is cited obscurely in a footnote – calls BRACE Economics.

9 October 2013 at 11:48 am 12 comments

From MOOC to MOOR

| Peter Klein |

Via John Hagel, a story on MOOR — Massively Open Online Research. A UC San Diego computer science and engineering professor is teaching a MOOC (massively open online course) that includes a research component. “All students who sign up for the course will be given an opportunity to work on specific research projects under the leadership of prominent bioinformatics scientists from different countries, who have agreed to interact and mentor their respective teams.” The idea of crowdsourcing research isn’t completely new, but this particular blend of MOO-ish teaching and research constitutes an interesting experiment (see also this). The MOO model is gaining some traction in the scientific publishing world as well.

3 October 2013 at 8:48 am 1 comment

Pindyck on Climate Science

| Peter Klein |

Further to my previous post on misplaced confidence, here is Robert Pindyck on one of the critical tools used by climate scientists.

Climate Change Policy: What Do the Models Tell Us?
Robert S. Pindyck
NBER Working Paper No. 19244, July 2013

Very little. A plethora of integrated assessment models (IAMs) have been constructed and used to estimate the social cost of carbon (SCC) and evaluate alternative abatement policies. These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g. the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models’ descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.
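As a back-of-the-envelope illustration of the discount-rate point (with a purely hypothetical damage figure, not taken from the paper), the present value today of $1 trillion of climate damage occurring a century from now swings by more than two orders of magnitude across commonly debated rates:

damage, years = 1_000_000_000_000, 100   # $1 trillion of damage, 100 years from now
for r in (0.01, 0.03, 0.05, 0.07):
    pv = damage / (1 + r) ** years
    print(f"discount rate {r:.0%}: present value = ${pv / 1e9:,.1f} billion")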

Thanks to Bob Murphy for the pointer.

28 August 2013 at 6:15 am Leave a comment

Climate Science and the Scientific Method

| Peter Klein |

This article on climate science skeptics is making the rounds, and drawing the expected denunciations in the usual quarters. It actually makes some reasonable, and quite mild, statements, namely that climate science, like astronomy or evolutionary biology, is a different kind of “science” than physics or chemistry or geology or other mundane sciences. In the former fields, the fundamental assumptions and key mechanisms are usually not falsifiable, the data are often fuzzier than usual, and there is frequently a lot of hand-waving to fill in gaps.

Many climate sceptics worry climate science cannot be dubbed scientific as it is not falsifiable (as in Popper’s demarcation criterion). They claim that while elements of climate science may be testable in the lab, the complexity of interactions and feedback loops, as well as the levels of uncertainty in climate models, are too high to be a useful basis for public policy. The relationship of observations to these models is also a worry for climate sceptics. In particular, the role of climate sensitivity.

As well as their use of models, the quality of observations themselves has been open to criticism, some of which have been attempts to clean up issues deriving from the messiness of data collection in the real world (eg the positioning of weather stations), while others have focused on perceived weaknesses in the proxy methods required to calculate historic temperature data such as cross-sections of polar ice sheets and fossilised tree rings.

Such claims are of variable quality, but what unites them is a conviction that data quality in various branches of climate science is below that required by “real science”.

What strikes me the most about these “big” sciences is the language and tone typically used to communicate the results to the public. Where scientists in mundane fields express their conclusions cautiously, emphasizing that results and conclusions are tentative and always subject to challenge and revision, climate scientists seem to view themselves as Brave Crusaders for Truth, striking down “Deniers” (who must be funded by the oil industry or some other evil group). They shout that we “know” this or that about climate change, what the planet will be like in 5,000 years, etc. You hardly ever hear other scientists talk like this, or act as if skeptics are necessarily prejudiced and irrational.

31 July 2013 at 8:34 am 28 comments

Two AoM PDWs of Interest

| Peter Klein |

O&Mers attending the AoM conference may find these Professional Development Workshops, sponsored by the Academy of Management Perspectives and based on recent AMP symposia, of particular interest:

The first PDW is on “Private Equity” and features presentations on the managerial, strategic, and public policy implications of private equity transactions. Presenters include Robert Hoskisson (Rice University), Nick Bacon (City University London), Mike Wright (Imperial College London), and Peter Klein (University of Missouri). The private equity session takes place Saturday, Aug 10, 2013 from 11:30AM – 12:30PM at WDW Dolphin Resort in Oceanic 5.

The second is on “Microfoundations of Management,” and features presentations from Nicolai Foss (Copenhagen Business School), Henrich Greve (INSEAD), Sidney Winter (Wharton), Jay Barney (Utah), Teppo Felin (Oxford), Andrew Van de Ven (Minnesota), and Arik Lifschitz (Minnesota). The microfoundations session takes place Monday, Aug 12, 2013 from 9:00AM – 10:30AM at WDW Dolphin Resort in Oceanic 5.

Preregistration isn’t required but please let Don Siegel or Tim Devinney know if you plan to attend, as space is limited.

25 July 2013 at 8:47 am Leave a comment

Sampling on the Dependent Variable, Robert Putnam Edition

| Peter Klein |

Famed sociologist Robert Putnam makes his case for government funding of social science research:

One of the harshest critics of National Science Foundation funding of political science has even praised my study [on civil society and democracy] as “one of the most influential pieces of practical research in the last half-century.”

Ironically, however, if the recent amendment by Sen. Tom Coburn (R-Okla.) that restricts NSF funding for political science had been in effect when I began this research, it never would have gotten off the ground since the foundational grant that made this project possible came from the NSF Political Science Program.

Well, yes, if it hadn’t been for NASA, we wouldn’t have put a man on the moon. What this shows about the average or marginal productivity of government science funding is a little unclear to me.

Of course, Putnam’s piece is a short editorial making an emotional, rather than logical, appeal. But this kind of appeal seems to be all the political scientists have offered in response to the hated Coburn Amendment.

11 July 2013 at 10:56 am 4 comments

Autocrats in the Lab

| Peter Klein |

We noted before the Taylorite quality of many great restaurant kitchens. From Pierre Azoulay we learn that scientific laboratories are also sometimes organized as rigid hierarchies, presided over by an autocratic PI. (The key reference is Pasteur.) Pierre suggests a sorting between PI and researcher characteristics, such that labs run by autocrats are about as productive as labs run by softies. Probably the same is true in many groups. This reminds us that the Demsetz-Lehn critique applies to lots of work in management. If there is competition among organizational forms, and heterogeneity among individuals, then we shouldn’t expect one form to outperform the others, on average — a lesson often forgotten in empirical management research.
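To see why sorting can wash out average differences, here is a toy simulation (hypothetical numbers, not from Azoulay’s data): researchers differ in which management style suits them and match to labs accordingly, so the two lab types end up about equally productive even though style matters a great deal for any given researcher.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000
edge_autocrat = rng.normal(0.0, 1.0, n)   # each researcher's productivity edge under an autocratic PI
in_autocrat_lab = edge_autocrat > 0       # sorting: join the lab type you thrive in
baseline = rng.normal(5.0, 1.0, n)
output = baseline + np.where(in_autocrat_lab, edge_autocrat, -edge_autocrat)

print("mean output, autocratic labs:   ", round(output[in_autocrat_lab].mean(), 2))
print("mean output, laissez-faire labs:", round(output[~in_autocrat_lab].mean(), 2))
# With effective sorting the two means are nearly identical, even though the
# management style matters a lot for any individual researcher.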

24 June 2013 at 2:13 pm 2 comments

Blind Review Blindly Reviewing Itself

| Lasse Lien |

From Scott Masten we received this classic gem:

A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables.

The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.

(Peters and Ceci, 1982)

Are these findings specific to the 80s and psychology? Care to replicate?

30 May 2013 at 11:32 am 3 comments

Google-Linked Scholarship

| Peter Klein |

Have you noticed that when you search for a person on Google, the sidebar shows you other linked people searches (“People also search for”)? E.g., if you search for yours truly, it pulls up Nicolai Foss, Joe Salerno, Bob Murphy, and Israel Kirzner. I’m not sure how the algorithm works; is it the likelihood these searches are combined, or searched in sequence, or does it have to do with cross-links in search results? (A toy sketch of the co-occurrence idea follows the examples below.) Anyway, it’s interesting to see who Google thinks is related to whom. For instance:

Peter G. Klein ==> Nicolai Foss, Joseph Salerno, Robert P. Murphy, Israel Kirzner

Nicolai Foss ==> Peter G. Klein, Edith Penrose, Israel Kirzner, Oliver E. Williamson

Oliver E. Williamson ==> Ronald Coase, Elinor Ostrom, Douglass North, Armen Alchian

Murray N. Rothbard ==> Ludwig von Mises, Friedrich Hayek, Frédéric Bastiat, Henry Hazlitt

Paul Krugman ==> John Law, P. T. Barnum, Charles Ponzi, Beelzebub
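As a toy illustration of the first hypothesis (searches that co-occur in the same sessions), here is a small sketch with made-up sessions built from the names above; it is emphatically not Google’s actual algorithm, just one plausible signal:

from collections import Counter
from itertools import combinations

# Hypothetical search sessions: the set of people one user looks up together.
sessions = [
    {"Peter G. Klein", "Nicolai Foss", "Israel Kirzner"},
    {"Peter G. Klein", "Joseph Salerno", "Robert P. Murphy"},
    {"Oliver E. Williamson", "Ronald Coase", "Douglass North"},
    {"Nicolai Foss", "Oliver E. Williamson", "Edith Penrose"},
    {"Peter G. Klein", "Nicolai Foss"},
]

n = len(sessions)
singles = Counter(name for s in sessions for name in s)
pairs = Counter(frozenset(p) for s in sessions for p in combinations(s, 2))

def lift(a, b):
    """How much more often a and b are searched together than chance would predict."""
    return (pairs[frozenset((a, b))] / n) / ((singles[a] / n) * (singles[b] / n))

def also_searched_for(name, k=4):
    """Rank the other names by their lift with `name`."""
    others = [x for x in singles if x != name]
    return sorted(others, key=lambda x: lift(name, x), reverse=True)[:k]

print(also_searched_for("Peter G. Klein"))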

3 May 2013 at 9:54 pm 3 comments

The Future of Publishing

| Peter Klein |

The current issue of Nature features a special section on “The Future of Publishing” (thanks to Jason for the tip). The lead editorial discusses the results of a survey of scientists which shows, perhaps surprisingly, that support for online, open-access publishing is lukewarm. It’s not just the commercial publishers who want to maintain the paywalls. The entire issue is filled with interesting stuff, so check it out.

5 April 2013 at 5:12 pm 1 comment

Hard and Soft Obscurantism

| Lasse Lien |

I recently attended a presentation by the great social scientist Jon Elster, in which he lamented the state of affairs in social science. Elster has – quite nicely, IMHO – coined the terms hard and soft obscurantism for its main problems. To Elster, obscurantism generally refers to endeavors that are unlikely to produce anything of value, and where this can be predicted in advance. This is in contrast to more honorable failures, where a plausible hypothesis turns out to be wrong, leaving much effort without much value.

Soft obscurantism is exactly what it sounds like: unfalsifiable, impenetrable theories which often proudly ignore the standards for argument and evidence that elsewhere constitute the hallmark of the scientific method. Examples are postmodernism (Latour), structuralism (Lévi-Strauss), functionalism (Bourdieu, Foucault), Marxism (Badiou), and psychoanalysis.

But there is a ditch on the other side of the road too. Hard obscurantism refers to mathematical exercises with no point of contact with reality, useful neither as mathematics nor as social science. Another form of hard obscurantism is data mining, or the misuse of fancy econometrics, or a combination of the two. Both mathematical games and econometric voodoo give the appearance of “scientificness,” but Elster doesn’t pull his punches about the value created by hard obscurantism either:

“I believe that much work in economics and political science that is inspired by rational-choice theory is devoid of any explanatory, aesthetic or mathematical interest, which means that it has no value at all”

One can of course argue about the size of the total problem and the relative size of each type (personally, I would bet on soft obscurantism as the bigger problem), but the key question is perhaps why obscurantism of either type isn’t gradually rooted out. According to Elster, their combined “market share” in the social sciences seems to be growing.

For more, see this and this and this.

23 January 2013 at 5:38 am 12 comments

The “Market Power” of Top Journals

| Peter Klein |

When elite academic journals impose stricter submission requirements, authors comply. When lower-ranked journals impose these restrictions, authors submit elsewhere. Key insight for editors: know your place.

Revealed Preferences for Journals: Evidence from Page Limits
David Card, Stefano DellaVigna
NBER Working Paper No. 18663, December 2012

Academic journals set a variety of policies that affect the supply of new manuscripts. We study the impact of page limit policies adopted by the American Economic Review (AER) in 2008 and the Journal of the European Economic Association (JEEA) in 2009 in response to a substantial increase in the length of articles in economics. We focus the analysis on the decision by potential authors to either shorten a longer manuscript in response to the page limit, or submit to another journal. For the AER we find little indication of a loss of longer papers – instead, authors responded by shortening the text and reformatting their papers. For JEEA, in contrast, we estimate that the page length policy led to nearly complete loss of longer manuscripts. These findings provide a revealed-preference measure of competition between journals and indicate that a top-5 journal has substantial monopoly power over submissions, unlike a journal one notch below. At both journals we find that longer papers were more likely to receive a revise and resubmit verdict prior to page limits, suggesting that the loss of longer papers may have had a detrimental effect on quality at JEEA. Despite a modest impact of the AER’s policy on the average length of submissions (-5%), the policy had little or no effect on the length of final accepted manuscripts. Our results highlight the importance of evaluating editorial policies.

2 January 2013 at 10:37 am 1 comment

Review Paper on Prospect Theory

| Peter Klein |

We haven’t been entirely kind to behavioral economics, but we certainly recognize its importance, and have urged our colleagues to keep up with the latest arguments and findings. A new NBER paper by Nicholas Barberis summarizes the literature, focusing on prospect theory, and is worth a read.

Thirty Years of Prospect Theory in Economics: A Review and Assessment
Nicholas C. Barberis
NBER Working Paper No. 18621, December 2012

Prospect theory, first described in a 1979 paper by Daniel Kahneman and Amos Tversky, is widely viewed as the best available description of how people evaluate risk in experimental settings. While the theory contains many remarkable insights, economists have found it challenging to apply these insights, and it is only recently that there has been real progress in doing so. In this paper, after first reviewing prospect theory and the difficulties inherent in applying it, I discuss some of this recent work. While it is too early to declare this research effort an unqualified success, the rapid progress of the last decade makes me optimistic that at least some of the insights of prospect theory will eventually find a permanent and significant place in mainstream economic analysis.
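For readers new to the theory, here is a minimal sketch of its central ingredient, the value function, using Tversky and Kahneman’s well-known 1992 parameter estimates (a standard textbook parameterization, not something taken from the Barberis paper): outcomes are coded as gains or losses relative to a reference point, sensitivity diminishes, and losses loom roughly twice as large as gains.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of a gain or loss x relative to the reference point."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

for outcome in (100, -100, 1000, -1000):
    print(f"v({outcome:+d}) = {value(outcome):8.1f}")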

18 December 2012 at 10:18 am 2 comments

Against Scientism

| Peter Klein |

Hayek defined “scientism” or the “scientistic prejudice” as “slavish imitation of the method and language of Science” when applied to the social sciences, history, management, etc. Scientism represents “a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed,” and as such is “not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it.” (Hayek’s Economica essays on scientism were collected in his 1952 Counter-Revolution of Science and reprinted in volume 13 of the Collected Works.)

Austin L. Hughes has a thoughtful essay on scientism in the current issue of the New Atlantis (HT: Barry Arrington). Hughes thinks “the reach of scientism exceeds its grasp.” The essay is worth a careful read — he misses Hayek but discusses Popper and other important critics. One focus is the “institutional” definition of science, defined with the trite phrase “science is what scientists do.” Here’s Hughes:

The fundamental problem raised by the identification of “good science” with “institutional science” is that it assumes the practitioners of science to be inherently exempt, at least in the long term, from the corrupting influences that affect all other human practices and institutions. Ladyman, Ross, and Spurrett explicitly state that most human institutions, including “governments, political parties, churches, firms, NGOs, ethnic associations, families … are hardly epistemically reliable at all.” However, “our grounding assumption is that the specific institutional processes of science have inductively established peculiar epistemic reliability.” This assumption is at best naïve and at worst dangerous. If any human institution is held to be exempt from the petty, self-serving, and corrupting motivations that plague us all, the result will almost inevitably be the creation of a priestly caste demanding adulation and required to answer to no one but itself.

6 December 2012 at 1:13 pm 3 comments

A Paper You Might Want to Read

| Lasse Lien |

Here’s a link to the “online first” version of a new Org. Science paper by Peter and myself. This one has been in the pipeline for some time, and we’ve blogged about the WP version before, but this is the final and substantially upgraded version. Please read it and cite it, or we will be forced to kidnap your cat:

The survivor principle holds that the competitive process weeds out inefficient firms, so that hypotheses about efficient behavior can be tested by observing what firms actually do. This principle underlies a large body of empirical work in strategy, economics, and management. But do competitive markets really select for efficient behavior? Is the survivor principle reliable? We evaluate the survivor principle in the context of corporate diversification, asking if survivor-based measures of interindustry relatedness are good predictors of firms’ decisions to exit particular lines of business, controlling for other firm and industry characteristics that affect firms’ portfolio choices. We find strong, robust evidence that survivor-based relatedness is an important determinant of exit. This empirical regularity is consistent with an efficiency rationale for firm-level diversification, though we cannot rule out alternative explanations based on firms’ desire for legitimacy by imitation and attempts to temper multimarket competition.

2 December 2012 at 6:51 pm Leave a comment
