Posts filed under ‘Methods/Methodology/Theory of Science’
| Peter Klein |
Have you noticed that when you search for a person on Google, the sidebar shows you other linked people searches (“People also search for”)? E.g., if you search for yours truly, it pulls up Nicolai Foss, Joe Salerno, Bob Murphy, and Israel Kirzner. I’m not sure how the algorithm works; is it the likelihood these searches are combined, or searched in sequence, or does it have to do with cross-links in search results? Anyway, it’s interesting to see who Google thinks is related to whom. For instance:
Peter G. Klein ==> Nicolai Foss, Joseph Salerno, Robert P. Murphy, Israel Kirzner
Nicolai Foss ==> Peter G. Klein, Edith Penrose, Israel Kirzner, Oliver E. Williamson
Oliver E. Williamson ==> Ronald Coase, Elinor Ostrom, Douglass North, Armen Alchian
Murray N. Rothbard ==> Ludwig von Mises, Friedrich Hayek, Frédéric Bastiat, Henry Hazlitt
Paul Krugman ==> John Law, P. T. Barnum, Charles Ponzi, Beelzebub
| Peter Klein |
The current issue of Nature features a special section on “The Future of Publishing” (thanks to Jason for the tip). The lead editorial discusses the results of a survey of scientists which shows, perhaps surprisingly, that support for online, open-access publishing is lukewarm. It’s not just the commercial publishers who want to maintain the paywalls. The entire issue is filled with interesting stuff, so check it out.
| Lasse Lien |
I recently attended a presentation by the great social scientist Jon Elster, in which he lamented the state of affairs in social science. Elster has – quite nicely, IMHO – coined the terms hard and soft obscurantism for the main problems. To Elster, obscurantism generally refers to endeavors that are unlikely to produce anything of value, and where this can be predicted in advance. This is in contrast to more honorable failures, where a plausible hypothesis turns out to be wrong, leaving much effort without much value.
Soft obscurantism is exactly what it sounds like: unfalsifiable, impenetrable theories that often proudly ignore the standards for argument and evidence that elsewhere constitute the hallmark of the scientific method. Examples are postmodernism (Latour), structuralism (Lévi-Strauss), functionalism (Bourdieu, Foucault), Marxism (Badiou), and psychoanalysis.
But there is a ditch on the other side of the road too. Hard obscurantism refers to mathematical exercises without any contact with reality, which are useful neither as mathematics nor as social science. Another form of hard obscurantism is data mining, or the misuse of fancy econometrics, or a combination of the two. Both mathematical games and econometric voodoo give the appearance of “scientificness,” but Elster doesn’t pull his punches on the value created by hard obscurantism either:
“I believe that much work in economics and political science that is inspired by rational-choice theory is devoid of any explanatory, aesthetic or mathematical interest, which means that it has no value at all.”
One can of course argue about the size of the total problem, and the relative size of each type (personally, I would bet on soft obscurantism as the bigger problem), but the key question is perhaps why obscurantism of either type isn’t gradually rooted out. According to Elster, their combined “market share” in the social sciences seems to be growing.
| Peter Klein |
When elite academic journals impose stricter submission requirements, authors comply. When lower-ranked journals impose these restrictions, authors submit elsewhere. Key insight for editors: know your place.
Revealed Preferences for Journals: Evidence from Page Limits
David Card, Stefano DellaVigna
NBER Working Paper No. 18663, December 2012
Academic journals set a variety of policies that affect the supply of new manuscripts. We study the impact of page limit policies adopted by the American Economic Review (AER) in 2008 and the Journal of the European Economic Association (JEEA) in 2009 in response to a substantial increase in the length of articles in economics. We focus the analysis on the decision by potential authors to either shorten a longer manuscript in response to the page limit, or submit to another journal. For the AER we find little indication of a loss of longer papers – instead, authors responded by shortening the text and reformatting their papers. For JEEA, in contrast, we estimate that the page length policy led to nearly complete loss of longer manuscripts. These findings provide a revealed-preference measure of competition between journals and indicate that a top-5 journal has substantial monopoly power over submissions, unlike a journal one notch below. At both journals we find that longer papers were more likely to receive a revise and resubmit verdict prior to page limits, suggesting that the loss of longer papers may have had a detrimental effect on quality at JEEA. Despite a modest impact of the AER’s policy on the average length of submissions (-5%), the policy had little or no effect on the length of final accepted manuscripts. Our results highlight the importance of evaluating editorial policies.
| Peter Klein |
We haven’t been entirely kind to behavioral economics, but we certainly recognize its importance, and have urged our colleagues to keep up with the latest arguments and findings. A new NBER paper by Nicholas Barberis summarizes the literature, focusing on prospect theory, and is worth a read.
Thirty Years of Prospect Theory in Economics: A Review and Assessment
Nicholas C. Barberis
NBER Working Paper No. 18621, December 2012
Prospect theory, first described in a 1979 paper by Daniel Kahneman and Amos Tversky, is widely viewed as the best available description of how people evaluate risk in experimental settings. While the theory contains many remarkable insights, economists have found it challenging to apply these insights, and it is only recently that there has been real progress in doing so. In this paper, after first reviewing prospect theory and the difficulties inherent in applying it, I discuss some of this recent work. While it is too early to declare this research effort an unqualified success, the rapid progress of the last decade makes me optimistic that at least some of the insights of prospect theory will eventually find a permanent and significant place in mainstream economic analysis.
| Peter Klein |
Hayek defined “scientism” or the “scientistic prejudice” as “slavish imitation of the method and language of Science” when applied to the social sciences, history, management, etc. Scientism represents “a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed,” and as such is “not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it.” (Hayek’s Economica essays on scientism were collected in his 1952 Counter-Revolution of Science and reprinted in volume 13 of the Collected Works.)
Austin L. Hughes has a thoughtful essay on scientism in the current issue of the New Atlantis (HT: Barry Arrington). Hughes thinks “the reach of scientism exceeds its grasp.” The essay is worth a careful read — he misses Hayek but discusses Popper and other important critics. One focus is the “institutional” definition of science, defined with the trite phrase “science is what scientists do.” Here’s Hughes:
The fundamental problem raised by the identification of “good science” with “institutional science” is that it assumes the practitioners of science to be inherently exempt, at least in the long term, from the corrupting influences that affect all other human practices and institutions. Ladyman, Ross, and Spurrett explicitly state that most human institutions, including “governments, political parties, churches, firms, NGOs, ethnic associations, families … are hardly epistemically reliable at all.” However, “our grounding assumption is that the specific institutional processes of science have inductively established peculiar epistemic reliability.” This assumption is at best naïve and at worst dangerous. If any human institution is held to be exempt from the petty, self-serving, and corrupting motivations that plague us all, the result will almost inevitably be the creation of a priestly caste demanding adulation and required to answer to no one but itself.
| Lasse Lien |
Here’s a link to the “online first” version of a new Org. Science paper by Peter and myself. This one has been in the pipeline for some time, and we’ve blogged about the WP version before, but this is the final and substantially upgraded version. Please read it and cite it, or we will be forced to kidnap your cat:
The survivor principle holds that the competitive process weeds out inefficient firms, so that hypotheses about efficient behavior can be tested by observing what firms actually do. This principle underlies a large body of empirical work in strategy, economics, and management. But do competitive markets really select for efficient behavior? Is the survivor principle reliable? We evaluate the survivor principle in the context of corporate diversification, asking if survivor-based measures of interindustry relatedness are good predictors of firms’ decisions to exit particular lines of business, controlling for other firm and industry characteristics that affect firms’ portfolio choices. We find strong, robust evidence that survivor-based relatedness is an important determinant of exit. This empirical regularity is consistent with an efficiency rationale for firm-level diversification, though we cannot rule out alternative explanations based on firms’ desire for legitimacy by imitation and attempts to temper multimarket competition.
| Peter Klein |
Ronald Coase has a short piece in the December 2012 Harvard Business Review, “Saving Economics from the Economists” (thanks to Geoff Manne for the tip). Not bad for a fellow about to turn 102! I always learn from Coase, even when I don’t fully agree. Here Coase decries the irrelevance of contemporary economic theory, condemning economics for “giving up the real-world economy as its subject matter.” He also provides a killer quote: “Economics thus becomes a convenient instrument the state uses to manage the economy, rather than a tool the public turns to for enlightenment about how the economy operates.”
I’m sure that’s true for many economists and for some branches of the field, such as Keynesian macroeconomics. But Coase seems to reject economic theorizing altogether, even the “causal-realist” approach popular in these parts. To be useful, he argues, economics should provide practical guidance for the businessperson. However, “[s]ince economics offers little in the way of practical insight, managers and entrepreneurs depend on their own business acumen, personal judgment, and rules of thumb in making decisions.”
Well, that sounds about right to me. Economics provides general principles, or laws, about human action and interaction, mostly stated as “if-then” propositions. Applying the principles to concrete, historical cases requires Verstehen, and is the task of economic historians (as analysts) and entrepreneurs (as actors), not economic theorists. Deductive theory does not replace judgment. Without deductive theory, however, we’d have no principles to apply, and nothing to contribute to our understanding of the economy except — to quote Coase’s own critique of the Old Institutionalists — “a mass of descriptive material waiting for a theory, or a fire.” To be sure, Coase’s own inductive method has led to several brilliant insights. Coase himself has a knack for intuiting general principles from concrete cases (e.g., theorizing about transaction costs from observing automobile plants, or about property rights from studying the history of spectrum allocation), though not perfectly. But, as I noted before, Coase himself is probably the exception that proves the rule — namely that induction is a mess.
| Nicolai Foss |
In a SOapBox Essay in 2005, Teppo Felin and I called for “micro-foundations” for macro management theory, specifically the dominant routines and capabilities (etc.) stream in strategic management. (Check Teppo’s site for the paper, commentaries by Jay Barney and Bruce Kogut, and various other Felin & Foss papers on the subject.) We thought our argument was fairly simple, not really that novel (economists have been talking about micro-foundations for decades), and “obviously true.” Yet, the argument was apparently provocative (or, perhaps more correctly, our formulation of it was…), and it met with considerable hostility. For example, the DRUID 2008 conference in Copenhagen featured a panel on micro-foundations with opposing sides represented by Sidney Winter and Thorbjørn Knudsen, and Peter Abell and yours truly, respectively. I remember seeing several (extremely) prominent management scholars shaking their heads in disbelief about the folly of micro-foundations. (The debate, though not the head-shaking, can be accessed through the DRUID site.)
And yet, 7 years later the micro-foundations project appears to have met with general acceptance, although it is sometimes referred to as the “Foss Fuss” by at least one very prominent contributor to our field. In fact, some of the head-shaking persons from DRUID 2008 now themselves talk about micro-foundations. Both Sid Winter and Thorbjørn Knudsen (not headshakers) now embrace micro-foundations–albeit of the “right” kind (e.g., behavioralist and informed by neuroscience and experiments). Papers in leading journals have “micro-foundations” in the title. Specific examples:
- The Journal of Management Studies just published a special issue on “Micro-origins of Routines and Capabilities,” edited by Teppo, me, Koen Heimeriks, and Tammy Madsen, and featuring contributions by various luminaries.
- The European Management Review’s December issue (not yet online) will feature a transcribed exchange between Sid Winter, me and Maurizio Zollo on micro-foundations.
- A leading association in our field will adopt “micro-foundations” as the theme of one of its conferences (to be held in 2014). Details to be disclosed (soon).
Micro-foundations are “everywhere.” List der Vernunft, I reckon.
UPDATE: The Academy of Management Perspectives will feature a paper symposium next year on micro-foundations. Contributors: Jay Barney, Teppo Felin, Henrich Greve, Siegwart Lindenberg, Andrew van de Ven, Sid Winter, and me.
| Peter Klein |
A new Milken Institute report purports to show that “[t]he benefit from every dollar invested by National Institutes of Health (NIH) outweighs the cost by many times. When we consider the economic benefits realized as a result of decrease in mortality and morbidity of all other diseases, the direct and indirect effects (such as increases in work-related productivity) are phenomenal.” There are so many problems with the study I hardly know where to begin. For instance:
1. The authors measure long-term benefit to society as real GDP for the bioscience industries. This is a strange proxy. It is well-known that one of the major impacts of public science funding is higher wages for science workers. It is hardly surprising that NIH funding results in higher wages and profits for those in the bioscience industry. Moreover, even if industry activity were the variable of interest, don’t we care about the composition of that activity, not just the amount? Which projects were stimulated by NIH funding, and were they the right ones?
2. The results are based on a panel regression of the following equation:
Real GDP for the bioscience industries = f (employment in bioscience industry, labor skill, capital stock, real NIH funding, Industrial R&D in all industries) + state fixed effects + error term.
They interpret the coefficient on NIH funding as the causal effect of NIH funding on bioscience performance. E.g.: “Preliminary results show that the long-term effect of a $1.00 increase in NIH funding will increase the size (output) of the bioscience industry by at least $1.70.” But all the right-hand-side variables are potentially endogenous. For instance, the positive correlation between the dependent variable and NIH funding could reflect winner-picking: the NIH funds projects that are likely to be successful, with or without NIH funding. (The authors briefly mention endogeneity but dismiss it as unimportant.)
This is a version of the basic methodological flaw I attributed to the political scientists lobbying for NSF money. The issue in question — even assuming the dependent variable is a reasonable measure of social benefit — is what bioscience industry output would have been in the absence of NIH funding. (And, even more important, what would have been the direction of that activity.) Public funding could crowd out private funding, and almost certainly changes the direction of research activity, for good or ill.
3. There are a host of econometric problems, aside from endogeneity — no year fixed effects, no interactions between federal and private funds, the imposition of linear relationships, etc.
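To make the endogeneity point concrete, here is a minimal sketch (not the authors’ data or code) of the state fixed-effects regression in point 2, run on synthetic data where the funding coefficient is set to 1.7 by construction. The within estimator recovers that number precisely because the simulated funding is drawn independently of the error term, which is exactly the exogeneity assumption the study never establishes; with winner-picking, funding would be correlated with the error and the same estimator would be biased. All variable names and magnitudes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_years = 50, 10
n = n_states * n_years
state = np.repeat(np.arange(n_states), n_years)  # panel index

# Synthetic panel: unobserved state effects plus a "true" funding effect of 1.7.
# Funding is drawn independently of the error, i.e., exogenous BY CONSTRUCTION.
alpha = rng.normal(0.0, 1.0, n_states)        # state fixed effects
funding = rng.normal(10.0, 2.0, n)            # hypothetical "real NIH funding"
y = alpha[state] + 1.7 * funding + rng.normal(0.0, 1.0, n)  # bioscience output

def within(v):
    """Demean a variable within each state (the fixed-effects transformation)."""
    means = np.bincount(state, weights=v) / np.bincount(state)
    return v - means[state]

# OLS on demeaned data = the within (fixed-effects) estimator
y_w, x_w = within(y), within(funding)
beta = (x_w @ y_w) / (x_w @ x_w)
print(f"estimated effect of funding: {beta:.2f}")  # close to the true 1.7
```

The recovery of 1.7 here is a feature of the simulation, not evidence about NIH funding: if instead funding were assigned to states with high unobserved potential (winner-picking), the identical regression would attribute that potential to funding.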
If I’m being unfair to the authors, I hope readers will correct me. But this looks to me like another example of special pleading, not careful analysis.
| Peter Klein |
This looks like a mighty interesting conference:
Scientific theory choice is guided by judgments of simplicity, a bias frequently referred to as “Ockham’s Razor”. But what is simplicity and how, if at all, does it help science find the truth? Should we view simple theories as means for obtaining accurate predictions, as classical statisticians recommend? Or should we believe the theories themselves, as Bayesian methods seem to justify? The aim of this workshop is to re-examine the foundations of Ockham’s razor, with a firm focus on the connections, if any, between simplicity and truth.
The conference started yesterday; here’s a report on day 1 from Cosma Shalizi. Parsimony, for example, turns out to be more complicated than it appears; here is Shalizi on (recent University of Missouri visitor) Elliott Sober:
What he mostly addressed is when parsimony . . . ranks hypotheses in the same order as likelihood. . . . The conditions needed for parsimony and likelihood to agree are rather complicated and disjunctive, making parsimony seem like a mere short-cut or hack — if you think it should be matching likelihood. He was, however, clear in saying that he didn’t think hypotheses should always be evaluated in terms of likelihood alone. He ended by suggesting that “parsimony” or “simplicity” is probably many different things in many different areas of science (safe enough), and that when there is a legitimate preference for parsimony, it can be explained “reductively”, in terms of service to some more compelling goal than sheer simplicity.
| Peter Klein |
I’ve received quite a few emails from various academic organizations asking me to help defeat the Flake Amendment, which would eliminate National Science Foundation funding for political science research. The American Political Science Association is all over this, even publishing a virtual special issue of APSR highlighting NSF-funded research results.
Ironically, none of the arguments I’ve seen for preserving public funding of social science research makes an argument consistent with, well, social-science research. All take the form: “Government funding has supported the following important research findings, which have had the following social benefits.” This argument receives three Fs for research design. First, there is no counterfactual. The point isn’t whether government-funded research result X is good, but whether it’s better than Y, the research result that would have obtained in the absence of government funding. Government funding doesn’t simply increase the quantity of research, it shapes the direction of research. How do we know NSF-funded work isn’t crowding out even more valuable work?
Second, there is no attempt at causal identification. Where are the natural experiments, the randomized controlled trials, the valid instruments? We already know that a main effect of government funding of hard science is to increase the wages of scientists, not the quality or quantity of research. Even if NSF funds good political science research, how do we know the funding is the cause, not the consequence, of the research?
Third, there is no cost-benefit analysis. The lobbying statements simply list purported benefits. Well, sure, the government could give me hundreds of millions of dollars and I’d do some good with it too. Would those benefits exceed the costs? “Political science research has wide-spread effects beyond specific projects,” say the APSA’s talking points. Maybe so, but what about the effects of those goods and services that would have been produced with the taxpayer dollars that went to NSF? Has nobody at the Monkey Cage read Bastiat?
Put differently, I’m certain the APSR would desk-reject an empirical paper with the logical structure of this argument for funding!
My advice to social scientists seeking government funding is to start by acting like social scientists, not K Streeters.
| Peter Klein |
It’s called “fractional scholarship.”
American universities produce far more Ph.D.s than there are faculty positions for them to fill, say the report’s authors, Samuel Arbesman, senior scholar at the Kauffman Foundation, and Jon Wilkins, founder of the Ronin Institute. Thus, the traditional academic path may not be an option for newly minted Ph.D.s. Other post-graduate scientists may eschew academia for careers in positions that don’t take direct advantage of the skills they acquired in graduate school.
Consequently, “America has a glut of talented, highly educated, underemployed individuals who wish to and are quite capable of effectively pursuing scholarship, but are unable to do so,” said Arbesman. “Ideally, groups of these individuals would come together to identify, define and tackle the questions that offer the greatest potential for important scientific results and economic growth.”
Given the level of relationship-specific investment many research projects require, this isn’t likely to work without some kinds of long-term commitments. But the model may be effective for other projects. And it beats the alternative.
| Peter Klein |
Did you know this year is the semicentennial of Kuhn’s Structure of Scientific Revolutions? David Kaiser offers some reflections at Nature.
At the heart of Kuhn’s account stood the tricky notion of the paradigm. British philosopher Margaret Masterman famously isolated 21 distinct ways in which Kuhn used the slippery term throughout his slim volume. Even Kuhn himself came to realize that he had saddled the word with too much baggage: in later essays, he separated his intended meanings into two clusters. One sense referred to a scientific community’s reigning theories and methods. The second meaning, which Kuhn argued was both more original and more important, referred to exemplars or model problems, the worked examples on which students and young scientists cut their teeth. As Kuhn appreciated from his own physics training, scientists learned by immersive apprenticeship; they had to hone what Hungarian chemist and philosopher of science Michael Polanyi had called “tacit knowledge” by working through large collections of exemplars rather than by memorizing explicit rules or theorems. More than most scholars of his era, Kuhn taught historians and philosophers to view science as practice rather than syllogism.
Kuhn did not, to my knowledge, say much about the social sciences, though in a later essay he described them in somewhat unflattering terms:
[T]here are many fields — I shall call them proto-sciences — in which practice does not generate testable conclusions but which nonetheless resemble philosophy and the arts rather than the established sciences in their developmental patterns. I think, for example, of fields like chemistry and electricity before the mid-eighteenth century, of the study of heredity and phylogeny before the mid-nineteenth, or many of the social sciences today. In those fields, . . . though they satisfy [Popper's] demarcation criterion, incessant criticism and continual striving for a fresh start are primary forces, and need to be. No more than in philosophy and the arts, however, do they result in clear-cut progress.
Murray Rothbard took an explicitly Kuhnian approach to his history of economic thought, agreeing with Kuhn that there is no linear, upward progression and condemning what he called the “Whig theory” of intellectual history.
| Peter Klein |
Kate Maxwell, writing at Growthology, is concerned about the distance between those who do entrepreneurship and those who teach or research entrepreneurship:
In my reading of the entrepreneurship literature I have been struck by the large gap between entrepreneurs and people who study entrepreneurship. The group of people who self select into entrepreneurship is almost entirely disjoint from the group of people who self select to study it. Such a gap exists in other fields to greater and lesser degrees. Sociologists, for instance, study phenomena in which they are clearly participants whereas political scientists are rarely career politicians but are often actors in political systems.
But in the case of entrepreneurship the gap is cause for concern. My sense is that all too often those studying entrepreneurship don’t understand, even through exposure, the messy process of creating a business, nor, due to selection effects, are they naturally inclined to think like an entrepreneur might.
I agree entirely with this description, but am not sure I understand the concern. Kate seems to assume a particular concept of entrepreneurship — the day-to-day mechanics of starting and growing a business — that applies only to a fraction of the entrepreneurship literature. Surely one can study the effects of entrepreneurship on economic outcomes like growth and industry structure without “thinking like an entrepreneur.” Same for antecedents to entrepreneurship such as the legal and political environment, social and cultural norms, the behavior of universities, etc. Even more so, if we treat entrepreneurship as an economic function (alertness, innovation, adaptation, or judgment) rather than an employment category or a firm type, then solid training in economics and related disciplines seems the main prerequisite for doing good research.
Of course, this doesn’t mean that entrepreneurship scholars shouldn’t talk to entrepreneurs or study their lives and work. Want to know how it feels to throw the winning Super Bowl pass? Ask Tom Brady or Eli Manning. The stat sheet won’t tell you that. But this doesn’t mean that only ex-NFL players can be competent announcers, analysts, sportswriters, etc. Similarly, I like to read about food, and have enjoyed the recent memoirs of great chefs like Jacques Pépin and Julia Child. These first-hand accounts are full of unique insights and colorful observations. But there are plenty of great books on the restaurant industry, on the relationship between food and culture, on culinary innovation, etc. by authors who couldn’t cook their way out of a paper bag.
What do you think?
| Nicolai Foss |
As readers of this blog will know, the dialogue between the firm capabilities literature and organizational economics has a long history in management research and economics. Co-blogger Dick Langlois has been an important contributor in this space. The forty-year discussion (dating it from George B. Richardson’s 1972 hint that his newly coined notion of capability is complementary to Coasian transaction cost analysis) has proceeded through several stages. Thus, the initial wave of capabilities theory (i.e., beginning to mid-1990s) was strongly critical of organizational economics. This gave way to a recognition that perhaps the two perspectives were complementary in a more additive manner: whereas capabilities theory provided insight into which assets firms need to access to compete successfully, organizational economics provides insight into how such access is contractually organized. Increasingly, however, work has stressed deeper relations of complementarity: capabilities mechanisms are intertwined with the explanatory mechanisms identified by organizational economists.
In a paper, “The Organizational Economics of Organizational Capability and Heterogeneity: A Research Agenda,” that is forthcoming as the Introduction to a special issue of Organization Science on the relation between capabilities and organizational economics ideas, Nick Argyres, Teppo Felin, Todd Zenger and I argue, however, that the discussion has been lopsided (hardly qualifying as a real debate) and that a reorientation is necessary. Specifically, the terms of the discussion have largely been defined by capabilities theorists. Part of the explanation for this dominance is that capability theorists have had a rhetorical advantage, because everyone seems to have accepted that organizational economics has very little to say about organizational heterogeneity. We argue that this rests on a misreading of organizational economics: while it is true that organizational economics was not (directly) designed to address and explain organizational heterogeneity, this does not imply that the theory is and must remain silent about such heterogeneity. In fact, we discuss a number of ways in which organizational economics is quite centrally focused on explaining organizational heterogeneity. Specifically, we argue that organizational economics provides guidance around how organizational design and boundaries facilitate the formation of knowledge, insight, and learning that are central to the heterogeneity of firms. We also demonstrate how efficient governance can itself be a source of competitive heterogeneity. We thus call on organizational economists to actively and vigorously enter the discussion, turning something closer to a monologue into real dialogue. (more…)
| Nicolai Foss |
Economists have typically been suspicious of data generated by (mail, telephone) surveys and interviews, and have idolized register data. The former are soft and mushy data, the latter are hard and serious ones. I have always been a bit sceptical regarding whether the traditional economist’s suspicion of soft data is really that well-founded; after all, the statistical agencies of the world and other government institutions that are in the business of data collection are populated by fallible individuals, and the respondents are the same ones who respond to, say, a mail survey conducted by Prof. N. J. Foss, PhD. (Having recently conducted a major data collection effort with a public statistical agency, my skepticism has dramatically increased!)
The argument is sometimes made that there may be a legal duty to respond to the queries of a government agency and this means a high response rate and accurate reporting. However, it appears that we know rather little about the accuracy of data generated in this way, and it is quite conceivable that measurement error is high, exactly because the provision of data is “forced” (those anarcho-capitalist types out there may delight in providing erroneous data!). The serious content of the traditional economist’s prejudice is rather, I think, that surveys often have respondents reacting to subjective scales rather than providing absolute numbers. This is a warranted concern, but not a critique of surveys and interviews per se, because these methods do not imply commitment to subjective scales per se.
As a rule, register data are not available to address the numerous interesting issues in organizational economics, labor economics, productivity research, and so on. Scholars working on these issues have to resort to those softy surveys and interviews that have been the workhorses of business school faculty for decades. This is a new recognition in economics. Case in point: a recent paper by Nicholas Bloom and John Van Reenen, “New approaches to surveying organizations.” There is absolutely nothing, I submit, in this short, well-written paper that would surprise virtually any empirically oriented business school professor (i.e., virtually all b-school professors); to them none of it is “new” at all, but rather old hat.
This is not a critique of Profs. Bloom and Van Reenen at all (on the contrary, it is excellent that they educate their economist colleagues in this way). It is just striking, and a little amusing, that we have had to wait until 2010 for empirical approaches that have been mainstream in management research for decades to reach the pages of the American Economic Review.
| Peter Klein |
Our QOTD comes from the 2002 version of Hayek’s “Competition as a Discovery Procedure.” (Thanks to REW for the inspiration.) Hayek delivered two versions of the lecture, both in 1968, one in English and one in German. The former appeared in Hayek’s 1978 collection New Studies in Philosophy, Politics, Economics, and the History of Ideas, and is the version most familiar to English-speaking scholars. In 2002 the QJAE published a new English translation of the German version which includes two sections (II and VII) omitted from the earlier English version. In this passage from section II Hayek distinguishes macroeconomics (“macrotheory”) from microeconomics (“microtheory”):
About many important conditions we have only statistical information rather than data regarding changes in the fine structure. Macrotheory then often affords approximate values or, probably, predictions that we are unable to obtain in any other way. It might often be worthwhile, for example, to base our reasoning on the assumption that an increase of aggregate demand will in general lead to a greater increase in investment, although we know that under certain circumstances the opposite will be the case. These theorems of macrotheory are certainly valuable as rules of thumb for generating predictions in the presence of insufficient information. But they are not only not more scientific than is microtheory; in a strict sense they do not have the character of scientific theories at all.
In this regard I must confess that I still sympathize more with the views of the young Schumpeter than with those of the elder, the latter being responsible to so great an extent for the rise of macrotheory. Exactly 60 years ago, in his brilliant first publication, a few pages after having introduced the concept of “methodological individualism” to designate the method of economic theory, he wrote:
If one erects the edifice of our theory uninfluenced by prejudices and outside demands, one does not encounter these concepts [namely “national income,” “national wealth,” “social capital”] at all. Thus we will not be further concerned with them. If we wanted to do so, however, we would see how greatly they are afflicted with obscurities and difficulties, and how closely they are associated with numerous false notions, without yielding a single truly valuable theorem.
The reference is to Schumpeter’s 1908 book, Das Wesen und der Hauptinhalt der theoretischen Nationalökonomie which, to my knowledge, has never been translated (though an excerpt, and some commentary, are here). For more on the different versions of Hayek’s essay see here and here.
NB: Krugman blogged over the weekend about microfoundations, offering a remarkably shallow and misguided critique based on what Hayek would call the scientistic fallacy. E.g.: “meteorologists were using concepts like cold and warm fronts long before they had computational weather models, because those concepts seemed to make sense and to work. Why, then, do some economists think that concepts like the IS curve or the multiplier are illegitimate because they aren’t necessarily grounded in optimization from the ground up?” Ugh.
| Peter Klein |
An old post of Nicolai’s on journal impact factors is still attracting attention. Two recent comments are reproduced here so they don’t get buried in the comments.
There is an interesting paper by Joel Baum on this, “Free Riding on Power Laws: Questioning the Validity of the Impact Factor as a Measure of Research Quality in Organization Studies,” Organization 18(4) (2011): 449-66. He does a nice analysis of citations, and shows (what many of us suspected) that citations are highly skewed toward a small subset of articles, so the idea of an impact factor based on a mean citation rate is erroneous. He concludes that “Impact Factor has little credibility as a proxy for the quality of either organization studies journals or the articles they publish, resulting in attributions of journal or article quality that are incorrect as much or more than half the time. The clear implication is that we need to cease our reliance on such non-scientific, quantitative characterisation to evaluate the quality of our work.”
To which Ram Mudambi responds:
This analysis was already done in a paper we wrote in 2005, finally now published in Scientometrics.
We have the further and stronger result that in many years, the top 10 percent of papers in A- journals like Research Policy outperform the top 10 percent of papers in A journals like AMJ.
So it is the paper that matters, NOT the journal in which it was published. Evaluating scholars on the basis of where they have published is pretty meaningless. Some years ago, we had a senior job candidate with EIGHTEEN real “A” publications — it turned out he had only 118 total citations on Google scholar. So his work was pretty trivial, even though it appeared in top journals.
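Baum’s statistical point is easy to see with a toy simulation. The sketch below is purely illustrative (the distribution parameters are hypothetical, not fitted to any journal’s actual citation data): when citation counts follow a heavy-tailed, power-law-like distribution, the mean, which is what the impact factor reports, is pulled up by a handful of highly cited articles and overstates the citations of the typical paper.

```python
import random

random.seed(42)

# Simulate citation counts for 1,000 articles in a hypothetical journal,
# using a Pareto-like heavy tail (shape parameter chosen for illustration).
citations = [int(random.paretovariate(1.5)) - 1 for _ in range(1000)]

# The impact factor is a mean; compare it with the median article.
mean_citations = sum(citations) / len(citations)
median_citations = sorted(citations)[len(citations) // 2]
share_below_mean = sum(c < mean_citations for c in citations) / len(citations)

print(f"mean (impact-factor-style): {mean_citations:.2f}")
print(f"median article:             {median_citations}")
print(f"share of articles below the mean: {share_below_mean:.0%}")
```

On a run like this the mean sits well above the median, and most articles fall below the journal’s “average” citation rate — which is exactly why a mean-based impact factor misdescribes the typical paper in a skewed distribution.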
See also the good twin blog for further discussion.
| Peter Lewin |
The second review article in the latest issue of AMR, by Venkataraman, Sarasvathy, Dew, and Forster (VSDF), is more ambitious than the first, by Shane, discussed in Part 1. In fact, one might describe the ambition motivating the article as grandiose. VSDF “seek to recast entrepreneurship as a science of the artificial,” an entirely new way of looking at entrepreneurship in the interest of uncovering (what I take to be universal) principles that can serve as the basis of a new empirical and policy-useful science of entrepreneurship. [I see this article as a companion piece to the article by Sarasvathy and Venkataraman (SV) in ET&P, January 2011, in which this grandiose vision is even more apparent.]
The science of the artificial (supposedly a distinct category of science from natural or social science) is derived from the work of Herbert Simon (1996).
As a theory develops it splits into two streams: (1) “basic” research that continues to refine the causal explanations and (2) “applied” research that seeks to alter the variables of explanation. At that point the phenomenon of interest has become an artifact. …
A science of the artificial is interested in phenomena that can be designed [and controlled]. … Design lies in the choice of the boundary values; control lies in the means to change them. (24).
So a useful theory is itself an artifact, something that can be used to understand and (importantly) control aspects of the (social) world. And, I suppose, the new science of entrepreneurship will eventually develop such artifacts. [At the end of the article they talk about “recasting opportunities as artifacts” — so I am not sure how this is all connected.]
My lack of expertise regarding the work of Herbert Simon (something which I am now more encouraged to remedy) prevents me from pronouncing with confidence on this part of the article. Suffice it to say that the meaning and contribution of this new “science of the artificial” is far from clear to me. I am left with a feeling that if it is indeed such an important and path-breaking meta-scientific turn, the authors should be able to explain it better. It should be more accessible and transparent. I am left highly skeptical, but I urge readers of this post to read the article and perhaps enlighten me and others. (more…)