Posts filed under ‘Methods/Methodology/Theory of Science’
Against Scientism
| Peter Klein |
Hayek defined “scientism” or the “scientistic prejudice” as “slavish imitation of the method and language of Science” when applied to the social sciences, history, management, etc. Scientism represents “a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed,” and as such is “not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it.” (Hayek’s Economica essays on scientism were collected in his 1952 Counter-Revolution of Science and reprinted in volume 13 of the Collected Works.)
Austin L. Hughes has a thoughtful essay on scientism in the current issue of the New Atlantis (HT: Barry Arrington). Hughes thinks “the reach of scientism exceeds its grasp.” The essay is worth a careful read — he misses Hayek but discusses Popper and other important critics. One focus is the “institutional” definition of science, captured in the trite phrase “science is what scientists do.” Here’s Hughes:
The fundamental problem raised by the identification of “good science” with “institutional science” is that it assumes the practitioners of science to be inherently exempt, at least in the long term, from the corrupting influences that affect all other human practices and institutions. Ladyman, Ross, and Spurrett explicitly state that most human institutions, including “governments, political parties, churches, firms, NGOs, ethnic associations, families … are hardly epistemically reliable at all.” However, “our grounding assumption is that the specific institutional processes of science have inductively established peculiar epistemic reliability.” This assumption is at best naïve and at worst dangerous. If any human institution is held to be exempt from the petty, self-serving, and corrupting motivations that plague us all, the result will almost inevitably be the creation of a priestly caste demanding adulation and required to answer to no one but itself.
A Paper You Might Want to Read
| Lasse Lien |
Here’s a link to the “online first” version of a new Org. Science paper by Peter and me. This one has been in the pipeline for some time, and we’ve blogged about the WP version before, but this is the final and substantially upgraded version. Please read it and cite it, or we will be forced to kidnap your cat:
The survivor principle holds that the competitive process weeds out inefficient firms, so that hypotheses about efficient behavior can be tested by observing what firms actually do. This principle underlies a large body of empirical work in strategy, economics, and management. But do competitive markets really select for efficient behavior? Is the survivor principle reliable? We evaluate the survivor principle in the context of corporate diversification, asking if survivor-based measures of interindustry relatedness are good predictors of firms’ decisions to exit particular lines of business, controlling for other firm and industry characteristics that affect firms’ portfolio choices. We find strong, robust evidence that survivor-based relatedness is an important determinant of exit. This empirical regularity is consistent with an efficiency rationale for firm-level diversification, though we cannot rule out alternative explanations based on firms’ desire for legitimacy by imitation and attempts to temper multimarket competition.
Coase on the Economists
| Peter Klein |
Ronald Coase has a short piece in the December 2012 Harvard Business Review, “Saving Economics from the Economists” (thanks to Geoff Manne for the tip). Not bad for a fellow about to turn 102! I always learn from Coase, even when I don’t fully agree. Here Coase decries the irrelevance of contemporary economic theory, condemning economics for “giving up the real-world economy as its subject matter.” He also provides a killer quote: “Economics thus becomes a convenient instrument the state uses to manage the economy, rather than a tool the public turns to for enlightenment about how the economy operates.”
I’m sure that’s true for many economists and for some branches of the field, such as Keynesian macroeconomics. But Coase seems to reject economic theorizing altogether, even the “causal-realist” approach popular in these parts. To be useful, he argues, economics should provide practical guidance for the businessperson. However, “[s]ince economics offers little in the way of practical insight, managers and entrepreneurs depend on their own business acumen, personal judgment, and rules of thumb in making decisions.”
Well, that sounds about right to me. Economics provides general principles, or laws, about human action and interaction, mostly stated as “if-then” propositions. Applying the principles to concrete, historical cases requires Verstehen, and is the task of economic historians (as analysts) and entrepreneurs (as actors), not economic theorists. Deductive theory does not replace judgment. Without deductive theory, however, we’d have no principles to apply, and nothing to contribute to our understanding of the economy except — to quote Coase’s own critique of the Old Institutionalists — “a mass of descriptive material waiting for a theory, or a fire.” To be sure, Coase’s own inductive method has led to several brilliant insights. He has a knack for intuiting general principles from concrete cases (e.g., theorizing about transaction costs from observing automobile plants, or about property rights from studying the history of spectrum allocation), though not infallibly. But, as I noted before, Coase is probably the exception that proves the rule — namely, that induction is a mess.
Micro-foundations All Over the Place …
| Nicolai Foss |
In a 2005 SO!APBOX essay, Teppo Felin and I called for “micro-foundations” for macro management theory, specifically the dominant routines and capabilities (etc.) stream in strategic management. (Check Teppo’s site for the paper, commentaries by Jay Barney and Bruce Kogut, and various other Felin & Foss papers on the subject.) We thought our argument was fairly simple, not really that novel (economists have been talking about micro-foundations for decades), and “obviously true.” Yet the argument was apparently provocative (or, perhaps more correctly, our formulation of it was…), and it met with considerable hostility. For example, the DRUID 2008 conference in Copenhagen featured a panel on micro-foundations, with Sidney Winter and Thorbjørn Knudsen on one side and Peter Abell and yours truly on the other. I remember seeing several (extremely) prominent management scholars shaking their heads in disbelief at the folly of micro-foundations. (The debate, though not the head-shaking, can be accessed through the DRUID site.)
And yet, 7 years later the micro-foundations project appears to have met with general acceptance, although it is sometimes referred to as the “Foss Fuss” by at least one very prominent contributor to our field. In fact, some of the head-shaking persons from DRUID 2008 now themselves talk about micro-foundations. Both Sid Winter and Thorbjørn Knudsen (not head-shakers) now embrace micro-foundations, albeit of the “right” kind (e.g., behavioralist and informed by neuroscience and experiments). Papers in leading journals have “micro-foundations” in the title. Specific examples:
- The Journal of Management Studies just published a special issue on “Micro-origins of Routines and Capabilities,” edited by Teppo, me, Koen Heimeriks, and Tammy Madsen, and featuring contributions by various luminaries.
- The European Management Review’s December issue (not yet online) will feature a transcribed exchange among Sid Winter, Maurizio Zollo, and me on micro-foundations.
- A leading association in our field will adopt “micro-foundations” as the theme of one of its conferences (to be held in 2014). Details to be disclosed (soon).
Micro-foundations are “everywhere.” List der Vernunft, I reckon.
UPDATE: The Academy of Management Perspectives will feature a paper symposium next year on micro-foundations. Contributors: Jay Barney, Teppo Felin, Henrich Greve, Siegwart Lindenberg, Andrew van de Ven, Sid Winter, and me.
The Wrong Way to Measure Returns to Public Science Funding
| Peter Klein |
A new Milken Institute report purports to show that “[t]he benefit from every dollar invested by National Institutes of Health (NIH) outweighs the cost by many times. When we consider the economic benefits realized as a result of decrease in mortality and morbidity of all other diseases, the direct and indirect effects (such as increases in work-related productivity) are phenomenal.” There are so many problems with the study I hardly know where to begin. For instance:
1. The authors measure long-term benefit to society as real GDP for the bioscience industries. This is a strange proxy. It is well known that one of the major impacts of public science funding is higher wages for science workers. It is hardly surprising that NIH funding results in higher wages and profits for those in the bioscience industry. Moreover, even if industry activity were the variable of interest, don’t we care about the composition of that activity, not just the amount? Which projects were stimulated by NIH funding, and were they the right ones?
2. The results are based on a panel regression of the following equation:
Real GDP for the bioscience industries = f(employment in the bioscience industry, labor skill, capital stock, real NIH funding, industrial R&D in all industries) + state fixed effects + error term.
They interpret the coefficient on NIH funding as the causal effect of NIH funding on bioscience performance. E.g.: “Preliminary results show that the long-term effect of a $1.00 increase in NIH funding will increase the size (output) of the bioscience industry by at least $1.70.” But all the right-hand-side variables are potentially endogenous. For instance, the positive correlation between the dependent variable and NIH funding could reflect winner-picking: the NIH funds projects that are likely to be successful, with or without NIH funding. (The authors briefly mention endogeneity but dismiss it as unimportant.)
This is a version of the basic methodological flaw I attributed to the political scientists lobbying for NSF money. The issue in question — even assuming the dependent variable is a reasonable measure of social benefit — is what bioscience industry output would have been in the absence of NIH funding. (And, even more important, what would have been the direction of that activity.) Public funding could crowd out private funding, and almost certainly changes the direction of research activity, for good or ill. (A toy simulation after point 3 below illustrates the winner-picking problem.)
3. There are a host of econometric problems, aside from endogeneity — no year fixed effects, no interactions between federal and private funds, the imposition of linear relationships, etc.
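On point 2, a minimal simulation makes the winner-picking worry concrete. The numbers below are entirely made up, not taken from the report: funding is assumed to flow to high-potential projects and to have no causal effect whatsoever, yet a naive regression still reports a large positive “effect”:

```python
# A sketch of the winner-picking problem, with entirely made-up numbers.
# Funding is assumed to flow to high-potential projects and to have NO
# causal effect, yet OLS reports a large positive "effect" of funding.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

potential = rng.normal(size=n)                    # latent project quality (unobserved)
funding = 0.8 * potential + rng.normal(scale=0.5, size=n)   # agency picks winners
output = potential + rng.normal(scale=0.5, size=n)          # true funding effect = 0

res = sm.OLS(output, sm.add_constant(funding)).fit()
print(res.params[1])   # roughly 0.9 "per dollar," purely from selection
```

Nothing in a state fixed-effects panel rules this story out; separating selection from treatment effects requires an explicit identification strategy (instruments, natural experiments, discontinuities in funding rules, and the like).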
If I’m being unfair to the authors, I hope readers will correct me. But this looks to me like another example of special pleading, not careful analysis.
Ockham’s Razor
| Peter Klein |
This looks like a mighty interesting conference:
Scientific theory choice is guided by judgments of simplicity, a bias frequently referred to as “Ockham’s Razor”. But what is simplicity and how, if at all, does it help science find the truth? Should we view simple theories as means for obtaining accurate predictions, as classical statisticians recommend? Or should we believe the theories themselves, as Bayesian methods seem to justify? The aim of this workshop is to re-examine the foundations of Ockham’s razor, with a firm focus on the connections, if any, between simplicity and truth.
The conference started yesterday; here’s a report on day 1 from Cosma Shalizi. Parsimony, for example, turns out to be more complicated than it appears; here is Shalizi on (recent University of Missouri visitor) Elliott Sober:
What he mostly addressed is when parsimony . . . ranks hypotheses in the same order as likelihood. . . . The conditions needed for parsimony and likelihood to agree are rather complicated and disjunctive, making parsimony seem like a mere short-cut or hack — if you think it should be matching likelihood. He was, however, clear in saying that he didn’t think hypotheses should always be evaluated in terms of likelihood alone. He ended by suggesting that “parsimony” or “simplicity” is probably many different things in many different areas of science (safe enough), and that when there is a legitimate preference for parsimony, it can be explained “reductively”, in terms of service to some more compelling goal than sheer simplicity.
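The classical-statistics rationale mentioned in the workshop blurb, simple theories as means for obtaining accurate predictions, can be illustrated with a toy curve-fitting exercise. This is my own sketch, not anything presented at the workshop: a ninth-degree polynomial fits the training sample better than a straight line, but the straight line typically predicts a fresh sample from the same linear process more accurately.

```python
# Toy illustration of parsimony as a predictive virtue: the complex model
# fits the training data better but (typically) predicts new data worse.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)

def sample():
    return 2 * x + rng.normal(scale=0.3, size=x.size)   # the true law is linear

y_train, y_test = sample(), sample()
for degree in (1, 9):
    coeffs = np.polyfit(x, y_train, degree)
    fit = np.polyval(coeffs, x)
    train_mse = np.mean((fit - y_train) ** 2)   # falls as degree rises...
    test_mse = np.mean((fit - y_test) ** 2)     # ...while this typically rises
    print(degree, round(train_mse, 3), round(test_mse, 3))
```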
“Give Me Money!”
| Peter Klein |
I’ve received quite a few emails from various academic organizations asking me to help defeat the Flake Amendment, which would eliminate National Science Foundation funding for political science research. The American Political Science Association is all over this, even publishing a virtual special issue of APSR highlighting NSF-funded research results.
Ironically, none of the arguments I’ve seen for preserving public funding of social science research is consistent with, well, social science research itself. All take the form: “Government funding has supported the following important research findings, which have had the following social benefits.” This argument receives three Fs for research design. First, there is no counterfactual. The point isn’t whether government-funded research result X is good, but whether it’s better than Y, the research result that would have obtained in the absence of government funding. Government funding doesn’t simply increase the quantity of research, it shapes the direction of research. How do we know NSF-funded work isn’t crowding out even more valuable work?
Second, there is no attempt at causal inference. Where are the natural experiments, the randomized controlled trials, the valid instruments? There is evidence that a main effect of government funding of hard science is to increase the wages of scientists, not the quality or quantity of research. Even if NSF funds good political science research, how do we know the funding is the cause, not the consequence, of the research?
Third, there is no cost-benefit analysis. The lobbying statements simply list purported benefits. Well, sure, the government could give me hundreds of millions of dollars and I’d do some good with it too. Would those benefits exceed the costs? “Political science research has widespread effects beyond specific projects,” say the APSA’s talking points. Maybe so, but what about the effects of those goods and services that would have been produced with the taxpayer dollars that went to NSF? Has nobody at the Monkey Cage read Bastiat?
Put differently, I’m certain the APSR would desk-reject an empirical paper with the logical structure of this argument for funding!
My advice to social scientists seeking government funding is to start by acting like social scientists, not K Streeters.
Crowdsourcing in Academia
| Peter Klein |
It’s called “fractional scholarship.”
American universities produce far more Ph.D.s than there are faculty positions for them to fill, say the report’s authors, Samuel Arbesman, senior scholar at the Kauffman Foundation, and Jon Wilkins, founder of the Ronin Institute. Thus the traditional academic path may not be an option for newly minted Ph.D.s. Other post-graduate scientists may eschew academia for careers in positions that don’t take direct advantage of the skills they acquired in graduate school.
Consequently, “America has a glut of talented, highly educated, underemployed individuals who wish to and are quite capable of effectively pursuing scholarship, but are unable to do so,” said Arbesman. “Ideally, groups of these individuals would come together to identify, define and tackle the questions that offer the greatest potential for important scientific results and economic growth.”
Given the level of relationship-specific investment many research projects require, this isn’t likely to work without some kind of long-term commitment. But the model may be effective for other projects. And it beats the alternative.

Paradigm Shift
| Peter Klein |
Did you know this year is the semicentennial of Kuhn’s Structure of Scientific Revolutions? David Kaiser offers some reflections at Nature.
At the heart of Kuhn’s account stood the tricky notion of the paradigm. British philosopher Margaret Masterman famously isolated 21 distinct ways in which Kuhn used the slippery term throughout his slim volume. Even Kuhn himself came to realize that he had saddled the word with too much baggage: in later essays, he separated his intended meanings into two clusters. One sense referred to a scientific community’s reigning theories and methods. The second meaning, which Kuhn argued was both more original and more important, referred to exemplars or model problems, the worked examples on which students and young scientists cut their teeth. As Kuhn appreciated from his own physics training, scientists learned by immersive apprenticeship; they had to hone what Hungarian chemist and philosopher of science Michael Polanyi had called “tacit knowledge” by working through large collections of exemplars rather than by memorizing explicit rules or theorems. More than most scholars of his era, Kuhn taught historians and philosophers to view science as practice rather than syllogism.
Kuhn did not, to my knowledge, say much about the social sciences, though in a later essay he described them in somewhat unflattering terms:
[T]here are many fields — I shall call them proto-sciences — in which practice does not generate testable conclusions but which nonetheless resemble philosophy and the arts rather than the established sciences in their developmental patterns. I think, for example, of fields like chemistry and electricity before the mid-eighteenth century, of the study of heredity and phylogeny before the mid-nineteenth, or many of the social sciences today. In those fields, . . . though they satisfy [Popper’s] demarcation criterion, incessant criticism and continual striving for a fresh start are primary forces, and need to be. No more than in philosophy and the arts, however, do they result in clear-cut progress.
Murray Rothbard took an explicitly Kuhnian approach to his history of economic thought, agreeing with Kuhn that there is no linear, upward progression and condemning what he called the “Whig theory” of intellectual history.
And If You Can’t Teach, Teach Gym
| Peter Klein |
Kate Maxwell, writing at Growthology, is concerned about the distance between those who do entrepreneurship and those who teach or research entrepreneurship:
In my reading of the entrepreneurship literature I have been struck by the large gap between entrepreneurs and people who study entrepreneurship. The group of people who self-select into entrepreneurship is almost entirely disjoint from the group of people who self-select to study it. Such a gap exists in other fields to greater and lesser degrees. Sociologists, for instance, study phenomena in which they are clearly participants, whereas political scientists are rarely career politicians but are often actors in political systems.
But in the case of entrepreneurship the gap is cause for concern. My sense is that all too often those studying entrepreneurship don’t understand, even through exposure, the messy process of creating a business, nor, due to selection effects, are they naturally inclined to think like an entrepreneur might.
I agree entirely with this description, but am not sure I understand the concern. Kate seems to assume a particular concept of entrepreneurship — the day-to-day mechanics of starting and growing a business — that applies only to a fraction of the entrepreneurship literature. Surely one can study the effects of entrepreneurship on economic outcomes like growth and industry structure without “thinking like an entrepreneur.” Same for antecedents to entrepreneurship such as the legal and political environment, social and cultural norms, the behavior of universities, etc. Even more so, if we treat entrepreneurship as an economic function (alertness, innovation, adaptation, or judgment) rather than an employment category or a firm type, then solid training in economics and related disciplines seems the main prerequisite for doing good research.
Of course, this doesn’t mean that entrepreneurship scholars shouldn’t talk to entrepreneurs or study their lives and work. Want to know how it feels to throw the winning Super Bowl pass? Ask Tom Brady or Eli Manning. The stat sheet won’t tell you that. But this doesn’t mean that only ex-NFL players can be competent announcers, analysts, sportswriters, etc. Similarly, I like to read about food, and have enjoyed the recent memoirs of great chefs like Jacques Pépin and Julia Child. These first-hand accounts are full of unique insights and colorful observations. But there are plenty of great books on the restaurant industry, on the relationship between food and culture, on culinary innovation, etc., by authors who couldn’t cook their way out of a paper bag.
What do you think?
Capabilities and Organizational Economics Once More
| Nicolai Foss |
As readers of this blog will know, the dialogue between the firm capabilities literature and organizational economics has a long history in management research and economics. Co-blogger Dick Langlois has been an important contributor in this space. The forty-year discussion (dating it from George B. Richardson’s 1972 hint that his newly coined notion of capability is complementary to Coasian transaction cost analysis) has proceeded through several stages. The initial wave of capabilities theory (i.e., beginning to mid-1990s) was strongly critical of organizational economics. This gave way to a recognition that perhaps the two perspectives were complementary in a more additive manner: whereas capabilities theory provided insight into which assets firms need to access to compete successfully, organizational economics provided insight into how such access is contractually organized. Increasingly, however, work has stressed deeper relations of complementarity: capabilities mechanisms are intertwined with the explanatory mechanisms identified by organizational economists.
In a paper, “The Organizational Economics of Organizational Capability and Heterogeneity: A Research Agenda,” that is forthcoming as the Introduction to a special issue of Organization Science on the relation between capabilities and organizational economics ideas, Nick Argyres, Teppo Felin, Todd Zenger and I argue, however, that the discussion has been lopsided — hardly qualifying as a real debate — and that a reorientation is necessary. Specifically, the terms of the discussion have largely been defined by capabilities theorists. Part of the explanation for this dominance is that capability theorists have had a rhetorical advantage, because everyone seems to have accepted that organizational economics has very little to say about organizational heterogeneity. We argue that this rests on a misreading of organizational economics: while it is true that organizational economics was not (directly) designed to address and explain organizational heterogeneity, this does not imply that the theory is and must remain silent about such heterogeneity. In fact, we discuss a number of ways in which organizational economics is quite centrally focused on explaining organizational heterogeneity. In particular, we argue that organizational economics provides guidance on how organizational design and boundaries facilitate the formation of knowledge, insight, and learning that are central to the heterogeneity of firms. We also demonstrate how efficient governance can itself be a source of competitive heterogeneity. We thus call on organizational economists to actively and vigorously enter the discussion, turning something closer to a monologue into real dialogue. (more…)
Economists, (Hard) Data, and (Soft) Data
| Nicolai Foss |
Economists have typically been suspicious of data generated by (mail, telephone) surveys and interviews, and have idolized register data. The former are soft and mushy data; the latter are hard and serious ones. I have always been a bit skeptical about whether the traditional economist’s suspicion of soft data is really that well founded; after all, the statistical agencies of the world and other government institutions in the business of data collection are populated by fallible individuals, and their respondents are the same ones who respond to, say, a mail survey conducted by Prof. N. J. Foss, PhD. (Having recently conducted a major data collection effort with a public statistical agency, my skepticism has dramatically increased!)
The argument is sometimes made that there may be a legal duty to respond to the queries of a government agency, and that this ensures a high response rate and accurate reporting. However, it appears that we know rather little about the accuracy of data generated in this way, and it is quite conceivable that measurement error is high, exactly because the provision of data is “forced” (those anarcho-capitalist types out there may delight in providing erroneous data!). The serious content of the traditional economist’s prejudice is rather, I think, that surveys often have respondents reacting to subjective scales rather than providing absolute numbers. This is a warranted concern, but not a critique of surveys and interviews per se, because these methods do not imply a commitment to subjective scales.
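As an aside, the textbook worry about noisy measures, classical measurement error, is easy to make concrete. A minimal simulation with entirely made-up numbers shows how noise in the regressor biases the estimated slope toward zero (attenuation):

```python
# Classical measurement error: noise in the regressor biases the OLS
# slope toward zero (attenuation). Purely synthetic numbers.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x_true = rng.normal(size=n)
y = 1.0 * x_true + rng.normal(scale=0.5, size=n)    # true slope = 1

for noise_sd in (0.0, 0.5, 1.0):                    # increasing "survey" noise
    x_obs = x_true + rng.normal(scale=noise_sd, size=n)
    c = np.cov(x_obs, y)
    print(noise_sd, round(c[0, 1] / c[0, 0], 2))    # ~1.00, ~0.80, ~0.50
```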
As a rule, register data that could be used to address numerous interesting issues in organizational economics, labor economics, productivity research, and so on simply are not available. Scholars working on these issues have to resort to those softy surveys and interviews that have been the workhorses of business school faculty for decades. This is a new recognition in economics. Case in point: a recent paper by Nicholas Bloom and John Van Reenen, “New approaches to surveying organizations.” There is absolutely nothing, I submit, in this short, well-written paper that would surprise any empirically oriented business school professor (i.e., virtually all b-school professors); to them, none of it is “new” at all, but rather old hat.
This is not a critique of Profs. Bloom and Van Reenen at all (on the contrary, it is excellent that they educate their economist colleagues in this way). It is just striking, and a little bit amusing, that we had to wait until 2010 for empirical approaches that have been mainstream in management research for decades to reach the pages of the American Economic Review.
Hayek on Schumpeter on Methodological Individualism
| Peter Klein |
Our QOTD comes from the 2002 version of Hayek’s “Competition as a Discovery Procedure.” (Thanks to REW for the inspiration.) Hayek delivered two versions of the lecture, both in 1968, one in English and one in German. The former appeared in Hayek’s 1978 collection New Studies in Philosophy, Politics, Economics, and the History of Ideas, and is the version most familiar to English-speaking scholars. In 2002 the QJAE published a new English translation of the German version which includes two sections (II and VII) omitted from the earlier English version. In this passage from section II Hayek distinguishes macroeconomics (“macrotheory”) from microeconomics (“microtheory”):
About many important conditions we have only statistical information rather than data regarding changes in the fine structure. Macrotheory then often affords approximate values or, probably, predictions that we are unable to obtain in any other way. It might often be worthwhile, for example, to base our reasoning on the assumption that an increase of aggregate demand will in general lead to a greater increase in investment, although we know that under certain circumstances the opposite will be the case. These theorems of macrotheory are certainly valuable as rules of thumb for generating predictions in the presence of insufficient information. But they are not only not more scientific than is microtheory; in a strict sense they do not have the character of scientific theories at all.
In this regard I must confess that I still sympathize more with the views of the young Schumpeter than with those of the elder, the latter being responsible to so great an extent for the rise of macrotheory. Exactly 60 years ago, in his brilliant first publication, a few pages after having introduced the concept of “methodological individualism” to designate the method of economic theory, he wrote:
If one erects the edifice of our theory uninfluenced by prejudices and outside demands, one does not encounter these concepts [namely “national income,” “national wealth,” “social capital”] at all. Thus we will not be further concerned with them. If we wanted to do so, however, we would see how greatly they are afflicted with obscurities and difficulties, and how closely they are associated with numerous false notions, without yielding a single truly valuable theorem.
The reference is to Schumpeter’s 1908 book, Das Wesen und der Hauptinhalt der theoretischen Nationalökonomie which, to my knowledge, has never been translated (though an excerpt, and some commentary, are here). For more on the different versions of Hayek’s essay see here and here.
NB: Krugman blogged over the weekend about microfoundations, offering a remarkably (sic) shallow and misguided critique based on what Hayek would call the scientistic fallacy. E.g.: “meteorologists were using concepts like cold and warm fronts long before they had computational weather models, because those concepts seemed to make sense and to work. Why, then, do some economists think that concepts like the IS curve or the multiplier are illegitimate because they aren’t necessarily grounded in optimization from the ground up?” Ugh.
Hoisted from the Comments: Journal Impact Factors
| Peter Klein |
An old post of Nicolai’s on journal impact factors is still attracting attention. Two recent comments are reproduced here so they don’t get buried in the comments.
Bruce writes:
There is an interesting paper by Joel Baum on this, “Free Riding on Power Laws: Questioning the Validity of the Impact Factor as a Measure of Research Quality in Organization Studies,” Organization 18(4) (2011): 449-66. He does a nice analysis of citations, and shows (what many of us suspected) that citations are highly skewed to a small subset of articles, so the idea of an impact factor which is based on a mean citation rate is erroneous. He concludes that “Impact Factor has little credibility as a proxy for the quality of either organization studies journals or the articles they publish, resulting in attributions of journal or article quality that are incorrect as much or more than half the time. The clear implication is that we need to cease our reliance on such non-scientific, quantitative characterisation to evaluate the quality of our work.”
To which Ram Mudambi responds:
This analysis was already done in a paper we wrote in 2005, finally now published in Scientometrics.
We have the further and stronger result that in many years the top 10 percent of papers in A-minus journals like Research Policy outperform the top 10 percent of papers in A journals like AMJ.
So it is the paper that matters, NOT the journal in which it was published. Evaluating scholars on the basis of where they have published is pretty meaningless. Some years ago, we had a senior job candidate with EIGHTEEN real “A” publications — it turned out he had only 118 total citations on Google scholar. So his work was pretty trivial, even though it appeared in top journals.
See also the good twin blog for further discussion.
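Baum’s skewness point, from Bruce’s comment above, is easy to see in a quick simulation. The sketch below uses a stylized power-law distribution, not Baum’s data; the point is simply that a journal-level mean, which is the impact factor’s logic, describes almost no individual article:

```python
# Stylized version of Baum's point: with power-law citations, the journal
# mean (the impact-factor logic) describes almost no individual article.
import numpy as np

rng = np.random.default_rng(3)
citations = (rng.pareto(1.5, size=2_000) * 5).astype(int)   # one "journal"

print("mean:", round(citations.mean(), 1))                  # pulled up by the tail
print("median:", int(np.median(citations)))                 # the typical article
print("share below the mean:",
      round((citations < citations.mean()).mean(), 2))      # typically 0.75+
```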
Perceptions of Opportunities – Part 2
| Peter Lewin |
The second review article in the latest issue of AMR, by Venkataraman, Sarasvathy, Dew, and Forster (VSDF), is more ambitious than the first, by Shane, discussed in Part 1. In fact, one might describe the ambition motivating the article as grandiose. VSDF “seek to recast entrepreneurship as a science of the artificial,” an entirely new way of looking at entrepreneurship in the interest of uncovering (what I take to be universal) principles that can serve as the basis of a new empirical and policy-useful science of entrepreneurship. [I see this article as a companion piece to the article by Sarasvathy and Venkataraman (SV) in ET&P, January 2011, in which this grandiose vision is even more apparent.]
The science of the artificial (supposedly a category of science distinct from natural or social science) is derived from the work of Herbert Simon (1996).
As a theory develops it splits into two streams: (1) “basic” research that continues to refine the causal explanations and (2) “applied” research that seeks to alter the variables of explanation. At that point the phenomenon of interest has become an artifact. …
A science of the artificial is interested in phenomena that can be designed [and controlled]. … Design lies in the choice of the boundary values; control lies in the means to change them. (24)
So a useful theory is itself an artifact, something that can be used to understand and (importantly) control aspects of the (social) world. And, I suppose, the new science of entrepreneurship will eventually develop such artifacts. [At the end of the article they talk about “recasting opportunities as artifacts” – so I am not sure how this is all connected.]
My lack of expertise regarding the work of Herbert Simon (something which I am now more encouraged to remedy) prevents me from pronouncing with confidence on this part of the article. Suffice it to say that the meaning and contribution of this new “science of the artificial” is far from clear to me. I am left with a feeling that if it is indeed such an important and path-breaking meta-scientific turn, the authors should be able to explain it better. It should be more accessible and transparent. I am left highly skeptical, but I urge readers of this post to read the article and perhaps enlighten me and others. (more…)
Reference Bloat
| Peter Klein |
Nature News (via Bronwyn Hall):
One in five academics in a variety of social science and business fields say they have been asked to pad their papers with superfluous references in order to get published. The figures, from a survey published today in Science, also suggest that journal editors strategically target junior faculty, who in turn were more willing to acquiesce.
I think reference bloat is a problem, particularly in management journals (not so much in economics journals). Too many papers include tedious lists of references supporting even trivial or obvious points. It’s a bit like blog entries that ritually link every technical term or proper noun to its corresponding Wikipedia entry. “Firms seek to position themselves and acquire resources to achieve competitive advantage (Porter, 1980; Wernerfelt, 1984; Barney, 1986).” Unless the reference is non-obvious, narrowly linked to a specific argument, etc., why include it? Readers can do their own Google Scholar searches if needed.
In management this strikes me as a cultural issue, not necessarily the result of editors or reviewers wanting to build up their own citation counts. But I’d be curious to hear about readers’ experiences, either as authors or (confession time!) as editors or reviewers.
Interdisciplinarity Chart of the Day
| Peter Klein |
This is from a study of economics PhD dissertations at one French university, the EHESS (École des hautes études en sciences sociales).
In the 1960s, three-fourths of economics PhD dissertation committee members were from another discipline, and in the 1990s, less than 15 percent. Other disciplines have also become more self-reliant, but in much less dramatic fashion.
The paper, “The Mainstreaming of French Economics,” by Olivier Godechot, is here; the pointer goes to Art Goldhammer. The paper focuses on the transformation of the French profession led by US-trained or -oriented economists such as Jacques Mairesse, Jean-Jacques Laffont, and Robert Boyer. Godechot concludes that “scientific life in general and, moreover, paradigmatic change are not only a question of truth, of evidences, and of proofs but also of politics. Evaluating, influencing, building coalitions, voting, and selecting are regular practices both within disciplines and in wider interdisciplinary arenas when articles are submitted, grants are distributed (Lamont, 2009), positions are opened (Musselin, 2005), and candidates are selected.” Right on that.
“Illusions in Regression Analysis”
| Peter Klein |
Apropos Lasse’s post, check out Scott Armstrong’s “Illusions in Regression Analysis,” via Craig Newmark, who highlights passages like this:
This illusion [that correlation implies causality] has led people to make poor decisions about such things as what to eat (e.g., coffee, once bad, is now good for health), what medical procedures to use (e.g., the frequently recommended PSA test for prostate cancer has now been shown to be harmful), and what economic policies the government should adopt in recessions (e.g., trusting the government to be more efficient than the market).
And this:
Do not use regression to search for causal relationships. And do not try to predict by using variables that were not specified in the a priori analysis. Thus, avoid data mining, stepwise regression, and related methods.
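Armstrong’s last warning is easy to demonstrate for yourself. In the toy sketch below (entirely synthetic, nothing from Armstrong’s paper), a stepwise-style search that keeps the most correlated of fifty pure-noise predictors of a pure-noise outcome reliably manufactures “significant” coefficients:

```python
# Toy demonstration of the data-mining illusion: pick the most correlated
# of 50 pure-noise predictors for a pure-noise outcome, and OLS happily
# reports "significant" effects where none exist.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, k = 100, 50
X = rng.normal(size=(n, k))     # 50 candidate predictors: all noise
y = rng.normal(size=n)          # the outcome: also pure noise

corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(k)])
best = np.argsort(corr)[-3:]    # the stepwise shortcut: keep the "top 3"

res = sm.OLS(y, sm.add_constant(X[:, best])).fit()
print(res.pvalues[1:])          # often below 0.05, purely by construction
```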
Too Freaky
| Peter Klein |
We’ve been somewhat critical on this blog of the Freakonomics approach, but not as critical as Andrew Gelman. Here’s his latest (with Kaiser Fung) in the American Scientist:
On the heels of Freakonomics, the pop-economics or pop-statistics genre has attracted a surge of interest, with more authors adopting an anecdotal, narrative style.
As the authors of statistics-themed books for general audiences, we can attest that Levitt and Dubner’s success is not easily attained. And as teachers of statistics, we recognize the challenge of creating interest in the subject without resorting to clichéd examples such as baseball averages, movie grosses and political polls. The other side of this challenge, though, is presenting ideas in interesting ways without oversimplifying them or misleading readers. We and others have noted a discouraging tendency in the Freakonomics body of work to present speculative or even erroneous claims with an air of certainty. Considering such problems yields useful lessons for those who wish to popularize statistical ideas.
Here’s some additional commentary from Andrew.
My unease with Freakonomics is not its anecdotal, narrative style, but the emphasis on clever puzzles rather than substantive problems, over-reliance on weird instrumental variables, and belief that one can tackle almost any phenomenon with only the barest knowledge of its history and prior literature. Economic theory is indeed quite general and powerful, but not to be thrown around willy-nilly. After all, with great power comes great responsibility.
Theory Construction Bleg
| Peter Klein |
A friend writes:
I am trying to improve the theory writing skills of my doctoral students. . . . [In my field] we don’t often build complicated mathematical models; our theory tends to be more storytelling. But nevertheless there is good and bad theory. I have found some papers that discuss how to write theory and what constitutes a theoretical contribution. But I really would like for you to recommend a book on the theory of theory construction. I want to assign chapters from it to my students as well as learn something myself. Since the principles of theory construction are generic, I don’t care what literature the author comes from. The insights will be useful regardless.
What would you suggest?