Posts filed under ‘Methods/Methodology/Theory of Science’
| Peter Klein |
We’ve previously discussed attempts to blame the accounting scandals of the early 2000s on the teaching of transaction cost economics and agency theory. By describing the hazards of opportunistic behavior and shirking, professors were allegedly encouraging students to be opportunistic and to shirk. Then we were told that business schools teach “a particular brand of free-market ideology” — the view that “the market always ‘gets prices right’” and “[a]n individual’s worth can be reduced to one’s worth in the market” — and that this ideology was partly responsible for the financial crisis. (My initial reaction: Where do I sign up for these courses?!)
The Guardian now reports on a movement in the UK to address “the crisis in economics teaching, which critics say has remained largely unchanged since the 2008 financial crash despite the failure of many in the profession to spot the looming credit crunch and worst recession for 100 years.” If you think this refers to a movement to discredit orthodox Keynesianism, which dominates monetary theory and practice in all countries, and its view that discretionary fiscal and (especially) monetary policy are needed to steer the economy on a smooth course, with particular attention to asset markets where prices must be rising at all times, you’d be wrong. No, the reformers are calling for “economics courses to embrace the teachings of Marx and Keynes to undermine the dominance of neoclassical free-market theories.” To their credit, the reformers appear also to want more attention to economic history and the history of economic thought, which is all to the good. But the reformers’ basic premise seems to be that mainstream economics is too friendly toward the free market, and that this has left students unprepared to understand the “post-2008” world.
To a non-Keynesian and non-Marxian like me, these arguments seem to come from a bizarro world where the sky is green, water runs uphill, and Janet Yellen is seven feet tall. It’s true that most economists reject economy-wide central planning, but the vast majority endorse some version of Keynesian economic policy complete with activist fiscal and monetary interventions, substantial regulation of markets (especially financial markets), fiat money under the control of a central bank, social policy to encourage home ownership, and all the rest. We’ve pointed many times on this blog to research on the social and political views of economists, who lean “left” by a ratio of about 2.5 to 1 — yes, nothing like the sociologists’ zillion to 1, but hardly evidence for a rigid, free-market orthodoxy. I note that the reformers described in the Guardian piece never, ever offer any kind of empirical evidence on the views of economists, the content of economics courses, or the influence of economics courses on economic policy. They simply assert that they don’t like this or that economic theory or pedagogy, which somehow contributed to this or that economic problem. They seem blissfully unaware of the possibility that their own policy preferences might actually be favored in the textbooks and classrooms, and might have just a teeny bit to do with bad economic policies.
I’m reminded of Sheldon Richman’s pithy summary: “No matter how much the government controls the economic system, any problem will be blamed on whatever small zone of freedom that remains.”
| Dick Langlois |
Surprisingly, the following passage is not from O’Driscoll and Rizzo (1985). It is the abstract of a new paper by Brian Arthur called “Complexity Economics: A Different Framework for Economic Thought.”
This paper provides a logical framework for complexity economics. Complexity economics builds from the proposition that the economy is not necessarily in equilibrium: economic agents (firms, consumers, investors) constantly change their actions and strategies in response to the outcome they mutually create. This further changes the outcome, which requires them to adjust afresh. Agents thus live in a world where their beliefs and strategies are constantly being “tested” for survival within an outcome or “ecology” these beliefs and strategies together create. Economics has largely avoided this nonequilibrium view in the past, but if we allow it, we see patterns or phenomena not visible to equilibrium analysis. These emerge probabilistically, last for some time and dissipate, and they correspond to complex structures in other fields. We also see the economy not as something given and existing but forming from a constantly developing set of technological innovations, institutions, and arrangements that draw forth further innovations, institutions and arrangements. Complexity economics sees the economy as in motion, perpetually “computing” itself— perpetually constructing itself anew. Where equilibrium economics emphasizes order, determinacy, deduction, and stasis, complexity economics emphasizes contingency, indeterminacy, sense-making, and openness to change. In this framework time, in the sense of real historical time, becomes important, and a solution is no longer necessarily a set of mathematical conditions but a pattern, a set of emergent phenomena, a set of changes that may induce further changes, a set of existing entities creating novel entities. Equilibrium economics is a special case of nonequilibrium and hence complexity economics, therefore complexity economics is economics done in a more general way. It shows us an economy perpetually inventing itself, creating novel structures and possibilities for exploitation, and perpetually open to response.
Arthur does acknowledge that people like Marshall, Veblen, Schumpeter, Hayek, and Shackle have had much to say about exactly these issues. “But the thinking was largely history-specific, particular, case-based, and intuitive—in a word, literary—and therefore held to be beyond the reach of generalizable reasoning, so in time what had come to be called political economy became pushed to the side, acknowledged as practical and useful but not always respected.” So what Arthur has in mind is a mathematical theory, no doubt a form of what Roger Koppl – who is cited obscurely in a footnote – calls BRACE Economics.
| Peter Klein |
Via John Hagel, a story on MOOR — Massively Open Online Research. A UC San Diego computer science and engineering professor is teaching a MOOC (massively open online course) that includes a research component. “All students who sign up for the course will be given an opportunity to work on specific research projects under the leadership of prominent bioinformatics scientists from different countries, who have agreed to interact and mentor their respective teams.” The idea of crowdsourcing research isn’t completely new, but this particular blend of MOO-ish teaching and research constitutes an interesting experiment (see also this). The MOO model is gaining some traction in the scientific publishing world as well.
| Peter Klein |
Further to my previous post on misplaced confidence, here is Robert Pindyck on one of the critical tools used by climate scientists.
Climate Change Policy: What Do the Models Tell Us?
Robert S. Pindyck
NBER Working Paper No. 19244, July 2013
Very little. A plethora of integrated assessment models (IAMs) have been constructed and used to estimate the social cost of carbon (SCC) and evaluate alternative abatement policies. These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g. the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models’ descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.
Thanks to Bob Murphy for the pointer.
| Peter Klein |
This article on climate science skeptics is making the rounds, and drawing the expected denouncements in the usual quarters. It actually makes some reasonable, and quite mild, statements, namely that climate science, like astronomy or evolutionary biology, is a different kind of “science” than physics or chemistry or geology or other mundane sciences. In the former fields, the fundamental assumptions and key mechanisms are usually not falsifiable, the data are often fuzzier than usual, and there is frequently a lot of hand-waving to fill in gaps.
Many climate sceptics worry climate science cannot be dubbed scientific as it is not falsifiable (as in Popper’s demarcation criterion). They claim that while elements of climate science may be testable in the lab, the complexity of interactions and feedback loops, as well as the levels of uncertainty in climate models, are too high to be a useful basis for public policy. The relationship of observations to these models is also a worry for climate sceptics. In particular, the role of climate sensitivity.
As well as their use of models, the quality of the observations themselves has been open to criticism; some of which have been attempts to clean up issues deriving from the messiness of data collection in the real world (eg the positioning of weather stations), while others have focused on perceived weaknesses in the proxy methods required to calculate historic temperature data such as cross-sections of polar ice sheets and fossilised tree rings.
Such claims are of variable quality, but what unites them is a conviction that data quality in various branches of climate science is below that required by “real science”.
What strikes me the most about these “big” sciences is the language and tone typically used to communicate the results to the public. Where scientists in mundane fields express their conclusions cautiously, emphasizing that results and conclusions are tentative and always subject to challenge and revision, climate scientists seem to view themselves as Brave Crusaders for Truth, striking down “Deniers” (who must be funded by the oil industry or some other evil group). They shout that we “know” this or that about climate change, what the planet will be like in 5,000 years, etc. You hardly ever hear other scientists talk like this, or act as if skeptics are necessarily prejudiced and irrational.
| Peter Klein |
O&Mers attending the AoM conference may find these Professional Development Workshops, sponsored by the Academy of Management Perspectives and based on recent AMP symposia, of particular interest:
The first PDW is on “Private Equity” and features presentations on the managerial, strategic, and public policy implications of private equity transactions. Presenters include Robert Hoskisson (Rice University), Nick Bacon (City University London), Mike Wright (Imperial College London), and Peter Klein (University of Missouri). The private equity session takes place Saturday, Aug 10, 2013 from 11:30AM – 12:30PM at WDW Dolphin Resort in Oceanic 5.
The second is on “Microfoundations of Management,” and features presentations from Nicolai Foss (Copenhagen Business School), Henrich Greve (INSEAD), Sidney Winter (Wharton), Jay Barney (Utah), Teppo Felin (Oxford), Andrew Van de Ven (Minnesota), and Arik Lifschitz (Minnesota). The microfoundations session takes place Monday, Aug 12, 2013 from 9:00AM – 10:30AM at WDW Dolphin Resort in Oceanic 5.
| Peter Klein |
Famed sociologist Robert Putnam makes his case for government funding of social science research:
One of the harshest critics of National Science Foundation funding of political science has even praised my study [on civil society and democracy] as “one of the most influential pieces of practical research in the last half-century.”
Ironically, however, if the recent amendment by Sen. Tom Coburn (R-Okla.) that restricts NSF funding for political science had been in effect when I began this research, it never would have gotten off the ground since the foundational grant that made this project possible came from the NSF Political Science Program.
Well, yes, if it hadn’t been for NASA, we wouldn’t have put a man on the moon. What this shows about the average or marginal productivity of government science funding is a little unclear to me.
Of course, Putnam’s piece is a short editorial making an emotional, rather than logical, appeal. But this kind of appeal seems to be all the political scientists have offered in response to the hated Coburn Amendment.
| Peter Klein |
We noted before the Taylorite quality of many great restaurant kitchens. From Pierre Azoulay we learn that scientific laboratories are also sometimes organized as rigid hierarchies, presided over by an autocratic PI. (The key reference is Pasteur.) Pierre suggests a sorting between PI and researcher characteristics so that labs run by autocrats are about as productive as labs run by softies. Probably the same is true in many groups. This reminds us that the Demsetz-Lehn critique applies to lots of work in management. If there is competition among organizational forms, and heterogeneity among individuals, then we shouldn’t expect one form to outperform the others, on average — a lesson often forgotten in empirical management research.
| Lasse Lien |
From Scott Masten we received this classic gem:
A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables.
The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.
With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.
Are these findings specific to the 80s and psychology? Care to replicate?
| Peter Klein |
Have you noticed that when you search for a person on Google, the sidebar shows you other linked people searches (“People also search for”)? E.g., if you search for yours truly, it pulls up Nicolai Foss, Joe Salerno, Bob Murphy, and Israel Kirzner. I’m not sure how the algorithm works; is it the likelihood these searches are combined, or searched in sequence, or does it have to do with cross-links in search results? Anyway, it’s interesting to see who Google thinks is related to whom. For instance:
Peter G. Klein ==> Nicolai Foss, Joseph Salerno, Robert P. Murphy, Israel Kirzner
Nicolai Foss ==> Peter G. Klein, Edith Penrose, Israel Kirzner, Oliver E. Williamson
Oliver E. Williamson ==> Ronald Coase, Elinor Ostrom, Douglass North, Armen Alchian
Murray N. Rothbard ==> Ludwig von Mises, Friedrich Hayek, Frédéric Bastiat, Henry Hazlitt
Paul Krugman ==> John Law, P. T. Barnum, Charles Ponzi, Beelzebub
| Peter Klein |
The current issue of Nature features a special section on “The Future of Publishing” (thanks to Jason for the tip). The lead editorial discusses the results of a survey of scientists which shows, perhaps surprisingly, that support for online, open-access publishing is lukewarm. It’s not just the commercial publishers who want to maintain the paywalls. The entire issue is filled with interesting stuff, so check it out.
| Lasse Lien |
I recently attended a presentation by the great social scientist Jon Elster, in which he lamented the state of affairs in social science. Elster has – quite nicely, IMHO – coined the terms hard and soft obscurantism as the main problems. To Elster, obscurantism generally refers to endeavors that are unlikely to produce anything of value, and where this can be predicted in advance. This in contrast to more honorable failures, where a plausible hypothesis turns out to be wrong, leaving much effort without much value.
Soft obscurantism is exactly what it sounds like: unfalsifiable, impenetrable theories that often proudly ignore the standards for argument and evidence that elsewhere constitute the hallmark of the scientific method. Examples are postmodernism (Latour), structuralism (Lévi-Strauss), functionalism (Bourdieu, Foucault), Marxism (Badiou), and psychoanalysis.
But there is a ditch on the other side of the road too. Hard obscurantism refers to mathematical exercises without any connection to reality, useful neither as mathematics nor as social science. Another form of hard obscurantism is data mining, or misuse of fancy econometrics, or a combination of the two. Both mathematical games and econometric voodoo give the appearance of “scientificness,” but Elster doesn’t pull his punches on the value created by hard obscurantism either:
“I believe that much work in economics and political science that is inspired by rational-choice theory is devoid of any explanatory, aesthetic or mathematical interest, which means that it has no value at all”
One can of course argue about the size of the total problem and the relative size of each type (personally, I would bet on soft obscurantism as the bigger problem), but the key question is perhaps why obscurantism of either type isn’t gradually rooted out. According to Elster, their combined “market share” in the social sciences seems to be growing.
| Peter Klein |
When elite academic journals impose stricter submission requirements, authors comply. When lower-ranked journals impose these restrictions, authors submit elsewhere. Key insight for editors: know your place.
Revealed Preferences for Journals: Evidence from Page Limits
David Card, Stefano DellaVigna
NBER Working Paper No. 18663, December 2012
Academic journals set a variety of policies that affect the supply of new manuscripts. We study the impact of page limit policies adopted by the American Economic Review (AER) in 2008 and the Journal of the European Economic Association (JEEA) in 2009 in response to a substantial increase in the length of articles in economics. We focus the analysis on the decision by potential authors to either shorten a longer manuscript in response to the page limit, or submit to another journal. For the AER we find little indication of a loss of longer papers – instead, authors responded by shortening the text and reformatting their papers. For JEEA, in contrast, we estimate that the page length policy led to nearly complete loss of longer manuscripts. These findings provide a revealed-preference measure of competition between journals and indicate that a top-5 journal has substantial monopoly power over submissions, unlike a journal one notch below. At both journals we find that longer papers were more likely to receive a revise and resubmit verdict prior to page limits, suggesting that the loss of longer papers may have had a detrimental effect on quality at JEEA. Despite a modest impact of the AER’s policy on the average length of submissions (-5%), the policy had little or no effect on the length of final accepted manuscripts. Our results highlight the importance of evaluating editorial policies.
| Peter Klein |
We haven’t been entirely kind to behavioral economics, but we certainly recognize its importance, and have urged our colleagues to keep up with the latest arguments and findings. A new NBER paper by Nicholas Barberis summarizes the literature, focusing on prospect theory, and is worth a read.
Thirty Years of Prospect Theory in Economics: A Review and Assessment
Nicholas C. Barberis
NBER Working Paper No. 18621, December 2012
Prospect theory, first described in a 1979 paper by Daniel Kahneman and Amos Tversky, is widely viewed as the best available description of how people evaluate risk in experimental settings. While the theory contains many remarkable insights, economists have found it challenging to apply these insights, and it is only recently that there has been real progress in doing so. In this paper, after first reviewing prospect theory and the difficulties inherent in applying it, I discuss some of this recent work. While it is too early to declare this research effort an unqualified success, the rapid progress of the last decade makes me optimistic that at least some of the insights of prospect theory will eventually find a permanent and significant place in mainstream economic analysis.
| Peter Klein |
Hayek defined “scientism” or the “scientistic prejudice” as “slavish imitation of the method and language of Science” when applied to the social sciences, history, management, etc. Scientism represents “a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed,” and as such is “not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it.” (Hayek’s Economica essays on scientism were collected in his 1952 Counter-Revolution of Science and reprinted in volume 13 of the Collected Works.)
Austin L. Hughes has a thoughtful essay on scientism in the current issue of the New Atlantis (HT: Barry Arrington). Hughes thinks “the reach of scientism exceeds its grasp.” The essay is worth a careful read — he misses Hayek but discusses Popper and other important critics. One focus is the “institutional” definition of science, defined with the trite phrase “science is what scientists do.” Here’s Hughes:
The fundamental problem raised by the identification of “good science” with “institutional science” is that it assumes the practitioners of science to be inherently exempt, at least in the long term, from the corrupting influences that affect all other human practices and institutions. Ladyman, Ross, and Spurrett explicitly state that most human institutions, including “governments, political parties, churches, firms, NGOs, ethnic associations, families … are hardly epistemically reliable at all.” However, “our grounding assumption is that the specific institutional processes of science have inductively established peculiar epistemic reliability.” This assumption is at best naïve and at worst dangerous. If any human institution is held to be exempt from the petty, self-serving, and corrupting motivations that plague us all, the result will almost inevitably be the creation of a priestly caste demanding adulation and required to answer to no one but itself.
| Lasse Lien |
Here’s a link to the “online first” version of a new Org. Science paper by Peter and myself. This one has been in the pipeline for some time, and we’ve blogged about the WP version before, but this is the final and substantially upgraded version. Please read it and cite it, or we will be forced to kidnap your cat:
The survivor principle holds that the competitive process weeds out inefficient firms, so that hypotheses about efficient behavior can be tested by observing what firms actually do. This principle underlies a large body of empirical work in strategy, economics, and management. But do competitive markets really select for efficient behavior? Is the survivor principle reliable? We evaluate the survivor principle in the context of corporate diversification, asking if survivor-based measures of interindustry relatedness are good predictors of firms’ decisions to exit particular lines of business, controlling for other firm and industry characteristics that affect firms’ portfolio choices. We find strong, robust evidence that survivor-based relatedness is an important determinant of exit. This empirical regularity is consistent with an efficiency rationale for firm-level diversification, though we cannot rule out alternative explanations based on firms’ desire for legitimacy by imitation and attempts to temper multimarket competition.
| Peter Klein |
Ronald Coase has a short piece in the December 2012 Harvard Business Review, “Saving Economics from the Economists” (thanks to Geoff Manne for the tip). Not bad for a fellow about to turn 102! I always learn from Coase, even when I don’t fully agree. Here Coase decries the irrelevance of contemporary economic theory, condemning economics for “giving up the real-world economy as its subject matter.” He also provides a killer quote: “Economics thus becomes a convenient instrument the state uses to manage the economy, rather than a tool the public turns to for enlightenment about how the economy operates.”
I’m sure that’s true for many economists and for some branches of the field, such as Keynesian macroeconomics. But Coase seems to reject economic theorizing altogether, even the “causal-realist” approach popular in these parts. To be useful, he argues, economics should provide practical guidance for the businessperson. However, “[s]ince economics offers little in the way of practical insight, managers and entrepreneurs depend on their own business acumen, personal judgment, and rules of thumb in making decisions.”
Well, that sounds about right to me. Economics provides general principles, or laws, about human action and interaction, mostly stated as “if-then” propositions. Applying the principles to concrete, historical cases requires Verstehen, and is the task of economic historians (as analysts) and entrepreneurs (as actors), not economic theorists. Deductive theory does not replace judgment. Without deductive theory, however, we’d have no principles to apply, and nothing to contribute to our understanding of the economy except — to quote Coase’s own critique of the Old Institutionalists — “a mass of descriptive material waiting for a theory, or a fire.” To be sure, Coase’s own inductive method has led to several brilliant insights. Coase himself has a knack for intuiting general principles from concrete cases (e.g., theorizing about transaction costs from observing automobile plants, or about property rights from studying the history of spectrum allocation), though not perfectly. But, as I noted before, Coase himself is probably the exception that proves the rule — namely that induction is a mess.
| Nicolai Foss |
In a SOapBox Essay in 2005, Teppo Felin and I called for “micro-foundations” for macro management theory, specifically the dominant routines and capabilities (etc.) stream in strategic management. (check Teppo’s site for the paper, commentaries by Jay Barney and Bruce Kogut, and various other Felin & Foss papers on the subject). We thought our argument was fairly simple, not really that novel (economists have been talking about micro-foundations for decades), and “obviously true.” Yet, the argument was apparently provocative (or, perhaps more correctly, our formulation of it was…), and it met with considerable hostility. For example, the DRUID 2008 conference in Copenhagen featured a panel on micro-foundations with opposing sides represented by Sidney Winter and Thorbjørn Knudsen, and Peter Abell and yours truly, respectively. I remember seeing several (extremely) prominent management scholars shaking their heads in disbelief about the folly of micro-foundations. (The debate, though not the head-shaking, can be accessed through the DRUID site).
And yet, 7 years later the micro-foundations project appears to have met with general acceptance, although it is sometimes referred to as the “Foss Fuss” by at least one very prominent contributor to our field. In fact, some of the head-shaking persons from DRUID 2008 now themselves talk about micro-foundations. Both Sid Winter and Thorbjørn Knudsen (not headshakers) now embrace micro-foundations, albeit of the “right” kind (e.g., behavioralist and informed by neuroscience and experiments). Papers in leading journals have “micro-foundations” in the title. Specific examples:
- The Journal of Management Studies just published a special issue on “Micro-origins of Routines and Capabilities,” edited by Teppo, me, Koen Heimeriks, and Tammy Madsen, and featuring contributions by various luminaries.
- The European Management Review’s December issue (not yet online) will feature a transcribed exchange between Sid Winter, me and Maurizio Zollo on micro-foundations.
- A leading association in our field will adopt “micro-foundations” as the theme of one of its conferences (to be held in 2014). Details to be disclosed (soon).
Micro-foundations are “everywhere.” List der Vernunft, I reckon.
UPDATE: The Academy of Management Perspectives will feature a paper symposium next year on micro-foundations. Contributors: Jay Barney, Teppo Felin, Henrich Greve, Siegwart Lindenberg, Andrew van de Ven, Sid Winter, and me.
| Peter Klein |
A new Milken Institute report purports to show that “[t]he benefit from every dollar invested by National Institutes of Health (NIH) outweighs the cost by many times. When we consider the economic benefits realized as a result of decrease in mortality and morbidity of all other diseases, the direct and indirect effects (such as increases in work-related productivity) are phenomenal.” There are so many problems with the study I hardly know where to begin. For instance:
1. The authors measure long-term benefit to society as real GDP for the bioscience industries. This is a strange proxy. It is well known that one of the major impacts of public science funding is higher wages for science workers. It is hardly surprising that NIH funding results in higher wages and profits for those in the bioscience industry. Moreover, even if industry activity were the variable of interest, don’t we care about the composition of that activity, not the amount? Which projects were stimulated by NIH funding, and were they the right ones?
2. The results are based on a panel regression of the following equation:
Real GDP for the bioscience industries = f (employment in bioscience industry, labor skill, capital stock, real NIH funding, Industrial R&D in all industries) + state fixed effects + error term.
They interpret the coefficient on NIH funding as the causal effect of NIH funding on bioscience performance. E.g.: “Preliminary results show that the long-term effect of a $1.00 increase in NIH funding will increase the size (output) of the bioscience industry by at least $1.70.” But all the right-hand-side variables are potentially endogenous. For instance, the positive correlation between the dependent variable and NIH funding could reflect winner-picking: the NIH funds projects that are likely to be successful, with or without NIH funding. (The authors briefly mention endogeneity but dismiss it as unimportant.)
This is a version of the basic methodological flaw I attributed to the political scientists lobbying for NSF money. The issue in question — even assuming the dependent variable is a reasonable measure of social benefit — is what bioscience industry output would have been in the absence of NIH funding. (And, even more important, what would have been the direction of that activity.) Public funding could crowd out private funding, and almost certainly changes the direction of research activity, for good or ill.
3. There are a host of econometric problems, aside from endogeneity — no year fixed effects, no interactions between federal and private funds, the imposition of linear relationships, etc.
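The winner-picking concern in point 2 is easy to see in a toy simulation (all numbers here are invented; this is a sketch of the bias mechanism, not an analysis of the Milken data). Suppose funding has zero causal effect on output, but the funder selects on an unobserved project quality that also drives output. Naive OLS then attributes quality-driven output to funding:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent project quality, observed by the funding agency
# but not by the econometrician.
quality = rng.normal(size=n)

# "Winner-picking": funding flows disproportionately to high-quality projects.
funding = 0.8 * quality + rng.normal(scale=0.5, size=n)

# True causal effect of funding is ZERO; output depends only on quality.
output = 1.0 * quality + rng.normal(scale=0.5, size=n)

# Naive OLS of output on funding (with intercept).
X = np.column_stack([np.ones(n), funding])
beta = np.linalg.lstsq(X, output, rcond=None)[0]
print(f"estimated funding 'effect': {beta[1]:.2f} (true effect: 0)")
```

The regression recovers a large positive "effect" of funding purely from the selection mechanism, which is exactly why the correlational finding cannot answer the counterfactual question.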
If I’m being unfair to the authors, I hope readers will correct me. But this looks to me like another example of special pleading, not careful analysis.
| Peter Klein |
This looks like a mighty interesting conference:
Scientific theory choice is guided by judgments of simplicity, a bias frequently referred to as “Ockham’s Razor”. But what is simplicity and how, if at all, does it help science find the truth? Should we view simple theories as means for obtaining accurate predictions, as classical statisticians recommend? Or should we believe the theories themselves, as Bayesian methods seem to justify? The aim of this workshop is to re-examine the foundations of Ockham’s razor, with a firm focus on the connections, if any, between simplicity and truth.
The conference started yesterday; here’s a report on day 1 from Cosma Shalizi. Parsimony, for example, turns out to be more complicated than it appears; here is Shalizi on (recent University of Missouri visitor) Elliott Sober:
What he mostly addressed is when parsimony . . . ranks hypotheses in the same order as likelihood. . . . The conditions needed for parsimony and likelihood to agree are rather complicated and disjunctive, making parsimony seem like a mere short-cut or hack — if you think it should be matching likelihood. He was, however, clear in saying that he didn’t think hypotheses should always be evaluated in terms of likelihood alone. He ended by suggesting that “parsimony” or “simplicity” is probably many different things in many different areas of science (safe enough), and that when there is a legitimate preference for parsimony, it can be explained “reductively”, in terms of service to some more compelling goal than sheer simplicity.
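For what it’s worth, one standard “reductive” treatment of parsimony along the lines Sober suggests is the Akaike Information Criterion, which penalizes parameter count in the service of predictive accuracy rather than simplicity for its own sake. A minimal sketch (my own illustration, not taken from the workshop):

```python
import numpy as np

def aic(n, rss, k):
    """AIC for a least-squares fit with Gaussian errors:
    n*ln(RSS/n) + 2k, up to an additive constant. Lower is better."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(42)
n = 200
x = np.linspace(0, 1, n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)  # the truth is linear

scores = {}
for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 2  # polynomial coefficients plus the error variance
    scores[degree] = aic(n, rss, k)

print(scores)  # the degree-1 model typically wins: its small loss in fit
               # doesn't justify four extra parameters
```

Here parsimony is not an end in itself; the penalty term is derived from an estimate of out-of-sample predictive error, which is the “more compelling goal” doing the real work.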