Academic Journal Fakery

16 June 2008 at 9:37 am

| Peter Klein |

As computer programs make images easier than ever to manipulate, editors at a growing number of scientific publications are turning into image detectives, examining figures to test their authenticity.

And the level of tampering they find is alarming. “The magnitude of the fraud is phenomenal,” says Hany Farid, a computer-science professor at Dartmouth College who has been working with journal editors to help them detect image manipulation. Doctored images are troubling because they can mislead scientists and even derail a search for the causes and cures of disease.

Ten to 20 of the articles accepted by The Journal of Clinical Investigation each year show some evidence of tampering, and about five to 10 of those papers warrant a thorough investigation, says Ushma Neill, an editor at the journal. (The journal publishes about 300 to 350 articles per year.)

This is from the Chronicle. The problem is partly cultural, it appears. “[Y]oung researchers may not even realize that tampering with their images is inappropriate. After all, people now commonly alter digital snapshots to take red out of eyes, so why not clean up a protein image in Photoshop to make it clearer?” Says Farid: “This is one of the dirty little secrets — that everybody massages the data like this.”

I suspect that outright fraud — making up data, changing regression coefficients — is unusual in empirical social-science research. Sloppiness, ranging from data-entry errors to programming mistakes to misspecified regression models, is common. And social scientists typically “shade” results, e.g., by running fifty regressions and reporting only the one in which the signs and significance levels turn out to the researcher’s liking. (Hence the growing importance of the “robustness checks” section of any empirical paper.)
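
The fifty-regressions problem is easy to demonstrate. Here is a small simulation (Python with statsmodels; the sample sizes and variable names are mine, purely for illustration): regress pure noise on fifty unrelated regressors, one at a time, and count how many come out “significant” at the 5% level.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_obs, n_tries = 100, 50

    # The outcome is pure noise: there is no true relationship to find.
    y = rng.standard_normal(n_obs)

    significant = 0
    for _ in range(n_tries):
        x = rng.standard_normal(n_obs)             # a fresh, unrelated regressor
        fit = sm.OLS(y, sm.add_constant(x)).fit()
        if fit.pvalues[1] < 0.05:                  # slope "significant" at 5%
            significant += 1

    # With 50 independent tries we expect roughly 50 * 0.05 = 2.5 spurious
    # hits, so reporting only the best-looking run all but guarantees a
    # publishable-looking coefficient even when no relationship exists.
    print(f"{significant} of {n_tries} regressions spuriously significant")

Report only the winners, and the published literature looks far stronger than the underlying data warrant.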

Entry filed under: - Klein -, Methods/Methodology/Theory of Science.


3 Comments

  • 1. spostrel  |  16 June 2008 at 6:05 pm

    You should look at the Galison and Daston book Objectivity. They argue that the processing of images in scientific atlases has varied over time and by field, with a number of competing ideals dominating in different eras and areas.

    The one being advanced by the journals in today’s fraud hunts, mechanical objectivity, has a distinctive 19th-century genesis that Galison and Daston argue (only somewhat convincingly, in my view) stems from Kantian ideas about the power of the will and the need to overcome its biases. Another ideal, truth-to-nature, seeks to eliminate the accidental and show the essential (as is still common in botanical atlases); it scorns the keep-the-specks-on-the-lens proclivities of the mechanical ideal.

    Should an image use false color to highlight important features? Should contrast be improved? What makes an image “true” or “objective”? It seems to me that these questions can only be decided in the context of the use to which a particular image is put. Probably, disclosure of all image alterations would take care of most of the legitimate uses of these tools. “Raw” images could be archived at the journal’s website.

  • 2. Rafe  |  16 June 2008 at 7:20 pm

    “Hence the growing importance of the ‘robustness checks’ section of any empirical paper.”

    How come it is “growing”? It was always clear that you can get whatever you want if you do enough runs (though it took longer when you had to punch decks of cards). But proper research is not about getting what you want; it is about testing theories to see if they are robust.

  • 3. Peter Klein  |  17 June 2008 at 2:58 pm

    Good points, Steve, and thanks for the reference.

    Rafe, all I meant is that journal editors and reviewers seem (to me) to be placing more emphasis on robustness, i.e., checking that the paper’s general, qualitative conclusions are robust to alternative econometric specifications, changes in variable definition and sample construction, etc. I think you’re talking about a more general notion of “robustness.”
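
    Here is a minimal sketch of the kind of check I mean, with made-up data (all names and numbers are illustrative): estimate the key coefficient under a few alternative specifications and see whether its sign and significance survive.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 200
        x = rng.standard_normal(n)                   # variable of interest
        control = 0.5 * x + rng.standard_normal(n)   # a correlated control
        y = 0.8 * x + 0.3 * control + rng.standard_normal(n)

        # Baseline and an alternative specification of the same regression.
        specs = {
            "baseline":     sm.add_constant(np.column_stack([x])),
            "with control": sm.add_constant(np.column_stack([x, control])),
        }
        for name, X in specs.items():
            fit = sm.OLS(y, X).fit()
            print(f"{name:13s} beta_x={fit.params[1]:+.3f}  p={fit.pvalues[1]:.4f}")

        # A sample-construction check: drop the extreme 5% tails of y and re-run.
        keep = (y > np.quantile(y, 0.05)) & (y < np.quantile(y, 0.95))
        fit = sm.OLS(y[keep], sm.add_constant(x[keep])).fit()
        print(f"{'trimmed':13s} beta_x={fit.params[1]:+.3f}  p={fit.pvalues[1]:.4f}")

    If the coefficient on x keeps its sign and stays significant across these variations, the paper’s qualitative conclusion is robust in the narrow sense I had in mind.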


