Academic Journal Fakery
Peter Klein
As computer programs make images easier than ever to manipulate, editors at a growing number of scientific publications are turning into image detectives, examining figures to test their authenticity.
And the level of tampering they find is alarming. “The magnitude of the fraud is phenomenal,” says Hany Farid, a computer-science professor at Dartmouth College who has been working with journal editors to help them detect image manipulation. Doctored images are troubling because they can mislead scientists and even derail a search for the causes and cures of disease.
Ten to 20 of the articles accepted by The Journal of Clinical Investigation each year show some evidence of tampering, and about five to 10 of those papers warrant a thorough investigation, says Ms. Neill, an editor at the journal. (The journal publishes about 300 to 350 articles per year.)
This is from the Chronicle. The problem is partly cultural, it appears. “[Y]oung researchers may not even realize that tampering with their images is inappropriate. After all, people now commonly alter digital snapshots to take red out of eyes, so why not clean up a protein image in Photoshop to make it clearer?” Says Farid: “This is one of the dirty little secrets — that everybody massages the data like this.”
I suspect that outright fraud — making up data, changing regression coefficients — is unusual in empirical social-science research. Sloppiness, ranging from data-entry errors to programming mistakes to misspecified regression models, is common. And social scientists typically “shade” results, e.g., by running fifty regressions and reporting only the one in which the signs and significance levels turn out to the researcher’s liking. (Hence the growing importance of the “robustness checks” section of any empirical paper.)
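To see why those robustness checks matter, here is a minimal simulation sketch — my own illustration, not anything from the Chronicle piece; the data-generating process and the use of Python with statsmodels are assumptions made purely for the example. The regressor of interest has no true effect, yet searching over every subset of candidate controls and reporting only the most flattering specification is exactly the kind of “shading” a robustness section is meant to expose.

```python
import itertools

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data: the regressor of interest, x, has NO true effect on y.
n = 200
x = rng.normal(size=n)
controls = rng.normal(size=(n, 6))                 # six candidate controls
y = controls @ (0.3 * rng.normal(size=6)) + rng.normal(size=n)

results = []
# Try every subset of the candidate controls: 2^6 = 64 specifications.
for k in range(controls.shape[1] + 1):
    for subset in itertools.combinations(range(controls.shape[1]), k):
        X = sm.add_constant(np.column_stack([x] + [controls[:, j] for j in subset]))
        fit = sm.OLS(y, X).fit()
        # params[1] / pvalues[1] refer to x (index 0 is the constant).
        results.append((fit.pvalues[1], fit.params[1], subset))

# The "shaded" report: the single specification in which x looks best.
p_best, coef_best, subset_best = min(results)
print(f"Most flattering spec (controls {subset_best}): "
      f"coef on x = {coef_best:.3f}, p = {p_best:.3f}")

# What a robustness-checks section would reveal: how x fares across
# all 64 specifications, not just the hand-picked one.
share_sig = np.mean([p < 0.05 for p, _, _ in results])
print(f"Share of specifications with p < 0.05 on x: {share_sig:.0%}")
```

With enough specifications, a spuriously significant estimate can turn up somewhere even when the true effect is zero; reporting how the estimate behaves across the full set of specifications, rather than the favorite one, is the honest alternative.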