The New York Times misses the real academic fraud: How academic research is biased towards finding statistically significant results that aren't really there
Obviously there is a bias towards publishing research with statistically significant results in journals, and this bias creates the wrong incentives for academic authors. But it raises the question of what, if anything, one can learn from most academic research. (BTW, this is one reason I often try to report results from different combinations of control variables in my regressions.) From the New York Times:
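The point about trying different combinations of control variables cuts both ways: reporting all specifications is a robustness check, but cherry-picking among them inflates false positives. A minimal sketch (my own illustration, not from the article, assuming NumPy is available): simulate studies where the true effect is exactly zero, run a regression for every subset of irrelevant control variables, and ask how often at least one specification looks "significant" at the 5% level.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def any_significant(n=100, n_controls=6, alpha_cutoff=1.96):
    """One simulated study: y is pure noise, unrelated to x.
    Fit a regression of y on x for every subset of the controls
    and report whether ANY specification yields |t| > cutoff."""
    x = rng.normal(size=n)
    y = rng.normal(size=n)  # true effect of x on y is zero
    controls = rng.normal(size=(n, n_controls))
    for k in range(n_controls + 1):
        for subset in itertools.combinations(range(n_controls), k):
            X = np.column_stack(
                [np.ones(n), x] + [controls[:, j] for j in subset]
            )
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            dof = n - X.shape[1]
            sigma2 = resid @ resid / dof
            cov = sigma2 * np.linalg.inv(X.T @ X)
            t_stat = beta[1] / np.sqrt(cov[1, 1])
            if abs(t_stat) > alpha_cutoff:  # rough 5% two-sided test
                return True
    return False

studies = 200
hits = sum(any_significant() for _ in range(studies))
rate = hits / studies
print(f"share of null studies with a 'significant' spec: {rate:.2f}")
```

Each individual specification rejects a true null only 5% of the time, but with 64 specifications per study the chance that at least one of them "works" is considerably higher, which is exactly why reporting every specification, not just the favorable one, matters.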
In one experiment conducted with undergraduates recruited from his class, Stapel asked subjects to rate their individual attractiveness after they were flashed an image of either an attractive female face or a very unattractive one. The hypothesis was that subjects exposed to the attractive image would — through an automatic comparison — rate themselves as less attractive than subjects exposed to the other image.
The experiment — and others like it — didn’t give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. . . .