9/06/2015

61% of psychological research can't be replicated? How to figure out if there was fraud

Assume that people only submit papers when they get statistically significant results. If 10 different researchers run the same experiment and only 1 gets a statistically significant result (just out of randomness), that is the result that gets published, and you would expect roughly 9 of 10 attempts to replicate it to come out statistically insignificant. What might be interesting is to rerun the experiments that failed to replicate several more times to see whether they ever produce statistically significant results. If you could redo one of these experiments 20 times and none of the results were statistically significant, it would raise real questions about whether fraud occurred.
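To make the arithmetic concrete, here is a minimal simulation sketch in Python of that publication-bias story; the lab count, sample size, and alpha level are assumptions chosen for illustration, not numbers from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_LABS = 10_000   # hypothetical labs, each testing an effect that is truly zero
N_PER_GROUP = 30  # assumed subjects per group in each experiment
ALPHA = 0.05

def run_experiment():
    """One two-sample t-test where the true effect is exactly zero."""
    a = rng.normal(0.0, 1.0, N_PER_GROUP)
    b = rng.normal(0.0, 1.0, N_PER_GROUP)
    return stats.ttest_ind(a, b).pvalue

# Publication filter: only the significant flukes get submitted.
published = sum(run_experiment() < ALPHA for _ in range(N_LABS))
print(f"false positives 'published': {published} of {N_LABS}")  # about 5%

# Under a truly null effect, each replication has only a 5% chance of
# coming out significant, so even 0-for-20 is not that surprising:
print(f"P(0 of 20 significant | null effect): {(1 - ALPHA) ** 20:.3f}")  # ~0.36

# But if the effect were real with, say, 50% power per replication,
# 0-for-20 would be vanishingly unlikely:
print(f"P(0 of 20 significant | 50% power): {0.5 ** 20:.1e}")  # ~9.5e-07
```

In other words, 20 straight failed replications essentially rule out a real effect, which is exactly what would prompt the harder questions about how the original result was obtained.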

From Science magazine:
We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. There is no single standard for evaluating replication success. Here, we evaluated reproducibility using significance and P values, effect sizes, subjective assessments of replication teams, and meta-analysis of effect sizes. The mean effect size (r) of the replication effects (Mr = 0.197, SD = 0.257) was half the magnitude of the mean effect size of the original effects (Mr = 0.403, SD = 0.188), representing a substantial decline. Ninety-seven percent of original studies had significant results (P < .05). Thirty-six percent of replications had significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result . . . .
It isn't clear what "subjectively rated" means, but it raises the question of whether people are making their results look more significant than they actually were. However, this should just be the start of doing replications.
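The halving of effect sizes in the quote is also worth a closer look: selecting on P < .05 alone inflates published effect sizes (the "winner's curse"), because with small samples only overestimates of the effect clear the significance bar. Here is a minimal sketch of that mechanism, again with an assumed true effect size and sample size rather than anything from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

TRUE_D = 0.3      # assumed modest true standardized effect
N_PER_GROUP = 30  # assumed subjects per group
ALPHA = 0.05
N_STUDIES = 20_000

all_effects, published_effects = [], []
for _ in range(N_STUDIES):
    a = rng.normal(0.0, 1.0, N_PER_GROUP)
    b = rng.normal(TRUE_D, 1.0, N_PER_GROUP)
    # Cohen's d estimate for this study
    d_hat = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    all_effects.append(d_hat)
    if stats.ttest_ind(b, a).pvalue < ALPHA:  # publication filter
        published_effects.append(d_hat)

print(f"true effect:                  {TRUE_D:.2f}")
print(f"mean effect over all studies: {np.mean(all_effects):.2f}")       # ~0.30
print(f"mean effect, published only:  {np.mean(published_effects):.2f}")  # ~0.65, about double
```

An unbiased replication of a published study recovers the true effect on average, so the replication effects come out at roughly half the published ones even when everyone is honest, much like the decline in the quoted numbers.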
