61% of psychological research can't be replicated? How to figure out if there was fraud
From Science magazine:
We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. There is no single standard for evaluating replication success. Here, we evaluated reproducibility using significance and P values, effect sizes, subjective assessments of replication teams, and meta-analysis of effect sizes. The mean effect size (r) of the replication effects (Mr = 0.197, SD = 0.257) was half the magnitude of the mean effect size of the original effects (Mr = 0.403, SD = 0.188), representing a substantial decline. Ninety-seven percent of original studies had significant results (P < .05). Thirty-six percent of replications had significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result. . . .

It isn't clear what "subjectively rated" means, but it raises the question of whether people are making their results look more significant than they actually were. However, this should just be the start of doing replications.
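One of the replication criteria quoted above asks whether the original effect size falls inside the 95% confidence interval of the replication effect size. A minimal sketch of that check, using the standard Fisher z-transform for correlations (the numbers below are hypothetical, chosen only to echo the mean effect sizes reported in the quote; they are not from any specific study in the project):

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """95% confidence interval for a correlation r from a sample of
    size n, via the Fisher z-transform (standard error = 1/sqrt(n-3))."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Hypothetical illustration: a replication finds r = 0.20 with n = 100,
# and we check it against an original effect of r = 0.40.
lo, hi = fisher_ci(0.20, 100)
original_r = 0.40
print(f"replication 95% CI: ({lo:.3f}, {hi:.3f})")
print("original inside CI:", lo <= original_r <= hi)
```

With these illustrative numbers the original effect sits just outside the replication's interval, so this pair would count as a failure under the confidence-interval criterion even though both estimates are positive.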
Labels: psychology