Several of the major research organizations in psychology, including APA, EAPP (European Association of Personality Psychology) and SPSP, have been talking about the issue of replicability of published research, but APS has made the most dramatic move so far to actually do something about it. The APS journal Perspectives on Psychological Science today announced a new policy to enable the publication of pre-registered, robust studies seeking to replicate important published findings. The journal will add a new section for this purpose, edited by Dan Simons and Alex Holcombe. For details, click here.
This idea has been kicked around in other places, including proposals for new journals exclusively dedicated to replication studies. One of the most interesting aspects of the new initiative is that instead of isolating replications in a standalone journal that few people might see, they will appear in an already widely read and prestigious journal with a high impact factor.
When a similar proposal, in the form of a suggested new journal, was floated at a meeting I attended a few weeks ago, it quickly stimulated controversy. Some saw the proposal as a self-defeating attack on our own discipline that would only undermine the credibility of psychological research. Others saw it as a much-needed, self-administered corrective: better to come from within the field than to be imposed from outside. And still others, probably the largest group, raised questions about the specifics of implementation and got a bit bogged down in them. For example, what will stop a researcher from running a failed replication study, and only then "pre-registering" it? How many failed replications does it take to overturn the conclusions of a published study, and what does "failed replication" mean exactly, anyway? What degree of statistical power should replication studies be required to have, and what effect size should be used to make this calculation? Finally, running these replication studies (as described in the PPS policy) looks to be a demanding and expensive enterprise. Who will have sufficient time, money and/or incentive to run them? These questions all lack ready answers.
My own view is that the answers to these questions, or their ultimate unanswerability, will only be established through experimentation. Somebody needs to try it and see what happens. I admire APS for taking this step and look forward to seeing what, if anything, ultimately becomes of it.