This week, psychologist Brian Nosek and his colleagues at the Center for Open Science released the results of four years of work on a unique project. Since 2011, he and some 270 other scientists in the Reproducibility Project have been attempting to replicate 100 previously published psychology studies. The results, published this week in Science, were worse than expected: just 36% of the replication attempts produced statistically significant results, and the replicated effects were generally weaker than those in the original research.
That sounds pretty bad! But this article by Ed Yong in The Atlantic goes systematically through the issues around study design, publication and replicability and concludes that “failed replications don’t discredit the original studies, any more than successful ones enshrine them as truth.”
Most scientists agree that more efforts like the Reproducibility Project are essential to steering scientific research toward practices that produce more robust results. Luckily, research cats are generally amenable to repeating experiments over and over again, particularly if they involve can openers or pushing objects off tables.