A movement is afoot to create formal structures to reproduce experiments (Ars Technica):
Almost nobody goes back and repeats something that’s already been published, though.
But maybe they should. At least that’s the thinking behind a new effort called the Reproducibility Initiative, a project hosted by the Science Exchange and supported by Nature, PLoS, and the Rockefeller University Press.
John Timmer goes on to write about reasons why some people think this is a waste of time. I agree with all of these reasons.
The need for replication certainly depends on the field (small behavioral science studies should be replicated!). In molecular & cellular biology I think replication of other people’s work, purely for the purpose of replication, is almost always a waste of time and money. The process is expensive, you generally learn nothing new, and even if PLoS One will publish the result, your paper won’t do anything to advance your career.
In most cases (in my field), replication of someone’s work occurs as a matter of course when you build on that work. If one lab reports a particular phenotype for a knock-out Drosophila line, another lab will make sure that they can reproduce the original phenotype before they study their process of interest in that mutant background. If I see a microarray result from one lab telling me that a set of genes is induced under condition X, I verify that by qPCR before I start mutating the promoters that control those genes. My experience is that my own results are amply replicated in this way, and this is generally how published results are most effectively tested and probed. If nobody is replicating your results in follow-up studies, then your results haven’t made much of an impact and probably don’t deserve to be replicated.
There are important exceptions, like the multiple replication studies of a reported link between XMRV and chronic fatigue syndrome. Many GWAS are known to be underpowered, and so they are purposely replicated with different cohorts.
A general call for indiscriminate replication is misguided. Replication efforts need to be focused on areas where they are specifically needed.
Carl Zimmer has more on this, focused on ‘pre-clinical’ cancer research. (Maybe part of the problem is that many ‘pre-clinical’ researchers think of themselves as basic, not pre-clinical scientists.) My impression from the article is that the problem isn’t a lack of formal ways to publish replication experiments. The problem is that people need to stop being poor scientists – don’t simply take others’ results for granted if you’re building on them, and don’t mindlessly cite papers that you haven’t carefully read and evaluated. Include controls in your experiments (yes, really!), and include those results in your manuscript. Demand them if you’re a reviewer, even if the lab has published many similar control-free experiments in higher-impact journals. Make your whole dataset available.
Within the scientific community, you can find many people who admit that there is too much sloppy science, that a culture of rigor is being undermined by the way peer review and career advancement work, and that speed, hype, fame, and salesmanship are more useful ways to promote your career than careful, rigorous work. Large changes in scientific culture are needed to solve the replication issue.
A recent comment in Nature put it this way:
What reasons underlie the publication of erroneous, selective or irreproducible data? The academic system and peer-review process tolerates and perhaps even inadvertently encourages such conduct. To obtain funding, a job, promotion or tenure, researchers need a strong publication record, often including a first-authored high-impact publication. Journal editors, reviewers and grant-review committees often look for a scientific finding that is simple, clear and complete — a ‘perfect’ story. It is therefore tempting for investigators to submit selected data sets for publication, or even to massage data to fit the underlying hypothesis.
But there are no perfect stories in biology. In fact, gaps in stories can provide opportunities for further research — for example, a treatment that may work in only some cell lines may allow elucidation of markers of sensitivity or resistance. Journals and grant reviewers must allow for the presentation of imperfect stories, and recognize and reward reproducible results, so that scientists feel less pressure to tell an impossibly perfect story to advance their careers.