In my latest Pacific Standard column, I take a look at the recent hand-wringing over the reproducibility of published science. A lot of people are worried that poorly done, non-reproducible science is ending up in the peer-reviewed literature.
Many of these worries are misguided. Yes, as researchers, editors, and reviewers we should do a better job of filtering out bad statistical practices and poor experimental designs; we should also make sure that data, methods, and code are thoroughly described and freely shared. To the extent that sloppy science is causing a pervasive reproducibility problem, then we absolutely need to fix it.
But I’m worried that the recent reproducibility initiatives are going beyond merely sloppy science, and instead are imposing a standard on research that is not particularly useful and completely ahistorical. When you see a hot new result published in Nature, should you expect other experts in the field to be able to reproduce it exactly?
Not always. To explain why, I’ll hand the mic over to Chris Drummond, a computer scientist and research officer at Canada’s National Research Council:
“Replicability is not Reproducibility: Nor is it Good Science” (PDF)
At various times, there have been discussions arising from the inability to replicate the experimental results published in a paper… There seems to be a widespread view that we need to do something to address this problem, as it is essential to the advancement of our field. The most compelling argument would seem to be that reproducibility of experimental results is the hallmark of science…I want to challenge this view by separating the notion of reproducibility, a generally desirable property, from replicability, its poor cousin. I claim there are important differences between the two. Reproducibility requires changes; replicability avoids them. Although reproducibility is desirable, I contend that the impoverished version, replicability, is one not worth having.
Drummond goes on to explain:
A critical point of reproducing an experimental result is that irrelevant things are intentionally not replicated. One might say, one should replicate the result not the experiment…The sharing of all the artifacts from people’s experiments is not a trivial activity.
In practice, most of us implicitly make Drummond’s distinction between replication and reproduction: we avoid exact replication when it isn’t absolutely necessary, but we are concerned about reproducing the general phenomena in our particular system.
And sometimes well-done research won’t be very reproducible, because it’s on the cutting edge, and we may not understand all of the relevant variables yet. You see this over and over in the history of science – the early days of genetics and the initial discoveries of high-energy rays come to mind here. Scientists should do careful work and clearly publish their results. If another lab comes up with a different result, that’s not necessarily a sign of fraud or poor science. It’s often how science makes progress.
4 thoughts on “Why reproducibility initiatives are misguided”
I think a lot of the handwringing over this comes from pressure to publish high-profile things that are also unassailably correct (because funding). The pressure for positive results is probably more enormous than ever, though arguably an important aspect of science is exploring those things that don’t pan out. It has become fashionable to make everything a ‘business’ now, and sadly, businesses don’t tend to make money by getting things wrong or exploring dead ends (possible exception: Coke introduces ‘New Coke’, ends up making more money than ever after the debacle). In science, we often don’t have the luxury of certainty in how something will turn out, even though we try hard to design experiments to yield interesting results no matter how they turn out.
I had never really thought about the distinction between replication and reproducibility. I take your point that there really is a distinction between the two and that the latter is better for science. It’s more credible if a result or observation is seen under independent conditions/times…or two teams of scientists come to the same result by different means.
I’m with you on the pressure that we’re feeling to always produce spectacular findings. When that pressure produces sloppy science, then we do have a problem.
But I’m bothered by the idea, which is associated with this push for independent replication, that a published paper is supposed to be “unassailably correct”, as you aptly put it. That expectation hasn’t been the case historically, for good reason. Publications represent the ongoing process of discovering something new, not necessarily the final outcome.