I’m not a big fan of reproducibility projects. Shoddy papers shouldn’t be tolerated, but the truth is that sometimes rigorously done research isn’t reproducible — and when that happens, science gets interesting. It should go without saying that a peer-reviewed paper isn’t a guarantee of truth. If done properly, a paper is a record of a rigorous attempt to discover something about the world, no more, no less. What we believe about nature should reflect the accumulated evidence of many researchers and many papers, and that means the scientific literature should reflect our latest tentative, bleeding-edge thinking, even at the risk of being wrong. It’s counterproductive to hold up publication until some other lab reproduces your result, or to retract papers that don’t hold up, unless they had clear methodological flaws or artifacts that should have been caught in review.
Two recent articles capture what I think is the right attitude on reproducibility. First, David Allison and his colleagues write that, as a community of researchers, editors, and reviewers, we are not doing as well as we should at meeting high standards of statistical and other methodological practice:
In the course of assembling weekly lists of articles in our field, we began noticing more peer-reviewed articles containing what we call substantial or invalidating errors. These involve factual mistakes or veer substantially from clearly accepted procedures in ways that, if corrected, might alter a paper’s conclusions.
There is no excuse for this kind of sloppiness.
On the other hand, here is Columbia’s Stuart Firestein:
The failure to replicate a part or even the whole of an experiment is not sufficient for indictment of the initial inquiry or its researchers. Failure is part of science. Without failures there would be no great discoveries.
So yes, let’s clean up science by rooting out obvious “invalidating practices” that all too often plague papers in journals at all tiers. But let’s not be naive about how science works, and what the scientific literature is supposed to be. To paraphrase what I wrote recently, if some of our studies don’t turn out to be wrong, then we’re not pushing hard enough at the boundaries of our knowledge.
There is a time and a place for complex atonal music, and perhaps the drinks reception of a genomics conference at the Excel Centre was not it. Through the chatter it wasn’t always easy to hear what the string quartet was doing, and meeting attendees were confused about the performance. “I thought they were still tuning”, said one of the guests.
This was not the first performance of Music of the Spheres. It had previously been staged in a large empty building, in a gallery along the coast, and at Hornsey Town Hall. The string quartet can’t be everywhere, but the bubbles are always there, and they form the core of the work. In fact, Jarvis turned on the bubble machine a few times during breaks at the Festival of Genomics. Without the string quartet, the effect was one of simple party entertainment, not out of place at this conference, which also featured a lively talk show and a treadmill challenge. People engaged with the bubbles by photographing them, popping them, or shielding their coffee cups from soapy surprises. Many were unaware that each bubble contained fragments of DNA encoding a piece of music.
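For the curious, here is a minimal sketch of what “DNA encoding a piece of music” can mean in practice: any file, audio included, is a sequence of bits, and pairs of bits can be mapped onto the four DNA bases. This two-bits-per-base mapping is only the textbook illustration, not the scheme actually used in the artwork, which isn’t described here; published DNA-storage encodings are more elaborate, avoiding long runs of the same base and adding error correction. Every name in the sketch is hypothetical.

```python
# Illustrative sketch only: map each pair of bits to one of the four bases.
# This is NOT the encoding used in Music of the Spheres; real DNA-storage
# schemes avoid homopolymer runs and include error correction.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn raw bytes (e.g. a fragment of an audio file) into a base string."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Recover the original bytes from the base string."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

snippet = b"music"          # stand-in for a fragment of an audio file
dna = encode(snippet)
assert decode(dna) == snippet
print(dna)                  # CGTCCTCCCTATCGGCCGAT
```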