The Cancer Reproducibility Project is Incredibly Naive, Probably Useless, and Potentially Damaging

I’ve always thought the Reproducibility Project represented an incredibly naive approach to the scientific method. This excellent news piece in Science sums up many of the reasons why. As Richard Young says in the piece, “I am a huge fan of reproducibility. But this mechanism is not the way to test it.” Here’s why:

1) Reproducibility in science is not achieved by having a generic contract research organization replicate a canned protocol, for good reason: cutting-edge experiments are often very difficult and require specialized skills to get running. Replication is instead achieved by other labs in the field that want to build on the results, sometimes using the same protocol as the original experiment, and sometimes by obtaining similar results in a different system with a different method.

2) For this reason, I don’t have much confidence that the results obtained by the Reproducibility Project will accurately reflect the state of reproducibility in science. A negative result could mean many things, and most likely it will reflect a failure of the contract lab rather than an inherent problem with the original result. Contrary to the claims of the project’s leaders, the data produced by the Project will probably not be useful to people who are serious about estimating the scope of irreproducibility in science. At its worst, it could be extremely misleading, painting an overly negative picture of the state of science. It has already been damaging by promoting a naive view of how successful science actually works.

3) As the Science piece points out, there is a much better, cheaper, and scientifically sensible way to achieve better reproducibility. If many papers out there are suspect because they lack proper controls, don’t use validated reagents, fail to describe methods adequately, or rely on flawed statistics, then we don’t need to spend millions of dollars and thousands of hours of effort trying to repeat experiments. We need to make sure editors and reviewers require proper controls, reagents, statistics, and full methods descriptions.

It’s worth reading the full article, but below the fold are some salient quotes:

[Richard Young] says that if the project does match his results, it will be unsurprising: the paper’s findings have already been reproduced. If it doesn’t, a lack of expertise in the replicating lab may be responsible. Either way, the project seems a waste of time, Young says. “I am a huge fan of reproducibility. But this mechanism is not the way to test it.”

“I like the concept,” says cancer geneticist Todd Golub of the Broad Institute in Cambridge, who has a paper on the group’s list. But he is “concerned about a single group using scientists without deep expertise to reproduce decades of complicated, nuanced experiments.”

Early on, Begley, who had raised some of the initial objections about irreproducible papers, became disenchanted. He says some of the papers chosen have such serious flaws, such as a lack of appropriate controls, that attempting to replicate them is “a complete waste of time.” He stepped down from the project’s advisory board last year.

Amassing all the information needed to replicate an experiment and even figure out how many animals to use proved “more complex and time-consuming than we ever imagined,” [project leader] Iorns says.

For many scientists, the biggest concern is the nature of the labs that will conduct the replications. It’s unrealistic to think contract labs or university core facilities can get the same results as a highly specialized team of academic researchers, they say. Often a graduate student has spent years perfecting a technique using novel protocols, Young says. “We brought together some of the most talented young scientists in the area of gene control and oncology to do these genomics studies. If I thought it was as simple as sending a protocol to a contract laboratory, I would certainly be conducting my research that way,” he says.

Academic labs approach replication differently. Levi Garraway of the Harvard University–affiliated Dana-Farber Cancer Institute in Boston, who also has two papers on the project’s list, says that if a study doesn’t initially hold up in another lab, they might send someone to the original lab to work side by side with the authors. But the cancer reproducibility project has no plans to visit the original lab, and any troubleshooting will be limited to making sure the same protocol is followed, Errington says. Erkki Ruoslahti of the Sanford-Burnham Medical Research Institute in San Diego, California, has a related worry: The lab replicating one of his mouse experiments will run that experiment just one time; he repeated it two or three times.

The scientists behind the cancer reproducibility project dismiss these criticisms.

Author: Mike White
