Author Archives: Mike White

How to advance science by failure

Stuart Firestein has a provocative piece in Nautilus on the role of failing well in science:

As your career moves on and you have to obtain grant support you naturally highlight the successes and propose experiments that will continue this successful line of work with its high likelihood of producing results. The experiments in the drawer get trotted out less frequently and eventually the drawer just sticks shut. The lab becomes a kind of machine, a hopper—money in, papers out.

My hope of course is that things won’t be this way for long. It wasn’t this way in the past, and there is nothing at all about science and its proper pursuit that requires a high success rate or the likelihood of success, or the promise of any result. Indeed, in my view these things are an impediment to the best science, although I admit that they will get you along day to day. It seems to me we have simply switched the priorities. We have made the easy stuff—running experiments to fill in bits of the puzzle—the standard for judgment and relegated the creative, new ideas to that stuck drawer. But there is a cost to this. I mean a real monetary cost because it is wasteful to have everyone hunting in the same ever-shrinking territory…

How will this change? It will happen when we cease, or at least reduce, our devotion to facts and collections of them, when we decide that science education is not a memorization marathon, when we—scientists and nonscientists—recognize that science is not a body of infallible work, of immutable laws and facts. When we once again recognize that science is a dynamic and difficult process and that most of what there is to know is still unknown.

Putting numbers on the impact of basic research

Over at Pacific Standard, I tackle the question, How much does basic research really matter?

The idea that basic research is the indispensable foundation for technological and medical progress is widely accepted by scientists. It’s the core rationale for the major government investment in basic research made in the U.S. and around the world.

But what’s the evidence for it? We can always come up with cherry-picked examples of a basic discovery that led to some revolutionary technology — general relativity and GPS, restriction enzymes and synthetic insulin, quantum mechanics and electronics, the double helix and genetic medicine, etc. Coming up with examples is easy. Quantifying the impact of basic research is hard.

A recent paper in Cell describes one way to do this. It’s not perfect, but the concept is surprisingly simple. Pick some new technology or therapy — the authors picked the new cystic fibrosis drug Ivacaftor — and follow the trail of citations to build a network of papers, researchers, and institutions that made the drug possible. Of course this network will include a lot of citations to studies that weren’t particularly critical. The trick here is sorting the wheat from the chaff: picking out the ‘network hubs’, the researchers and institutions that contributed consistently to the research that led to the drug.
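To make the citation-tracing idea concrete, here is a minimal sketch in Python. It uses a toy graph with hypothetical paper IDs and author names (the Cell analysis used real citation databases and more sophisticated hub metrics than simple author counts, so treat this purely as an illustration of the concept):

```python
# Toy sketch of the citation-network idea: start from a paper describing
# a new therapy, walk its citations backward, and count how often each
# researcher appears in the cited lineage. Frequent contributors are
# candidate "network hubs". All paper IDs and authors here are made up.
from collections import Counter

# papers: id -> (authors, list of cited paper ids)
papers = {
    "drug_trial":  (["A"], ["cftr_gating", "screen"]),
    "screen":      (["B"], ["cftr_gating", "cftr_clone"]),
    "cftr_gating": (["C"], ["cftr_clone"]),
    "cftr_clone":  (["C", "D"], []),
}

def hub_authors(papers, start):
    """Walk citations backward from `start` (depth-first) and count
    how often each author appears in the cited lineage."""
    counts = Counter()
    stack, seen = [start], set()
    while stack:
        pid = stack.pop()
        if pid in seen:
            continue
        seen.add(pid)
        authors, cites = papers[pid]
        counts.update(authors)
        stack.extend(cites)
    return counts

# Researcher "C" appears twice in the lineage behind the trial,
# making C the strongest hub in this tiny example.
hubs = hub_authors(papers, "drug_trial").most_common()
```

The hard part, as noted above, is separating wheat from chaff: in a real network most cited papers are incidental, so the hub criterion has to reward consistent contribution across the lineage rather than a single citation.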

The result may not be surprising to those of us working in science, but it’s still remarkable to see: dozens of researchers publishing hundreds of papers over several decades laid the essential scientific foundation for Ivacaftor.

The Cancer Reproducibility Project is Incredibly Naive, Probably Useless, and Potentially Damaging

I’ve always thought the Reproducibility Project represented an incredibly naive approach to the scientific method. This excellent news piece in Science sums up many of the reasons why. As Richard Young says in the piece, “I am a huge fan of reproducibility. But this mechanism is not the way to test it.” Here’s why:

1) Reproducibility in science is not achieved by having a generic contract research organization replicate a canned protocol, for good reason: cutting edge experiments are often very difficult and require specialized skills to get running. Replication is instead achieved by other labs in the field who want to build on the results. Sometimes this is done using the same protocol as the original experiment, and sometimes by obtaining similar results in a different system using a different method.

2) For this reason, I don’t have much confidence that the results obtained by the Reproducibility Project will accurately reflect the state of reproducibility in science. A negative result could mean many things — and most likely it will reflect a failure of the contract lab and not an inherent problem with the result. Contrary to the claims of the project’s leaders, the data produced by the Project will probably not be useful to people who are serious about estimating the scope of irreproducibility in science. At its worst, it could be extremely misleading by painting an overly negative picture of the state of science. It’s already been damaging by promoting a too-naive view of how the process of successful science actually works.

3) As the Science piece points out, there is a much better, cheaper, and scientifically sensible way to achieve better reproducibility. If many papers out there are suspect because they lack proper controls, don’t use validated reagents, fail to describe methods adequately, or rely on flawed statistics, then we don’t need to spend millions of dollars and thousands of hours of effort trying to repeat experiments. We need to make sure editors and reviewers require proper controls, reagents, statistics, and full methods descriptions.

It’s worth reading the full article for its salient quotes.

Where Does the Genetic Code Come From? An Interview with Dr. Charles Carter, Part II

“Translating the genetic code is the nexus connecting pre-biotic chemistry to biology.” — Dr. Charles Carter

Last week we discussed the general question of how the genetic code evolved, and noted that the idea of the code as merely a frozen accident — an almost completely arbitrary key/value pairing of codons and amino acids — is not consistent with the evidence that has been amassed over the past three decades. Instead, there are deeper patterns in the code that go beyond the obvious redundancy of synonymous codons. These patterns give us important clues about the evolutionary steps that led to the genetic code that was present in the last universal common ancestor of all present-day life.

Charles Carter and his colleague Richard Wolfenden at the University of North Carolina at Chapel Hill recently authored two papers that suggest the genetic code evolved in two key stages, and that those two stages are reflected in two codes present in the acceptor stem and anti-codon of tRNAs.

In the first part of my interview with Dr. Carter, he reviewed some of the previous work in this field. In the present installment, he comments on the important results that came out of his two recent studies with Dr. Wolfenden. But before we continue with the interview, let’s review the main findings of the papers.

The key result is that there is a strong relationship between the nucleotide sequence of tRNAs, specifically in the acceptor stem and the anti-codon, and the physical properties of the amino acids with which those tRNAs are charged. In other words, tRNAs do more than merely code for the identity of amino acids. There is also a relationship between tRNA sequence and the physical role performed by the associated amino acids in folded protein structures. This suggests that, as Dr. Carter summarized it, “Our work shows that the close linkage between the physical properties of amino acids, the genetic code, and protein folding was likely essential from the beginning, long before large, sophisticated molecules arrived on the scene.” Perhaps it also suggests – this is my possibly unfounded speculation – that today’s genetic code was preceded by a more coarse-grained code that specified sets of amino acids according to their physical functions, rather than their specific identity.

Sunday Science Poem: How Fossils Inspire Awe

Lindley Williams Hubbell’s “Ordovician Fossil Algae” (1965)

It takes a lot of luck to become a fossil. Your carcass needs to be buried rapidly and then lie undisturbed for tens of thousands, hundreds of millions, or even billions of years. It’s a process that seems best suited to tough, hardy organisms – ancient sea shells, armored trilobites, and giant dinosaur bones are what typically come to mind when we think of fossils. Delicate and beautifully detailed fossils of the gently curved leaves and stems of exotic plants, the veined wings of strange insects, and the mussed feathers of dinosaurs defy our expectations. Fossils that capture such fragile details are a startlingly clear window to an alien world. At the same time they make that world seem very familiar.

In Lindley Williams Hubbell’s poem about fossils, it’s this defiance of expectations that induces a sense of awe and a feeling of the continuity of life across “some odd billion years.” Hubbell is particularly inspired by the fern-like fossil algae from the Ordovician Period, which followed the Cambrian, beginning about 490 million years ago and lasting for about 45 million years. The Ordovician was a great period of invertebrates and algae, all living in the oceans. Vertebrates, particularly jawless, armored fish, were also beginning to show up in greater numbers. And by the end of the Ordovician, there was a major development: the earliest fossils of land-dwelling organisms appear. It was a time of major change and also major extinction.