I’ve always thought the Reproducibility Project represented an incredibly naive approach to the scientific method. This excellent news piece in Science sums up many of the reasons why. As Richard Young says in the piece, “I am a huge fan of reproducibility. But this mechanism is not the way to test it.” Here’s why:
1) Reproducibility in science is not achieved by having a generic contract research organization replicate a canned protocol, for good reason: cutting-edge experiments are often very difficult and require specialized skills to get running. Replication is instead achieved by other labs in the field who want to build on the results. Sometimes this is done using the same protocol as the original experiment, and sometimes by obtaining similar results in a different system using a different method.
2) For this reason, I don’t have much confidence that the results obtained by the Reproducibility Project will accurately reflect the state of reproducibility in science. A negative result could mean many things — and most likely it will reflect a failure of the contract lab and not an inherent problem with the result. Contrary to the claims of the project’s leaders, the data produced by the Project will probably not be useful to people who are serious about estimating the scope of irreproducibility in science. At its worst, it could be extremely misleading by painting an overly negative picture of the state of science. It’s already been damaging by promoting a too-naive view of how the process of successful science actually works.
3) As the Science piece points out, there is a much better, cheaper, and scientifically sensible way to achieve better reproducibility. If many papers out there are suspect because they lack proper controls, don’t use validated reagents, fail to describe methods adequately, or rely on flawed statistics, then we don’t need to spend millions of dollars and thousands of hours of effort trying to repeat experiments. We need to make sure editors and reviewers require proper controls, reagents, statistics, and full methods descriptions.
It’s worth reading the full article, but below the fold are some salient quotes:
This week Science for the People is talking about do-it-yourself biology, and the community labs that are changing the biotech landscape from the grassroots up. We’ll discuss open-source genetics and biohacking spaces with Will Canine of Brooklyn lab Genspace, and Tito Jankowski, co-founder of Silicon Valley’s BioCurious. We’ll also talk to transdisciplinary artist and educator Heather Dewey-Hagborg about her art projects exploring our relationship with genetics and privacy.
*Josh provides research & social media help to Science for the People and is, therefore, completely biased.
“Translating the genetic code is the nexus connecting pre-biotic chemistry to biology.” — Dr. Charles Carter
Last week we discussed the general question of how the genetic code evolved, and noted that the idea of the code as merely a frozen accident — an almost completely arbitrary key/value pairing of codons and amino acids — is not consistent with the evidence that has been amassed over the past three decades. Instead, there are deeper patterns in the code that go beyond the obvious redundancy of synonymous codons. These patterns give us important clues about the evolutionary steps that led to the genetic code that was present in the last universal common ancestor of all present-day life.
Charles Carter and his colleague Richard Wolfenden at the University of North Carolina Chapel Hill recently authored two papers that suggest the genetic code evolved in two key stages, and that those two stages are reflected in two codes present in the acceptor stem and anti-codon of tRNAs.
In the first part of my interview with Dr. Carter, he reviewed some of the previous work in this field. In the present installment, he comments on the important results that came out of his two recent studies with Dr. Wolfenden. But before we continue with the interview, let’s review the main findings of the papers.
The key result is that there is a strong relationship between the nucleotide sequence of tRNAs, specifically in the acceptor stem and the anti-codon, and the physical properties of the amino acids with which those tRNAs are charged. In other words, tRNAs do more than merely code for the identity of amino acids. There is also a relationship between tRNA sequence and the physical role performed by the associated amino acids in folded protein structures. This suggests that, as Dr. Carter summarized it, “Our work shows that the close linkage between the physical properties of amino acids, the genetic code, and protein folding was likely essential from the beginning, long before large, sophisticated molecules arrived on the scene.” Perhaps it also suggests – this is my possibly unfounded speculation – that today’s genetic code was preceded by a more coarse-grained code that specified sets of amino acids according to their physical functions, rather than their specific identity.
“I’m more and more inclined to think that we can actually penetrate at least some of the steps by which nature invented the code.” — Charles Carter
The genetic code is one of biology’s few universals*, but rather than being the result of some deep underlying logic, it’s often said to be a “frozen accident” — the outcome of evolutionary chance, something that easily could have turned out another way. This idea, though it’s often repeated, has been challenged for decades. The accumulated evidence shows that the genetic code isn’t as arbitrary as we might naively think. And more importantly, this evidence also offers some tantalizing clues to how the genetic code came to be.
The origin of the genetic code has long been a research focus of University of North Carolina biophysicist Charles Carter and his UNC enzymologist colleague Richard Wolfenden. They recently authored a pair of papers suggesting that behind the genetic code are actually two codes, reflecting key steps in its evolution. Dr. Carter kindly agreed to answer some questions about the papers, which present some interesting results that add to the growing pile of evidence that the genetic code is much less accidental than it may seem.
These papers deal with the machinery that implements the genetic code. Conceptually the code is simple: it is a set of dictionary entries or key-value pairs mapping codons to amino acids. But to make this mapping happen physically, you need, as Francis Crick correctly hypothesized back in 1958, an adapter. That adapter, as most of our readers know, is tRNA, a nucleic acid molecule that is “charged” with an amino acid.
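The key-value picture of the code is easy to make concrete. The sketch below, in Python, shows a small hand-picked subset of the standard codon table (the full table has 64 entries) and a toy translation function; the function name and the choice of codons are mine, for illustration only.

```python
# A minimal sketch of the genetic code as a key-value mapping.
# Only a handful of real codon assignments from the standard
# code are shown; the full table maps all 64 codons.
CODON_TABLE = {
    "AUG": "Met",  # methionine (also the start codon)
    "UUU": "Phe",  # phenylalanine
    "UUC": "Phe",  # synonymous codon: different key, same amino acid
    "GGC": "Gly",  # glycine
    "UGG": "Trp",  # tryptophan
    "UAA": None,   # stop codon: no amino acid
}

def translate(mrna):
    """Look up each successive codon in the table, stopping
    at a stop codon -- the dictionary plays the role that
    charged tRNAs play physically."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:  # stop codon reached
            break
        peptide.append(amino_acid)
    return peptide

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

Of course, the cell has no lookup table: the mapping is implemented physically by tRNAs and the enzymes that charge them, which is exactly the machinery these papers examine.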
But the existence of tRNAs creates another coding problem: how does the right tRNA get paired with the correct amino acid? The answer to this question is at the heart of the origin of the genetic code, and it’s the subject of these two recent papers. More about this story, as well as the first part of my interview with Dr. Carter, is below the fold.