Model angst

As I contemplate presenting my research plans in job talks, I’m worried about clearly conveying what we get out of quantitative models. The vast majority of biologists don’t build or use quantitative models, which I recognize is a reasonable consequence of the history of the field, but I find it shocking nonetheless. What this means is that many of these researchers don’t share my fundamental outlook, and, as good skeptical scientists, they won’t take it for granted that models are useful. In fact they’ve probably seen plenty of examples of bad models.

So here is how I justify my mathematical modeling work:

Coming to newsstands. . .

Needless to say (but I’m going to anyway), I am pleased as punch that my lab’s most recent offering unto the body of scientific literature (“Analysis of alternative splicing associated with aging and neurodegeneration in the human brain”) was put on the cover of the current issue of Genome Research. In this paper, we investigated the connections between alternative splicing profiles in the aging brain and in brains suffering from neurodegenerative disorders, like Alzheimer’s disease. It is important to note that we were characterizing the alternative splicing differences associated with aging and disease, not identifying splicing changes that cause the diseases or the symptoms. Such questions will require ongoing work, which this study will, hopefully, help guide.

Biological noise and the burden of proof

Yes:

But this does not change the fact that we strongly disagree with the fundamental argument put forward by Clark et al., which is that the genomic area corresponding to transcripts is more important than their relative abundance. This viewpoint makes little sense to us. Given the various sources of extraneous sequence reads, both biological and laboratory-derived (see below), it is expected that with sufficient sequencing depth the entire genome would eventually be encompassed by reads. Our statement that “the genome is not as pervasively transcribed as previously reported” stems from the fact that our observations relate to the relative quantity of material detected.

Of course, some rare transcripts (and/or rare transcription) are functional, and low-level transcription may also provide a pool of material for evolutionary tinkering. But given that known mechanisms—in particular, imperfections in termination (see below)—can explain the presence of low-level random (and many non-random) transcripts, we believe the burden of proof is to show that such transcripts are indeed functional, rather than to disprove their putative functionality.

Dueling viewpoints on pervasive transcription

PLoS Biology does point-counterpoint on whether our entire genomes are transcribed (and, by implication, whether the majority of our DNA is functional):

The Reality of Pervasive Transcription – Clark et al.

Response – van Bakel et al.

Interestingly, these two viewpoints tend to split somewhat cleanly between those who came into biology as computational people and those who came in as experimentalists. (The split’s not perfect, but the trend is there, and you can see it in the authorship of the two papers above.) Computational people (or, at least, those who came in as computational people – I’m not making judgments about anyone’s experimental skills) are more likely to believe in pervasive transcription, while others are more likely to see it as experimental and biological noise.

Following the trend, I fall into the latter camp.

What makes a paper bad instead of just wrong

The editor of the journal Remote Sensing just resigned over the fact that his journal published a paper that should never have been published. RealClimate explains what that means – being controversial or eventually being shown wrong is *not* an indication that a paper shouldn’t have been published. This is what makes a paper bad:

But what makes a paper ‘bad’ though? It is certainly not a paper that simply comes to a conclusion that is controversial or that goes against the mainstream, and it isn’t that the paper’s conclusions are unethical or immoral. Instead, a ‘bad’ paper is one that fails to acknowledge or deal with prior work, or that makes substantive errors in the analysis, or that draws conclusions that do not logically follow from the results, or that fails to deal fairly with alternative explanations (or all of the above). Of course, papers can be mistaken or come to invalid conclusions for many innocent reasons and that doesn’t necessarily make them ‘bad’ in this sense.