Sunday Science Poem: Lord Byron’s Post-Apocalyptic Vision

‘Darkness’, Lord Byron (1816)

Darwin’s argument for evolution by natural selection gets a lot of attention as the science bombshell of the 19th century that shocked the sensibilities of Victorian society, but an equally consequential, if less dramatic, scientific development took place much earlier in the century, one that left a deep impression on the generation before Darwin: William Herschel’s discovery that the universe is much bigger and much older than nearly anyone had imagined.

William Herschel’s scientific findings, made with his ever-larger telescopes, were a frequent subject of the Romantic poets’ imaginations, and toward the end of his career Herschel’s speculations about the past and future of the cosmos fed Romantic angst over the role of God and humanity on what now seemed a jaw-droppingly vast cosmic stage.

Among Herschel’s more disturbing ideas is the notion of a natural end to the Milky Way. As Richard Holmes notes in The Age of Wonder, Herschel jarred the poet Thomas Campbell by explaining that the night sky was filled with “many distant stars [that] had probably ‘ceased to exist’ millions of years ago, and that looking up into the night sky we were seeing a stellar landscape that was not really there at all. The sky was full of ghosts.”

The paradox of more science funding, less research… we’ve seen this before

Does this sound familiar?

Since 19XX, overall federal research funding in all fields has shown a steady increase, resulting in greater than 40 percent growth (adjusted for inflation) from 19XX to 19XX. University-based researchers have been the primary beneficiaries of this growth. Although the data are harder to come by, relevant Figures from [Agency X] and several universities indicate that the growth in funding for XXX research has been comparable to these overall trends.

However, these figures lump together many different kinds of projects and funders. For example, one element of xxx funding is the base-funded (or core) program, which is the primary source of support for small science endeavors. This report looks at base-funded programs at both NSF and [Agency X] and finds, contrary to the trends described above, that they have not even kept up with inflation and have certainly not been able to keep pace with the explosion in grant requests. As a result, grant sizes have decreased, and the percentage of proposals accepted has dropped. A rough calculation shows that researchers must now write two to four proposals per year to remain funded, up from one or two in 19XX. Of course, increasing the time spent searching for support means that less time is spent on productive research. Rising university overhead and fringe benefit costs, which consume more and more of each grant dollar, exacerbate this problem. Clearly, the base-funded program has not participated proportionately in the overall XXX research funding increase. Although we do not attempt to quantify the effect this has had on the quality of science produced, we do find that the core program has become much less efficient during the past decade. We also infer that the lion’s share of new funding has gone into project-specific funding, most of which involves big science efforts.
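
The quoted report’s “rough calculation” isn’t spelled out, but the arithmetic is straightforward: on average, the number of proposals a researcher must write per year is roughly the number of grants they need divided by the acceptance rate. Here’s a minimal sketch, using acceptance rates that are my own illustrative guesses rather than figures from the report:

```python
# Minimal sketch of the "rough calculation": proposals needed scales with the
# inverse of the acceptance rate. The acceptance rates below are illustrative
# guesses, not numbers from the report.

def proposals_needed(grants_needed_per_year: float, acceptance_rate: float) -> float:
    """Expected number of proposals to submit per year to stay funded."""
    return grants_needed_per_year / acceptance_rate

for rate in (0.50, 0.35, 0.25):  # hypothetical earlier-versus-later acceptance rates
    print(f"acceptance {rate:.0%}: ~{proposals_needed(1, rate):.1f} proposals/year")
```

With one grant needed per year, halving the acceptance rate doubles the proposal-writing load, which is exactly the one-or-two to two-to-four shift the report describes.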

I’ve blanked out a few things… can you guess what area of research and what time period this refers to? The answer is below the fold.

Having your cake and eating it: more arguments over human genome function

My fellow F&P publican Josh Witten has drawn my attention to a rebuttal (PDF) of Graur et al.’s rebuttal of claims made by ENCODE.

The authors, John Mattick and Marcel Dinger of the University of New South Wales, advance various claims to dispute the idea that most of the genome is non-functional, but here I’ll just focus on one:

We also show that polyploidy accounts for the higher than expected genome sizes in some eukaryotes, compounded by variable levels of repetitive sequences of unknown significance.

Uh, yeah. That’s the resolution to the C-value paradox, and it’s one reason why people argue that repetitive sequences, i.e. transposable elements, are, contra claims about ENCODE data, largely non-functional: their numbers vary greatly between species with similar biology (a few illustrative genome sizes follow the quote below). As Doolittle writes:

A balance between organism-level selection on nuclear structure and cell size, cell division times and developmental rate, selfish genome-level selection favoring replicative expansion, and (as discussed below) supraorganismal (clade-level) selective processes—as well as drift—must all be taken into account.
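
To make the “varies greatly between species” point concrete, here are a few commonly cited approximate genome sizes (my own illustrative examples, not figures from Mattick and Dinger or from Doolittle); most of the spread reflects repetitive-element load, not gene count:

```python
# Approximate haploid genome sizes in gigabases (commonly cited round numbers;
# illustrative examples only).
genome_sizes_gb = {
    "pufferfish (Takifugu rubripes)": 0.4,
    "human (Homo sapiens)": 3.2,
    "onion (Allium cepa)": 16.0,
    "marbled lungfish (Protopterus aethiopicus)": 130.0,
}

human = genome_sizes_gb["human (Homo sapiens)"]
for species, size_gb in genome_sizes_gb.items():
    print(f"{species}: ~{size_gb:g} Gb ({size_gb / human:.1f}x human)")
```

Differences on this scale among organisms of broadly comparable complexity are hard to square with the idea that most of the genome is functional; they are easy to square with repetitive sequence expanding and contracting for reasons largely decoupled from organismal function.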

Reading further into the paper, how is it possible that the following claims by Mattick and Dinger don’t contradict each other?

Function and another failure to consider the null hypothesis

Somehow, the following kind of illogic creeps into so many discussions of genomic function:

In terms of pathological functions, somatic mosaicism of terminally differentiated cells has long been known to cause cancer. Recent work shows that somatic mosaicism of nervous system tissues underlies a host of neurodevelopmental and perhaps neuropsychiatric diseases (17). However, the extent of somatic mosaicism that is now being reported in a variety of healthy tissues and cell types suggests that it also has physiological functions.

– James R Lupski, “Genome Mosaicism—One Human, Multiple Genomes” Science 26 July 2013: Vol. 341 no. 6144 pp. 358-359

This paragraph comes after the author carefully describes why extensive mosaicism is unavoidable, given the number of cell divisions we undergo during development from a zygote into an adult human.

So explain to me why extensive mosaicism “suggests that it also has physiological functions”? Why should we think that most of the mosaicism being observed is anything like the deliberate hypermutation that happens in the immune system? Isn’t the default hypothesis that mosaicism is the expected, non-functional by-product of trillions of cell divisions?
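
A back-of-the-envelope calculation shows why “expected, non-functional by-product” is the natural null hypothesis. The numbers below are rough, commonly quoted orders of magnitude that I am supplying myself, not values from Lupski’s article:

```python
# Back-of-the-envelope: how much somatic mosaicism should we expect with no
# physiological function at all? Order-of-magnitude assumptions, not Lupski's figures.

GENOME_SIZE_BP = 3.2e9    # haploid human genome, base pairs
MUTATION_RATE = 1e-9      # point mutations per base pair per cell division (rough)
CELLS_IN_ADULT = 3.7e13   # approximate number of cells in an adult human

# Expected new mutations carried by each daughter cell after a single division:
mutations_per_division = GENOME_SIZE_BP * MUTATION_RATE
print(f"~{mutations_per_division:.1f} new mutations per cell division")

# Building an adult takes at least CELLS_IN_ADULT divisions (ignoring cell turnover),
# so the body as a whole accumulates an enormous number of somatic variants:
total_somatic_mutations = mutations_per_division * CELLS_IN_ADULT
print(f"~{total_somatic_mutations:.1e} somatic mutations body-wide (a lower bound)")
```

A few new mutations per division, multiplied over tens of trillions of divisions, means essentially every cell differs somewhere from the zygote; extensive mosaicism is what we should see even if none of it does anything.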

Since when is cancer not caused by mutation?

I feel a major rant about epigenetics coming on… must hold it back until a more convenient time. But I can’t refrain from commenting on just how wrong this is:

“We used to think that cancer was caused mainly by mutations of genes, but we now believe that epigenetic aberrations are responsible for more than half of cancer cases,” says Trygve Tollefsbol, who is a senior scientist at the University of Alabama at Birmingham’s Comprehensive Cancer Center.

“That’s an important change because genetic mutations are very difficult, if not impossible, to correct, while epigenetic marks are potentially reversible,” he explains.

– Nutrition Action HealthLetter, July/Aug 2013, p. 10

I’ve heard a lot of BS claims made in the name of epigenetics, but this one takes the cake. Can anyone point me to an instance of any cancer that does not involve mutations? And where is the evidence that “more than half of cancer cases” are not caused by mutations? Anyone?

(And if you haven’t read this, you should: Mark Ptashne, “Epigenetics: core misconcept” Proc Natl Acad Sci U S A. 2013 Apr 30;110(18):7101-3.)

NOTE: The article is not online yet, but comes from a big story on epigenetics in the July/Aug issue of the Nutrition Action Health Letter.