End of the World 1906: The Haunted Apocalypse

George Long’s Valhalla (1906)

Post-apocalyptic worlds are always haunted. The empty ruins of great cities, the artifacts of lost technologies, the mouldering books, and the memories of the vanished civilization make it clear that the survivors are now living in the world of the dead. In George Long’s Valhalla, the haunting is literal: the world is now one great hall of the dead, with a billion spirits ready to lend their ghostly hands to help the survivors build a better future. While it’s stiffly written and poorly plotted, this short book is nevertheless an interesting artifact from that optimistic time before the First World War. As he describes a new civilization rebuilt under the guidance of the dead from the last one, Long suggests that the root of human dysfunction is simple: jealousy of love and power. Without jealousy, there would be no serious conflict and people would get along just fine.

Like The Purple Cloud, Valhalla is one of those post-apocalyptic books where the world has been almost entirely depopulated, but those who are left don’t really have to struggle for survival. Living in the aftermath of the greatest natural disaster ever, Long’s characters don’t worry about the challenges posed by nature; more threatening are the challenges they pose to each other. The catastrophe itself is a vaguely described series of events that resemble the biblical apocalypse.

Sunday Science Poem: The Geometry of Love

Andrew Marvell’s ‘The Definition of Love’ (1681)

Why are 17th century poets like John Donne, George Herbert and Andrew Marvell called ‘metaphysical’ poets? You can trace the name back to John Dryden, who in an unabashedly sexist comment accused John Donne of “affect[ing] the metaphysics, not only in his satires, but in his amorous verses… perplex[ing] the minds of the fair sex with nice speculations of philosophy when he should engage their hearts, and entertain them with the softnesses of love.”

Well, the Metaphysical poets proved that you can in fact engage the heart with science.

Apocalypse 1901: Adam and Eve in the Empty World Asylum

M. P. Shiel’s The Purple Cloud (1901)

The Book of Revelation isn’t the only part of the Bible that inspires post-apocalyptic fiction — Genesis plays a big part too. The Bible’s story about the beginning of the world has become a popular way to think about the world’s end. Adam and Eve, a paradisiacal Eden, and humanity’s fall get transformed into a last couple, a post-apocalyptic haven, and the forbidden fruit of some unexplored territory or lethal knowledge. What could be called the very first post-apocalyptic novel was explicitly written as a bookend to Genesis. Nathaniel Hawthorne later wrote a replay of Genesis that takes place within the empty remnants of civilization. M. P. Shiel’s The Purple Cloud, an overwritten but under-read classic, is also a post-apocalyptic Adam and Eve story: the fall of civilization is brought about by a reach for the unexplored North Pole, and a last couple must consider the moral dilemma of repopulating an empty world.

The Purple Cloud is the first post-apocalyptic novel of the 20th century, but it starts with a throwback, by putting the whole thing within a mystic frame story of the sort employed much earlier by de Grainville and Mary Shelley. Most of the novel consists of the first-person record of the last man as he wrote it down in his notebooks; to get those notebooks into the hands of 20th century readers, Shiel has them dictated by a medium to her physician, who then passes the manuscript on to M. P. Shiel. Finding a plausible explanation for how a future story comes into the hands of present-day readers was a particular concern of 19th century SF writers, a practice that would soon be largely abandoned.

Would you outsource your gel to a gel-informatician?

Sean Eddy explains why sequencing is replacing many older assays, and why biologists need to learn to analyze their own data.

“High throughput sequencing for neuroscience”:

If we were talking about a well-defined resource like a genome sequence, where the problem is an engineering problem, I’m fine with outsourcing or building skilled teams of bioinformaticians. But if you’re a biologist pursuing a hypothesis-driven biological problem, and you’re using a sequencing-based assay to ask part of your question, generically expecting a bioinformatician in your sequencing core to analyze your data is like handing all your gels over to some guy in the basement who uses a ruler and a lightbox really well.

Data analysis is not generic. To analyze data from a biological assay, you have to understand the question you’re asking, you have to understand the assay itself, and you have to have enough intuition to anticipate problems, recognize interesting anomalies, and design appropriate controls. If we were talking about gels, this would be obvious. You don’t analyze Northerns the same way you analyze Westerns, and you wouldn’t hand both your Westerns and your Northerns over to the generic gel-analyzing person with her ruler in the basement. But somehow this is what many people seem to want to do with bioinformaticians and sequence data.

It is true that sequencing generates a lot of data, and it is currently true that the skills needed to do sequencing data analysis are specialized and in short supply. What I want to tell you, though, is that those data analysis skills are easily acquired by biologists, that they must be acquired by biologists, and that they will be. We need to rethink how we’re doing bioinformatics.

I would add this: it takes some time to learn, but in the end it’s not that hard, people. Students in chemistry and physics routinely learn the requisite skills. We need to educate biologists who expect to do programming, math, and statistics.

How bad is the NIH budget really?

In the blowback to Francis Collins’ comments about budget cuts delaying an Ebola vaccine, there is a lot of confusion going around about just how much the NIH budget declined.

The worst offender is the usually very good Sarah Kliff at Vox.com, who writes:

The NIH’s budget rose rapidly during the early 2000s, growing from $17 billion in 2000 to a peak of $31 billion in 2010. This meant more money for everything…

Funding then began to decline in 2010 and has continued to fall slightly over the past four years (this was during a period when Obama was in the White House, Democrats controlled the Senate, and Republicans controlled the House). By 2013, funding was down to $29.3 billion. These figures do not account for inflation.

Inflation – there’s the rub. Because when you do account for inflation, you see that the NIH budget was in decline long before 2010 – in fact things started to go south after 2004, as the AAAS budget analysis shows.

And depending on how you make the inflation adjustment, things can look even worse – you hear claims of a 20% decline tossed around. To understand how this works, let’s look at the numbers themselves.
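To see how an inflation adjustment changes the picture, here’s a minimal sketch in Python. The budget figures are the ones quoted above from the Vox piece; the 2.9% annual inflation rate is an illustrative assumption standing in for something like the biomedical price index (BRDPI), not an official number.

```python
# Deflate nominal NIH budget figures (in $ billions, from the Vox piece
# quoted above) into constant 2000 dollars. The 2.9% annual rate is an
# illustrative assumption, not official inflation data.
nominal = {2000: 17.0, 2010: 31.0, 2013: 29.3}
rate = 0.029  # assumed annual inflation

# real value = nominal value deflated back to year-2000 dollars
real = {year: amount / (1 + rate) ** (year - 2000)
        for year, amount in nominal.items()}

for year in sorted(real):
    print(f"{year}: nominal ${nominal[year]:.1f}B -> "
          f"real ${real[year]:.1f}B (2000 dollars)")

# Percent decline from the nominal 2010 peak, measured in real terms
decline = 100 * (1 - real[2013] / real[2010])
print(f"Real decline, 2010-2013: {decline:.0f}%")
```

Under these assumptions the 2010–2013 drop is roughly 13% in real terms, versus about 5% in nominal dollars; moving the baseline back to the real-terms peak in the mid-2000s is how the still-larger figures like 20% arise.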