Author Archives: Mike White

How bad is the NIH budget really?

In the blowback to Francis Collins’ comments about budget cuts delaying an Ebola vaccine, there is a lot of confusion about just how much the NIH budget has declined.

The worst offender is the usually very good Sarah Kliff at Vox.com, who writes:

The NIH’s budget rose rapidly during the early 2000s, growing from $17 billion in 2000 to a peak of $31 billion in 2010. This meant more money for everything…

Funding then began to decline in 2010 and has continued to fall slightly over the past four years (this was during a period when Obama was in the White House, Democrats controlled the Senate, and Republicans controlled the House). By 2013, funding was down to $29.3 billion. These figures do not account for inflation.

Inflation – there’s the rub. Because when you do account for inflation, you see that the NIH budget was in decline long before 2010 – in fact things started to go south after 2004, as the AAAS budget analysis shows:

And depending on how you make the inflation adjustment, things can look even worse – you hear claims of a 20% decline tossed around. To understand how this works, let’s look at the numbers themselves:
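The arithmetic behind such claims is straightforward: divide each year’s nominal budget by a price index for that year. Here is a minimal sketch in Python using only the nominal figures quoted above; the index values are placeholders, not real BRDPI or CPI data, so the output is illustrative only.

```python
# Minimal sketch of an inflation adjustment, using the nominal figures quoted
# above ($17B in 2000, $31B in 2010, $29.3B in 2013). The price-index values
# below are placeholders, NOT actual BRDPI/CPI data; substitute the published
# index (e.g. the NIH Biomedical R&D Price Index) to reproduce the AAAS-style
# chart.

nominal_budget = {2000: 17.0, 2010: 31.0, 2013: 29.3}  # billions of nominal dollars

# Hypothetical price index, base year 2000 = 1.00; replace with real values.
price_index = {2000: 1.00, 2010: 1.35, 2013: 1.45}

def real_dollars(year, base_year=2000):
    """Convert a year's nominal budget into constant base-year dollars."""
    return nominal_budget[year] * price_index[base_year] / price_index[year]

for year in sorted(nominal_budget):
    print(f"{year}: ${nominal_budget[year]:.1f}B nominal = "
          f"${real_dollars(year):.1f}B in 2000 dollars")
```

Continue reading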

Will the future run out of technology?

If you haven’t seen it, this opinionated, provocative, and forceful essay by Bruce Gibney at Founders Fund is a great read. Starting with the question of why venture capital returns have generally sucked over the past two decades, he delves into the issue of real vs. fake technology, why we’ve been too quick to be satisfied with incremental progress, and whether there is that much revolutionary technology left to invent.

“What happened to the future?”:

Have we reached the end of the line, a sort of technological end of history? Once every last retailer migrates onto the Internet, will that be it? Is the developed world really developed, full stop? Again, it may be helpful to revisit previous conceptions of the future to see if there are any areas where VC might yet profitably invest.

In 1958, Ford introduced the Nucleon, an atom-powered, El Camino-shaped concept car. From the perspective of the present, the Nucleon seems audacious to the point of idiocy, but consider at the time Nautilus, the first atomic submarine, had just been launched in 1954 (and that less than ten years after the first atomic bomb). The Nucleon was ambitious – and a marketing gimmick, to be sure – but it was not entirely out of the realm of reason. Ten years later, in 1968, Arthur C. Clarke predicted imminent commercial space travel and genuine (if erratic) artificial intelligences. “2001: A Space Odyssey” was fiction, of course, but again, its future didn’t seem implausible at the time; the Apollo program was ready to put Armstrong on the moon less than a decade after Gagarin, and computers were becoming common place just a few years after Kilby and Noyce dreamed up the integrated circuit. The future envisioned from the perspective of the 1960s was hard to get to, but not impossible, and people were willing to entertain the idea. We now laugh at the Nucleon and Pan Am to the moon while applauding underpowered hybrid cars and Easyjet, and that’s sad. The future that people in the 1960s hoped to see is still the future we’re waiting for today, half a century later. Instead of Captain Kirk and the USS Enterprise, we got the Priceline Negotiator and a cheap flight to Cabo.

There are major exceptions: as we’ve seen, computers and communication technologies advanced enormously (even if Windows 2000 is a far cry from Hal 9000) and the Internet has evolved into something far more powerful and pervasive than its architects had ever hoped for. But a lot of what seemed futuristic then remains futuristic now, in part because these technologies never received the sustained funding lavished on the electronics industries. Commercializing the technologies that have languished seems as good a place as any to start looking for ideas.

Nature on the PhD Glut

This week Nature covers the online response to Eve Marder’s piece in eLife arguing that we shouldn’t shrink PhD programs. The article mentions my response and adds a few more comments by people with different perspectives. Go over and read it, and chime in with your opinions!

Can science fiction cure our innovation starvation?

Over at Pacific Standard this week, I look at Arizona State University’s fascinating Project Hieroglyph – a project to inspire us to think big with science fiction. The project, inspired in part by Neal Stephenson, just put out an excellent anthology of SF edited by Ed Finn and Kathryn Cramer, featuring thought experiments worked out as SF stories.

In the preface to the anthology, Stephenson looks back at the great technological achievements of the mid-20th century, notably the Apollo program, and worries that we are no longer a society that can get big things done. We’re unwilling to think big, attempt truly ground-breaking ideas, or solve society’s biggest problems. We need to unshackle our imaginations, and SF can help us do that.

You can read my response at Pacific Standard, but here’s the tl;dr version:

Scientists and engineers have plenty of imagination. What they don’t always have are the incentives and support to take big intellectual risks. Making the case that we should tackle big ideas that might fail is Project Hieroglyph’s most valuable contribution. Neal Stephenson writes that “the vast and radical innovations of the mid-twentieth century took place in a world that, in retrospect, looks insanely dangerous and unstable.” Pursuing insanely dangerous ideas—like nuclear weapons—is probably not the best way to build a better society. But risking failure is critical in science and technology. Unfortunately, failure is expensive, and the lack of money is probably the best explanation for why our society isn’t “executing the big stuff” that Stephenson wants to see. Scientists facing increasingly poor career prospects become risk-averse. Venture capitalists who complain that they only have 140 characters instead of flying cars are nevertheless hesitant to fund the expensive and risky development of technology that could be genuinely transformative. We certainly need imagination in science, and we should tell inspiring stories about big ideas. But to realize those ideas, we have to pay for them.

Thoughts?

Science Denial Then and Now

George Herbert’s “Vanity (I)” (1633)

Science has always made people uncomfortable. Witness the recent comments from the U.S. House Science (Denial) and Technology Committee:

We’ve had climate change since the day the earth was formed, whenever that was, depending on whatever you believe. — Rep. Bill Posey (R – FL)

I just don’t know how y’all prove those hypotheses going back fifty, a hundred, you might say thousands or not even millions of years, and how you postulate those forward. — Rep. Randy Weber (R – TX)

These confused politicians are part of a long tradition that stretches back to the beginnings of modern science itself. George Herbert was a friend of Francis Bacon, but the pious Herbert wanted nothing to do with Bacon’s radical ideas about the natural world. Herbert’s recent biographer John Drury explains:

Long before the discoveries of Darwin and modern astrophysics, some explanation of how everything had come into existence and how it worked was required. Divine creation provided that, had no challengers, and held the field. The natural world presented no moral problems. Rather, it provided ample scope for the investigation of the heavens and the earth which was beginning to gather pace among intellectuals, led by Herbert’s older friend Sir Francis Bacon. In his early poem ‘Vanity (I)’ Herbert was chary about such ‘philosophy’ as it was called, dismissing astronomy and chemistry as too speculative to occupy the valuable time of the practical Christian.

John Drury, Music at Midnight: The Life and Poetry of George Herbert, p. 12

Continue reading