Tag Archives: Linkonomicon

Will the future run out of technology?

If you haven’t seen it, this opinionated, provocative, and forceful essay by Bruce Gibney at Founders Fund is a great read. Starting with the question of why venture capital returns have generally sucked over the past two decades, he delves into the issue of real vs. fake technology, why we’ve been too quick to be satisfied with incremental progress, and whether there is that much revolutionary technology left to invent.

“What happened to the future?”:

Have we reached the end of the line, a sort of technological end of history? Once every last retailer migrates onto the Internet, will that be it? Is the developed world really developed, full stop? Again, it may be helpful to revisit previous conceptions of the future to see if there are any areas where VC might yet profitably invest.

In 1958, Ford introduced the Nucleon, an atom-powered, El Camino-shaped concept car. From the perspective of the present, the Nucleon seems audacious to the point of idiocy, but consider that at the time the Nautilus, the first atomic submarine, had just been launched in 1954 (and that less than ten years after the first atomic bomb). The Nucleon was ambitious – and a marketing gimmick, to be sure – but it was not entirely out of the realm of reason. Ten years later, in 1968, Arthur C. Clarke predicted imminent commercial space travel and genuine (if erratic) artificial intelligences. “2001: A Space Odyssey” was fiction, of course, but again, its future didn’t seem implausible at the time; the Apollo program was ready to put Armstrong on the moon less than a decade after Gagarin, and computers were becoming commonplace just a few years after Kilby and Noyce dreamed up the integrated circuit. The future envisioned from the perspective of the 1960s was hard to get to, but not impossible, and people were willing to entertain the idea. We now laugh at the Nucleon and Pan Am to the moon while applauding underpowered hybrid cars and Easyjet, and that’s sad. The future that people in the 1960s hoped to see is still the future we’re waiting for today, half a century later. Instead of Captain Kirk and the USS Enterprise, we got the Priceline Negotiator and a cheap flight to Cabo.

There are major exceptions: as we’ve seen, computers and communication technologies advanced enormously (even if Windows 2000 is a far cry from Hal 9000) and the Internet has evolved into something far more powerful and pervasive than its architects had ever hoped for. But a lot of what seemed futuristic then remains futuristic now, in part because these technologies never received the sustained funding lavished on the electronics industries. Commercializing the technologies that have languished seems as good a place as any to start looking for ideas.

Nature on the PhD Glut

This week Nature covers the online response to Eve Marder’s piece in eLife arguing that we shouldn’t shrink PhD programs. The article mentions my response and adds a few more comments by people with different perspectives. Go over and read it, and chime in with your opinions!

Can science fiction cure our innovation starvation?

Over at Pacific Standard this week, I look at Arizona State University’s fascinating Project Hieroglyph – a project to inspire us to think big with science fiction. The project, inspired in part by Neal Stephenson, just put out an excellent anthology of SF edited by Ed Finn and Kathryn Cramer, featuring thought experiments worked out as SF stories.

In the preface to the anthology, Stephenson looks back at the great technological achievements of the mid-20th century, notably the Apollo program, and worries that we are no longer a society that can get big things done. We’re unwilling to think big, attempt truly ground-breaking ideas, or solve society’s biggest problems. We need to unshackle our imaginations, and SF can help us do that.

You can read my response at Pacific Standard, but here’s the tl;dr version:

Scientists and engineers have plenty of imagination. What they don’t always have are the incentives and support to take big intellectual risks. Making the case that we should tackle big ideas that might fail is Project Hieroglyph’s most valuable contribution. Neal Stephenson writes that “the vast and radical innovations of the mid-twentieth century took place in a world that, in retrospect, looks insanely dangerous and unstable.” Pursuing insanely dangerous ideas—like nuclear weapons—is probably not the best way to build a better society. But risking failure is critical in science and technology. Unfortunately, failure is expensive, and the lack of money is probably the best explanation for why our society isn’t “executing the big stuff” that Stephenson wants to see. Scientists facing increasingly poor career prospects become risk-averse. Venture capitalists who complain that they only have 140 characters instead of flying cars are nevertheless hesitant to fund the expensive and risky development of technology that could be genuinely transformative. We certainly need imagination in science, and we should tell inspiring stories about big ideas. But to realize those ideas, we have to pay for them.

Thoughts?

Did NIH budget cuts delay an Ebola vaccine?

Mike Eisen makes an excellent point about NIH Director Francis Collins’ recent claims:

But what really bothers me the most about this is that, rather than trying to exploit the current hysteria about Ebola by offering a quid-pro-quo “Give me more money and I’ll deliver an Ebola vaccine”, Collins should be out there pointing out that the reason we’re even in a position to develop an Ebola vaccine is because of our long-standing investment in basic research, and that the real threat we face is not Ebola, but the fact that, by having slashed the NIH budget and made it increasingly difficult to have a stable career in science, we’re making it less and less likely that we’ll be equipped to handle all of the future challenges to public health that we’re going to face.

You can make a better case about the direct impact of funding cuts with the shrinking budget for CDC Public Health Preparedness funding, as Judy Stone notes over at Scientific American.

This year’s chemistry Nobel in context

One of my favorite science historians, Daniel Kevles, has a brief, insightful New Yorker piece that puts this year’s chemistry Nobel Prize in context:

Trying to see the fine structure of a cell with a light microscope is akin to attempting to discern the individual trees in a forest from a jetliner at thirty thousand feet.

Kevles explains how Betzig and Hell were obsessed with breaking the “Abbe limit,” the physical principle that the resolution of a light microscope is limited to roughly half the wavelength of the light it uses. Each of them figured out how to “argue with the laws of physics,” using some brilliant tricks with fluorescence. To someone outside of biology it may sound strange, but the development of fluorescent imaging and tagging technologies is turning out to be one of the most important advances in the history of biology, at least as revolutionary as the initial development of the microscope itself.
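For the curious, the Abbe limit can be written down as a simple formula (this is the standard textbook statement of the diffraction limit, not something from Kevles’s piece):

$$d = \frac{\lambda}{2\,\mathrm{NA}}$$

Here $d$ is the smallest separation a microscope can resolve, $\lambda$ is the wavelength of the light, and $\mathrm{NA}$ is the numerical aperture of the objective lens. For visible light ($\lambda \approx 400$–$700$ nm) and a good oil-immersion objective ($\mathrm{NA} \approx 1.4$), $d$ works out to roughly 200 nanometers – far coarser than the scale of the cell’s molecular machinery, which is exactly the barrier this year’s laureates found clever ways around.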