Tag Archives: Linkonomicon

Specula-tion

Rose Eveleth, an editor at The Atlantic, has spent the last few days being targeted by threats and abuse for being the first to say the same thing we did, but doing so while being a woman.

Today, she came back with a ridiculously good article – “Why No One Can Design a Better Speculum” – on the racist/misogynist history of the despised speculum and why we’ve been unable to substantially improve on the basic design for 150 years:

One might expect our modern spirit of innovation and disruption to turn its eye on the speculum. Surely something invented so long ago, under such dubious circumstances, could use an update. And many have tried. In the past 10 years, new designs for the speculum have continuously cropped up, only to fade away again. But while medical manufacturers continue to improve the design in little ways, there has been no real contender to displace the duck-billed model. The speculum’s history is inextricably linked to extreme racism and misogyny. But for all that, it just may be the best design we’re ever likely to have.
Rose Eveleth, The Atlantic

The article does include images of specula and technical illustrations of female anatomy, which may not be considered “Safe for Work” in your workplace.

Would you outsource your gel to a gel-informatician?

Sean Eddy explains why sequencing is replacing many older assays, and why biologists need to learn to analyze their own data.

“High throughput sequencing for neuroscience”:

If we were talking about a well-defined resource like a genome sequence, where the problem is an engineering problem, I’m fine with outsourcing or building skilled teams of bioinformaticians. But if you’re a biologist pursuing a hypothesis-driven biological problem, and you’re using a sequencing-based assay to ask part of your question, generically expecting a bioinformatician in your sequencing core to analyze your data is like handing all your gels over to some guy in the basement who uses a ruler and a lightbox really well.

Data analysis is not generic. To analyze data from a biological assay, you have to understand the question you’re asking, you have to understand the assay itself, and you have to have enough intuition to anticipate problems, recognize interesting anomalies, and design appropriate controls. If we were talking about gels, this would be obvious. You don’t analyze Northerns the same way you analyze Westerns, and you wouldn’t hand both your Westerns and your Northerns over to the generic gel-analyzing person with her ruler in the basement. But somehow this is what many people seem to want to do with bioinformaticians and sequence data.

It is true that sequencing generates a lot of data, and it is currently true that the skills needed to do sequencing data analysis are specialized and in short supply. What I want to tell you, though, is that those data analysis skills are easily acquired by biologists, that they must be acquired by biologists, and that they will be. We need to rethink how we’re doing bioinformatics.

I would add this: it takes some time to learn, but in the end it’s not that hard, people. Students in chemistry and physics routinely learn the requisite skills. We need to educate biologists who expect to do programming, math, and statistics.
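To give a flavor of what that looks like in practice, here is a tiny, hypothetical sketch (not from Eddy’s piece): reading a made-up table of RNA-seq read counts, normalizing to counts per million, and ranking genes by fold change, in a few dozen lines of standard-library Python. The file name, column names, and the analysis itself are invented for the illustration.

```python
# Toy illustration only: the input file and columns are hypothetical.
import csv
import math

def counts_per_million(counts):
    """Scale raw read counts so each sample sums to one million."""
    total = sum(counts.values())
    return {gene: c * 1e6 / total for gene, c in counts.items()}

# Hypothetical input: one row per gene, columns "gene", "control", "treated".
control, treated = {}, {}
with open("counts.tsv") as handle:
    for row in csv.DictReader(handle, delimiter="\t"):
        control[row["gene"]] = int(row["control"])
        treated[row["gene"]] = int(row["treated"])

ctrl_cpm = counts_per_million(control)
trt_cpm = counts_per_million(treated)

# Rank genes by log2 fold change, with a pseudocount to avoid log(0).
fold_changes = {
    gene: math.log2((trt_cpm[gene] + 1) / (ctrl_cpm[gene] + 1))
    for gene in control
}
for gene, lfc in sorted(fold_changes.items(), key=lambda kv: -abs(kv[1]))[:10]:
    print(f"{gene}\t{lfc:+.2f}")
```

The point isn’t that this toy script is a real analysis; it’s that the basic moves, reading tabular data, normalizing it, and asking a question of it, are well within reach of any biologist willing to sit down and learn them.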

What a cute baby…solar system

The folks at the Atacama Large Millimeter/submillimeter Array (ALMA) just released an insanely detailed image of a developing star and the surrounding disk of material that may become its planetary system.

Credit: ALMA (NRAO/ESO/NAOJ); C. Brogan, B. Saxton (NRAO/AUI/NSF)

At Slate, Phil Plait explains why this image is more than just aesthetically interesting.

From what we understand of planet formation, a star and disk this young shouldn’t have a planetary system evolved enough to create these gaps. That’s a bit of a shock. Research published in 2008 also indicated the presence of a new planet, and I’ll be curious to see how this new observation fits in with that work as well. – Phil Plait

HT: Amy Shira Teitel

Will the future run out of technology?

If you haven’t seen it, this opinionated, provocative, and forceful essay by Bruce Gibney at Founders Fund is a great read. Starting with the question of why venture capital returns have generally sucked over the past two decades, he delves into the issue of real vs. fake technology, why we’ve been too quick to be satisfied with incremental progress, and whether there is that much revolutionary technology left to invent.

“What Happened to the Future?”:

Have we reached the end of the line, a sort of technological end of history? Once every last retailer migrates onto the Internet, will that be it? Is the developed world really developed, full stop? Again, it may be helpful to revisit previous conceptions of the future to see if there are any areas where VC might yet profitably invest.

In 1958, Ford introduced the Nucleon, an atom-powered, El Camino-shaped concept car. From the perspective of the present, the Nucleon seems audacious to the point of idiocy, but consider that at the time the Nautilus, the first atomic submarine, had just been launched in 1954 (and that less than ten years after the first atomic bomb). The Nucleon was ambitious – and a marketing gimmick, to be sure – but it was not entirely out of the realm of reason. Ten years later, in 1968, Arthur C. Clarke predicted imminent commercial space travel and genuine (if erratic) artificial intelligences. “2001: A Space Odyssey” was fiction, of course, but again, its future didn’t seem implausible at the time; the Apollo program was ready to put Armstrong on the moon less than a decade after Gagarin, and computers were becoming commonplace just a few years after Kilby and Noyce dreamed up the integrated circuit. The future envisioned from the perspective of the 1960s was hard to get to, but not impossible, and people were willing to entertain the idea. We now laugh at the Nucleon and Pan Am to the moon while applauding underpowered hybrid cars and Easyjet, and that’s sad. The future that people in the 1960s hoped to see is still the future we’re waiting for today, half a century later. Instead of Captain Kirk and the USS Enterprise, we got the Priceline Negotiator and a cheap flight to Cabo.

There are major exceptions: as we’ve seen, computers and communication technologies advanced enormously (even if Windows 2000 is a far cry from HAL 9000) and the Internet has evolved into something far more powerful and pervasive than its architects had ever hoped for. But a lot of what seemed futuristic then remains futuristic now, in part because these technologies never received the sustained funding lavished on the electronics industries. Commercializing the technologies that have languished seems as good a place as any to start looking for ideas.

Nature on the PhD Glut

This week, Nature covers the online response to Eve Marder’s piece in eLife arguing that we shouldn’t shrink PhD programs. The article mentions my response and adds a few more comments from people with different perspectives. Go over and read it, and chime in with your opinions!