The Economist weighs in on what’s wrong with science

This week’s Economist is out with a provocative article about how science goes wrong. It’s a good piece, it raises some good points, and it reaches a conclusion that is completely the opposite of mine.

Science goes wrong, the piece argues, because “Modern scientists are doing too much trusting and not enough verifying—to the detriment of the whole of science, and of humanity.” I don’t think this is true, and the old adage that scientists need to “trust but verify” actually doesn’t reflect how scientists throughout history have worked. Scientists have never been particularly interested in spending much time and effort verifying anyone else’s results – unless it advances their own research. Science is not founded on the idea that results need to be replicated – it’s founded on the idea that results need to be fruitful. A scientist’s new ideas and experimental results become accepted because they lead to success in other people’s labs. They lead to progress in other people’s research programs.

So the focus on lack of replication is somewhat misguided. However, that doesn’t mean we should be sanguine about an epidemic of bullshit results. Bullshit results waste time and resources, put patients at risk, and erode public trust in science. Bullshit results mean that someone was more concerned with advancing their career through flashy publications than with doing research that others will build on. Unfortunately, and here I’m in agreement with the Economist and Bruce Alberts, pushing quick, flashy papers seems to be the only way to advance your career in today’s hypercompetitive science profession.

The solution isn’t to publish more negative results or replication studies – with the exception of research directly related to clinical practice. (In my training I was repeatedly told that you design studies so that the outcome is worth publishing, even if your results are negative.) A better solution is to create a reward system in which your career advances based on how fruitful your research is, rather than on mere publication counts in high-profile journals. Of course, I have no idea how the hell we do that, because ‘fruitfulness’ is hard to measure. Identifying fruitful problems is a matter of intuition and judgment, and is the most inscrutable trait of successful scientists. How do we promote that in an overcrowded profession populated with administratively overburdened scientists?

Author: Mike White

Genomes, Books, and Science Fiction

5 thoughts on “The Economist weighs in on what’s wrong with science”

  1. Hi Mike,

    “Science is not founded on the idea that results need to be replicated – it’s founded on the idea that results need to be fruitful. A scientist’s new ideas and experimental results become accepted because they lead to success in other people’s labs.”

    I feel like your description (above) is the “basic science” paradigm, but that a lot of the problems arise from the “translational science” paradigm. With basic science, we expect that an issue will be hashed out within a small community of researchers with overlapping expertise. They will argue with each other and often replicate research in the process of proceeding with their own research. Everyone takes for granted that it will take a decade or more to work out the issues, and then it will take a generation for that knowledge to be transmitted to engineering students who will apply it to practical problems.

    In contrast, the “translational science” paradigm aims for discoveries that will be applied within a few years. The results of a study get publicized widely, and then picked up by researchers who have absolutely no understanding of the experiments that were performed. These applied/clinical researchers then rush full speed ahead based on these unsubstantiated conclusions, and wonder why their research is unproductive. However, because funding agencies are impatient for practical applications, they give tons of money to these researchers who are conducting studies with poor foundations. Even on the translational side, the push for practical applications results in a large number of researchers all aiming to solve the exact same problem — so there is immense pressure to publish ASAP even when the conclusions are not well-founded (either due to theoretical or empirical inadequacies).

    I’ve seen a couple of instances of this problem in my own field. When dealing with crop pathogens (USDA funded), many of the researchers are looking for the “magic bullet” rather than trying to establish a solid understanding of the disease system. They can’t establish a methodical research program for any disease system because the funding fluctuates wildly as various diseases wax and wane. One consequence is that they have a strong incentive to “find something”, and I’ve found a lot of the research to be either pointless or unreliable.

    In another situation, I have been following the citations given to a high-profile study of bacterial population genetics. This study caught my attention because the conclusions were absurd, and the authors made some major errors in interpretation. While I’ve been developing a strong rebuttal (as a side project), I’ve been watching what others say about this study. The citations fall into two categories: one category consists of attempts to dismiss the conclusion based on laboratory evolution studies (which the authors then dismiss as beside the point); the other category consists of cancer research. (Yes, half of the citations for a bacterial population genetics paper are from cancer researchers.) These guys have absolutely no ability to interpret the analysis performed, but they are apparently fascinated by the existence of the mutational mechanisms for which the flawed study supposedly provides evidence.

    So anyway, my hypothesis is that the problem arises from treating science as a short-term economic activity rather than an intellectual activity. That’s just not what science is, and that type of science doesn’t work.

    1. Hi Adam,

      Thanks for your comment. I agree with you that the translational science paradigm that’s popular right now is problematic. The ‘magic bullet’ idea is representative of the problem – people seem to think that successful translational science means forcing translatable discoveries that can be applied quickly, which I think puts the incentives all out of whack. It pushes scientists to try to shoot the moon and pick up ‘paradigm shifting,’ ‘high impact’ discoveries – creating a ripe environment for rushed, overhyped, bullshit claims.

      I understand the temptation – new technologies, like single cell cancer genome sequencing, make us think we can brute force our way to fundamental new discoveries that will solve some pressing problem, but we’re deceiving ourselves.

      A proper translational science should be focused on the hard-enough problem of turning already existing discoveries into something useful, rather than the nearly impossible task of picking a big unsolved problem and forcing a ‘paradigm shift.’ As Thomas Kuhn argued, you don’t create paradigm shifts by looking for them; they come from working out the implications of the current paradigm.

      Your example of the mis-cited bacterial population genetics paper raises another issue – papers today are filled with meaningless, throw-away citations. Correct me if I’m wrong, but I imagine that those citations by cancer researchers don’t really engage (even mistakenly) with the substance of the bacterial paper; they’re just there to justify some general background or speculative claim in the intro or discussion.

      I think that’s common – so many papers don’t engage meaningfully with the ideas of most of the literature those papers cite.

      1. “so many papers don’t engage meaningfully with the ideas of most of the literature those papers cite.”

        Exactly. Which is one of the reasons why “impact factor” makes me queasy (though I still want high citation counts for my own papers). For the paper that I described, there has not been a single extension of the work — yet it has gained 30 citations in a little more than a year (either as criticisms or fluff citations).

  2. I get the Economist a day or two later than everyone else apparently. So I haven’t read the article yet, but your post makes me want to read it as soon as it comes in.

    Somewhat on a tangent, I do think that The Economist is one of the few places left to get well-written science news. So many of the other magazines, even the ones focused exclusively on science, seem to have dumbed down their content a bit over the years.

    1. My copy of the Economist arrived yesterday – I first saw the piece online via someone’s link.

      I disagree with the diagnosis in the article, but it’s provocative and worth a read – I think it will live up to your expectations for good science news/opinion.
