This week’s Economist is out with a provocative article about how science goes wrong. It’s a good piece that raises some good points, and it reaches a conclusion that is completely the opposite of mine.
Science goes wrong, the piece argues, because “Modern scientists are doing too much trusting and not enough verifying—to the detriment of the whole of science, and of humanity.” I don’t think this is true, and the old adage that scientists need to “trust but verify” doesn’t actually reflect how scientists throughout history have worked. Scientists have never been particularly interested in spending much time and effort verifying anyone else’s results – unless doing so advances their own research. Science is not founded on the idea that results need to be replicated – it’s founded on the idea that results need to be fruitful. A scientist’s new ideas and experimental results become accepted because they lead to success in other people’s labs and to progress in other people’s research programs.
So the focus on lack of replication is somewhat misguided. However, that doesn’t mean we should be sanguine about an epidemic of bullshit results. Bullshit results waste time and resources, put patients at risk, and erode public trust in science. Bullshit results mean that someone was more concerned with advancing their career through flashy publications than with doing research that others will build on. Unfortunately, and here I’m in agreement with the Economist and Bruce Alberts, pushing out quick, flashy papers seems to be the only way to advance your career in today’s hypercompetitive science profession.
The solution isn’t to publish more negative results or replication studies – with the exception of research directly related to clinical practice. (In my training I was repeatedly told that you design studies so that the outcome is worth publishing even if your results are negative.) A better solution is to create a reward system where your career advances based on how fruitful your research is, rather than on mere publication counts in high-profile journals. Of course, I have no idea how the hell we do that, because ‘fruitfulness’ is hard to measure. The knack for identifying fruitful problems is a matter of intuition and judgment, and it is perhaps the most inscrutable trait of successful scientists. How do we cultivate that in an overcrowded profession populated by administratively overburdened scientists?