What makes scientists cheat? It’s cheating week over at Pacific Standard, and in my contribution, I talk about why scientists cheat.
I come up with three reasons:
1) It’s easy. So much of science is built on trust; generally, nobody comes into your lab and checks your notebook, equipment, computer code, or raw data. This is true of PIs as well – they trust that their grad students and postdocs are not faking their data.
2) There are (some short-term) incentives to cheat in science. In today's hypercompetitive scientific community, there can be great pressure to cheat when you think your future in science is threatened. However, I think the long-term incentives don't favor cheating. Most serious cheaters seem to be caught quickly, the risks are huge, and the benefits to cheating scientists are more ephemeral than the benefits of many other types of fraud – scientists aren't stashing laundered money away in offshore bank accounts.
3) When the data doesn't go your way, it can be hard to accept that your idea is wrong. So much of science, especially experimental science, is a matter of judgment – what anomalous data is significant, and what is simply a screw-up. Scientific publications are, by necessity, a selection of the work done by the authors, not a report of everything they tried. There are moments in every scientist's career when some idea you knew just had to be true turns out to be wrong. Some cheaters are scientists who can't deal with being wrong.
What about fraud in grant applications? My impression is that this is more common than fraud in publications. I frequently hear rumors of people presenting “results” from preliminary experiments that they have yet to conduct… but they feel confident that they can predict the results, so they go ahead and fake it.
Fraud on grant applications is certainly wrong because of the dishonesty, but otherwise it doesn’t bother me too much, since I think there is already too much of an emphasis on preliminary results.
If you run a consistently productive lab but occasionally exaggerate or completely fake your preliminary results, that's dishonest, but in the end the grant money won't be wasted. If you fake your preliminary results and then completely bomb after you get the grant, you're not likely to get funded again.
“Most serious cheaters seem to be caught quickly, the risks are huge, and the benefits to cheating scientists are more ephemeral than the benefits of many other types of fraud”
Relating to your cost/benefit analysis, I think that the incentive to cheat on grant applications is much greater than the incentive to cheat on publications. First, the benefits are much more concrete. Second, the chance of getting caught is much lower, both because fewer people see the fraudulent data and because preliminary data isn't expected to be as reliable as published data, so you can be forgiven if things don't work out exactly as anticipated. My suspicion is that fraudulent preliminary data can be enough to help a proposal stand out from the competition, and the consequences are minimal because once the scientist is funded, he can still get SOMETHING done, even if it is a bit tangential to the initial proposal or a bit slower than what was predicted. … which is basically what you said.
However, even if the direct economic waste resulting from this fraud is minor, it is terribly corrosive to the scientific community. It would mean that equally productive but honest scientists fail to get grants. It would mean that the leaders of the field are habitual liars and sociopaths. These would be the exact type of people who would end up publishing fraudulent data or abusing their power within journals, universities, and professional societies. On top of this, fraudulent grant proposals would give people a lot of practice in committing fraud, which they could eventually apply to publication fraud.
I agree with you – regardless of the cost in wasted money, pervasive fraud and bullshit, especially when it comes to grants, is terribly corrosive. It damages the scientific community and erodes public trust, as we see in the string of recent press articles about how much non-reproducible science gets published. It's pretty demoralizing for those of us still in the earlier phases of making a career of it.