Retraction rate increases with impact factor – is this because of professional editors?

Folks have long noted the strong positive correlation between impact factor and retraction rate. There are three primary theories I’ve run across that attempt to explain why Nature, Science, Cell, etc. have substantially higher retraction rates than other journals:

1) Acceptable risk/fame and glory theory: High impact factor journals are willing to publish riskier but potentially higher-impact claims ASAP – more retractions are the price of getting high-impact science out early. The more cynical version of this theory is that high impact factor journals care more about maintaining a high impact factor than about the integrity of what they publish.

2) Heightened scrutiny theory: Papers published in high-visibility journals get more scrutiny, so flaws and fraud are more likely to be detected, even though errors and fraud happen at roughly the same rate everywhere. A related theory is the high-stakes fraud theory: if you’re going to commit fraud, you need the payoff to be worth the risk, so you’re going to submit to Nature and not BBA.

Anthony Bretscher, in an MBoC commentary on editors, proposes a new theory, which, based on my personal experience, I believe accounts for most of the correlation between retraction rate and impact factor:

Continue reading “Retraction rate increases with impact factor – is this because of professional editors?”

Do as we say, not as we did

In the recent Federation of American Societies for Experimental Biology (FASEB) Washington Update, there is a letter to NIH director Francis Collins supporting recommendations from the Biomedical Workforce Working Group’s recent report. The report recommends, among other things, shortening the average Ph.D. training time to five years while increasing training in skills targeted at scientific careers outside academia. How practical would it be to implement these recommendations? Continue reading “Do as we say, not as we did”