The Shaky Scientific Edifice: Comparing Retracted And Unretracted Papers

The good news is that retracted papers really are worse than papers that have not been retracted. The bad news is that unretracted papers still have plenty of problems.

That’s the inevitable conclusion of a fascinating new paper from Darrel Francis’s group in the UK. In the past few years the same group has exposed fraud and systematic errors in cardiac stem cell research, predicted the failure of the first major clinical trial of renal denervation, and forced the European Society of Cardiology to finally take action on potentially dangerous guidelines heavily influenced by the disgraced researcher Don Poldermans.

In their new study in the BMJ, the upstart researchers, led by Graham Cole, carefully examined 50 retracted papers and 50 unretracted papers published adjacent to them. They found 348 discrepancies in the retracted papers, compared with 131 in the unretracted papers. Retracted papers were especially likely to contain factual discrepancies, arithmetical errors, and missed P values.

The authors say that their study suggests “that identification of discrepancies, even by scientists without particular scientific specialism in the field, might provide an early alarm of unreliability.”

Because it is so difficult to identify problems, the authors propose formal measures to encourage crowd-sourcing: “Providing a post publication forum for readers to share knowledge of discrepancies is important, because, as our study shows, one reader may spot only a subset of the discrepancies noticed by multiple readers. We believe this finding highlights the difficulty of the task and the likely benefits of crowd sourcing when examining papers after publication.”
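To see the arithmetic behind that argument, here is a toy sketch in Python (the discrepancy lists are invented for illustration; nothing here comes from the study’s data) showing how reports from individual readers, each of whom notices only a subset of problems, pool into a larger combined list:

```python
# Toy illustration of the crowd-sourcing argument: each reader reports
# only the subset of discrepancies they happened to notice. The entries
# below are invented examples, not taken from the BMJ study.
reader_reports = [
    {"table 2 row sums", "impossible P value"},
    {"impossible P value", "mismatched sample size"},
    {"duplicated figure panel"},
]

# No single reader sees everything, but the union across readers does.
pooled = set().union(*reader_reports)
print(f"Best single reader spotted {max(len(r) for r in reader_reports)} discrepancies;")
print(f"pooled readers spotted {len(pooled)}.")
```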

They also propose that the original data from studies should be made available as soon as possible once questions have been raised:

“A journal could plan an automatic escalation protocol that would minimise consumption of editorial time. Once it receives a list of discrepancies, it could publish them immediately and request an online supplement of individual patient data from the paper’s authors, if such a supplement was not already provided in the original publication. The journal could publish the time in days and hours from request to receipt. In honestly conducted trials with innocent errors (for example, honest, simple transcription errors), these would be identified as such and quickly corrected. Readers might draw their own conclusions if the dataset is delayed or unavailable.”
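The quoted passage is a policy proposal, not code, but the workflow is concrete enough to sketch. Here is a minimal Python illustration, assuming a journal tracked each escalation as a set of timestamps; all class and method names are hypothetical, invented for this sketch rather than drawn from the paper:

```python
# A minimal sketch of the escalation protocol quoted above. Hypothetical
# names throughout; this illustrates the proposal, not any real system.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class DiscrepancyEscalation:
    paper_id: str
    discrepancies: list[str]
    received_at: datetime = field(default_factory=datetime.now)
    data_requested_at: Optional[datetime] = None
    data_received_at: Optional[datetime] = None

    def publish_discrepancies(self) -> None:
        # Step 1: publish the reader-submitted discrepancy list immediately.
        print(f"[{self.paper_id}] published {len(self.discrepancies)} discrepancies")

    def request_dataset(self) -> None:
        # Step 2: request individual patient data from the authors.
        self.data_requested_at = datetime.now()

    def record_dataset_receipt(self) -> None:
        # Step 3: log the moment the dataset actually arrives.
        self.data_received_at = datetime.now()

    def request_to_receipt(self) -> Optional[str]:
        # Step 4: the delay the journal would publish, in days and hours.
        if self.data_requested_at is None or self.data_received_at is None:
            return None  # delayed or unavailable: readers may draw conclusions
        delta = self.data_received_at - self.data_requested_at
        return f"{delta.days} days, {delta.seconds // 3600} hours"
```

In honestly conducted trials the request-to-receipt clock would stop quickly; in this sketch, a permanently empty data_received_at field is the machine-readable version of “readers might draw their own conclusions.”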

There’s one question they don’t specifically raise. Given the smaller but still significant number of discrepancies in the unretracted papers, what does their study say about the overall reliability of the scientific literature?
