Evaluating the Impact of Peer Review on the Completeness of Reporting in Imaging Diagnostic Test Accuracy Research

Sakib Kazi, Robert A. Frank, Jean-Paul Salameh, Nicholas Fabiano, Marissa Absi, Alex Pozdnyakov, Nayaar Islam, Daniël A. Korevaar, Jérémie F. Cohen, Patrick M. Bossuyt, Mariska M. G. Leeflang, Kelly D. Cobey, David Moher, Mark Schweitzer, Yves Menu, Michael Patlas, Matthew D. F. McInnes

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Background: Despite the nearly ubiquitous reported use of peer review among reputable medical journals, there is limited evidence to support the use of peer review to improve the quality of biomedical research and, in particular, imaging diagnostic test accuracy (DTA) research.

Purpose: To evaluate whether peer review of DTA studies published by imaging journals is associated with changes in completeness of reporting, transparency for risk of bias assessment, and spin.

Study Type: Retrospective cross-sectional study.

Study Sample: Cross-sectional study of articles published in Journal of Magnetic Resonance Imaging (JMRI), Canadian Association of Radiologists Journal (CARJ), and European Radiology (EuRad) before March 31, 2020.

Assessment: Initial submitted and final versions of manuscripts were evaluated for completeness of reporting using the Standards for Reporting Diagnostic Accuracy Studies (STARD) 2015 and STARD for Abstracts guidelines, transparency of reporting for risk of bias assessment based on Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2), and actual and potential spin using modified published criteria.

Statistical Tests: Two-tailed paired t-tests and paired Wilcoxon signed-rank tests were used for comparisons. A P value <0.05 was considered statistically significant.

Results: Of the 692 studies screened, we included 84 diagnostic accuracy studies accepted by three journals between 2014 and 2020 (JMRI = 30, CARJ = 23, and EuRad = 31). Completeness of reporting according to STARD 2015 increased significantly between initial submissions and final accepted versions (average reported items: 16.67 vs. 17.47, change of 0.80 [95% confidence interval 0.25–1.17]). No significant difference was found for the reporting of STARD for Abstracts (5.28 vs. 5.25, change of −0.03 [−0.15 to 0.11], P = 0.74), QUADAS-2 (6.08 vs. 6.11, change of 0.03 [−1.00 to 0.50], P = 0.92), actual "spin" (2.36 vs. 2.40, change of 0.04 [0.00 to 1.00], P = 0.39), or potential "spin" (2.93 vs. 2.81, change of −0.12 [−1.00 to 0.00], P = 0.23) practices.

Conclusion: Peer review is associated with a marginal improvement in completeness of reporting in published imaging DTA studies, but not with improvement in transparency for risk of bias assessment or reduction in spin.

Level of Evidence: 3.

Technical Efficacy Stage: 1.

Original language: English
Pages (from-to): 680-690
Number of pages: 11
Journal: Journal of Magnetic Resonance Imaging
Volume: 56
Issue number: 3
Early online date: 2022
DOIs
Publication status: Published - Sept 2022

Keywords

  • peer review
  • reporting guidelines
  • research methods
