A systematic review finds that diagnostic reviews fail to incorporate quality despite available tools

Penny Whiting, Anne W. S. Rutjes, Jacqueline Dinnes, Johannes B. Reitsma, Patrick M. M. Bossuyt, Jos Kleijnen

Research output: Contribution to journal › Review article › Academic › peer-review

62 Citations (Scopus)

Abstract

Background and Objective: To review existing quality assessment tools for diagnostic accuracy studies and to examine to what extent quality was assessed and incorporated in diagnostic systematic reviews.

Methods: Electronic databases were searched for tools to assess the quality of studies of diagnostic accuracy or guides for conducting, reporting, or interpreting such studies. The Database of Abstracts of Reviews of Effects (DARE: 1995-2001) was used to identify reviews of diagnostic studies to examine the practice of quality assessment of primary studies.

Results: Ninety-one quality assessment tools were identified. Only two provided details of tool development, and only a small proportion provided any indication of the aspects of quality they aimed to assess. None of the tools had been systematically evaluated. We identified 114 systematic reviews, of which 58 (51%) had performed an explicit quality assessment and were further examined. The majority of reviews used more than one method of incorporating quality.

Conclusion: Most tools to assess the quality of diagnostic accuracy studies do not start from a well-defined definition of quality. None has been systematically evaluated. The majority of existing systematic reviews fail to take differences in quality into account. Reviewers should consider quality as a possible source of heterogeneity. (C) 2005 Elsevier Inc. All rights reserved.
Original language: English
Pages (from-to): 1-12
Journal: Journal of Clinical Epidemiology
Volume: 58
Issue number: 1
Publication status: Published - 2005