Clinicians are right not to like Cohen's kappa

H.C.W. de Vet, L.B. Mokkink, C.B. Terwee, O.S. Hoekstra, D.L. Knol

Research output: Contribution to journal › Article › Academic › peer-review

202 Citations (Scopus)

Abstract

Clinicians are interested in observer variation in terms of the probability of other raters (interobserver) or themselves (intraobserver) obtaining the same answer. Cohen's κ is commonly used in the medical literature to express such agreement in categorical outcomes. The value of Cohen's κ, however, is not sufficiently informative, because it is a relative measure, while the clinician's question about observer variation calls for an absolute measure. Using an example in which the observed agreement and κ lead to different conclusions, we illustrate that percentage agreement is an absolute measure (a measure of agreement) and that κ is a relative measure (a measure of reliability). For the data to be useful for clinicians, measures of agreement should be used. The proportion of specific agreement, expressing the agreement separately for the positive and the negative ratings, is the most appropriate measure for conveying the relevant information in a 2 × 2 table and is most informative for clinicians.
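The abstract's point is easy to verify numerically. Below is a minimal sketch in Python, not code from the paper, that computes the measures discussed above for a 2 × 2 table of two raters; the cell labels a, b, c, d and the example counts are our own illustrative assumptions, following the usual convention (a = both raters positive, d = both negative, b and c = the two kinds of disagreement).

```python
def agreement_measures(a, b, c, d):
    """Agreement and reliability measures for a 2x2 table of two raters."""
    n = a + b + c + d
    p_o = (a + d) / n                    # observed (percentage) agreement: absolute
    # chance-expected agreement, computed from the marginal totals
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (p_o - p_e) / (1 - p_e)      # Cohen's kappa: relative (reliability)
    ppa = 2 * a / (2 * a + b + c)        # proportion of specific positive agreement
    pna = 2 * d / (2 * d + b + c)        # proportion of specific negative agreement
    return p_o, kappa, ppa, pna

# Hypothetical table: 80 joint positives, 5 joint negatives, 15 disagreements.
p_o, kappa, ppa, pna = agreement_measures(80, 10, 5, 5)
print(f"observed agreement = {p_o:.2f}, kappa = {kappa:.2f}")
print(f"specific agreement: positive = {ppa:.2f}, negative = {pna:.2f}")
```

With these hypothetical counts, observed agreement is 0.85 while κ is only about 0.29, and the specific agreements split into roughly 0.91 for the positive and 0.40 for the negative ratings: the kind of divergence between an absolute measure of agreement and a relative measure of reliability, and between positive and negative ratings, that the abstract describes.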
Original language: English
Article number: f2125
Pages (from-to): 1-7
Journal: British Medical Journal
Volume: 346
Publication status: Published - 2013
