The Kappa Paradox Explained

Bastiaan M. Derksen, Wendy Bruinsma, Johan Carel Goslings, Niels W. L. Schep

Research output: Contribution to journal › Review article › Academic › peer-review

Abstract

Observer reliability studies for fracture classification systems evaluate agreement using Cohen's κ and absolute agreement as outcome measures. Cohen's κ is a chance-corrected measure of agreement that ranges from −1 to 1, where 0 indicates agreement no better than chance and 1 indicates perfect agreement. Absolute agreement is the percentage of cases in which the observers assign the same rating. Some studies report high absolute agreement but a relatively low κ value, which is counterintuitive. This phenomenon is referred to as the Kappa Paradox. The objective of this article was to explain the statistical phenomenon of the Kappa Paradox and to help readers and researchers recognize and prevent it.
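
As a rough illustration of the paradox described in the abstract (the numbers below are hypothetical and not taken from the article), the following Python sketch computes Cohen's κ from a 2×2 agreement table using the standard formula κ = (P_o − P_e) / (1 − P_e). With a strongly skewed category prevalence, 90% absolute agreement can coincide with a κ near zero or even negative, because the chance-expected agreement P_e is itself very high.

```python
import numpy as np

# Hypothetical 2x2 agreement table for two observers classifying 100 fractures
# into two categories (illustrative numbers only, not from the article).
# Rows: observer 1, columns: observer 2.
table = np.array([[90, 5],
                  [5,  0]])

n = table.sum()

# Absolute (observed) agreement: proportion of cases on the diagonal.
p_o = np.trace(table) / n                      # (90 + 0) / 100 = 0.90

# Chance agreement expected from each observer's marginal totals.
row_marginals = table.sum(axis=1) / n          # observer 1: [0.95, 0.05]
col_marginals = table.sum(axis=0) / n          # observer 2: [0.95, 0.05]
p_e = np.sum(row_marginals * col_marginals)    # 0.95*0.95 + 0.05*0.05 = 0.905

# Cohen's kappa: agreement corrected for chance.
kappa = (p_o - p_e) / (1 - p_e)

print(f"absolute agreement = {p_o:.2f}")       # 0.90
print(f"chance agreement   = {p_e:.3f}")       # 0.905
print(f"Cohen's kappa      = {kappa:.2f}")     # about -0.05
```

Despite the observers agreeing on 90 of 100 cases, κ is slightly negative here, because almost all cases fall into a single category and the marginal distributions alone predict more than 90% agreement by chance.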
Original language: English
Pages (from-to): 482-485
Number of pages: 4
Journal: Journal of Hand Surgery
Volume: 49
Issue number: 5
Early online date: 2024
DOIs
Publication status: Published - May 2024

Keywords

  • Cohen
  • fracture classification
  • kappa
  • statistics
