Specific agreement on dichotomous outcomes can be calculated for more than two raters

Research output: Contribution to journal › Article › Academic › peer-review

27 Citations (Scopus)


Objective: For assessing interrater agreement, the concepts of observed agreement and specific agreement have been proposed. The situation of two raters and dichotomous outcomes has been described, whereas often multiple raters are involved. We aim to extend these concepts to more than two raters and examine how to calculate agreement estimates and 95% confidence intervals (CIs).

Study Design and Setting: As an illustration, we used a reliability study comprising the scores of four plastic surgeons classifying photographs of the breasts of 50 women after breast reconstruction as "satisfied" or "not satisfied." In a simulation study, we checked the hypothesized sample size for calculation of 95% CIs.

Results: For m raters, all pairwise tables [ie, m(m − 1)/2 tables] were summed. Then, the discordant cells were averaged before observed and specific agreements were calculated. The total number (N) in the summed table is m(m − 1)/2 times larger than the number of subjects (n); in the example, N = 300 compared with n = 50 subjects rated by m = 4 raters. A correction of n√(m − 1) was appropriate to find 95% CIs comparable to bootstrapped CIs.

Conclusion: The concepts of observed agreement and specific agreement can be extended to more than two raters with a valid estimation of the 95% CIs.
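The procedure described in the Results can be sketched in code: sum the 2 × 2 tables over all m(m − 1)/2 rater pairs, average the discordant cells, and compute observed agreement and the specific agreements for the positive and negative category. The sketch below is a minimal illustration under stated assumptions (ratings coded 0/1, function names hypothetical, a simple Wald interval with the abstract's n√(m − 1) effective sample size standing in for the paper's exact CI procedure); it is not the authors' published implementation.

```python
import math
from itertools import combinations

def pooled_agreement(ratings):
    """Observed and specific agreement for m raters, dichotomous outcomes.

    ratings[r][s] is rater r's score (0 or 1) for subject s.
    Sums the 2x2 tables of all m(m-1)/2 rater pairs; the discordant
    cells are averaged, which leaves the agreement formulas below
    unchanged because they use b + c only through their sum.
    """
    m = len(ratings)
    n = len(ratings[0])
    a = b = c = d = 0  # a: both score 1; d: both score 0; b, c: discordant
    for r1, r2 in combinations(range(m), 2):
        for s in range(n):
            x, y = ratings[r1][s], ratings[r2][s]
            if x == 1 and y == 1:
                a += 1
            elif x == 0 and y == 0:
                d += 1
            elif x == 1 and y == 0:
                b += 1
            else:
                c += 1
    N = m * (m - 1) // 2 * n          # total count in the summed table
    observed = (a + d) / N            # observed agreement
    sa_pos = 2 * a / (2 * a + b + c)  # specific agreement, "satisfied"
    sa_neg = 2 * d / (2 * d + b + c)  # specific agreement, "not satisfied"
    return observed, sa_pos, sa_neg

def wald_ci_corrected(p, n, m):
    """Approximate 95% CI for an agreement proportion p, using the
    corrected effective sample size n*sqrt(m - 1) from the abstract
    (assumption: a plain Wald interval on that effective n)."""
    n_eff = n * math.sqrt(m - 1)
    se = math.sqrt(p * (1 - p) / n_eff)
    return p - 1.96 * se, p + 1.96 * se
```

For the study in the abstract, `ratings` would be a 4 × 50 array of the surgeons' dichotomous scores, giving a summed table with N = 6 × 50 = 300 and CIs based on an effective sample size of 50√3.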

Original language: English
Pages (from-to): 85-89
Number of pages: 5
Journal: Journal of Clinical Epidemiology
Publication status: Published - 1 Mar 2017


  • Confidence intervals
  • Continuity correction
  • Fleiss correction
  • Observed agreement
  • Specific agreement
