TY - JOUR
T1 - Unified Parkinson's disease rating scale motor examination: are ratings of nurses, residents in neurology, and movement disorders specialists interchangeable?
AU - Post, Bart
AU - Merkus, Maruschka P.
AU - de Bie, Rob M. A.
AU - de Haan, Rob J.
AU - Speelman, Johannes D.
PY - 2005
Y1 - 2005
AB - The Unified Parkinson's Disease Rating Scale (UPDRS) is widely used for the clinical evaluation of Parkinson's disease (PD). We assessed the rater variability of the UPDRS Motor Examination (UPDRS-ME) of nurse practitioners, residents in neurology, and a movement disorders specialist (MDS) compared with a senior MDS. We assessed the videotaped UPDRS-ME of 50 PD patients. Inter-rater and intra-rater variability were estimated using weighted kappa (kappa(w)) and intraclass correlation coefficients (ICC). Additionally, inter-rater agreement was quantified by calculating the mean difference between 2 raters and its 95% limits of agreement. Intra-rater agreement was also estimated by calculating 95% repeatability limits. The kappa(w) and ICC statistics indicated good to very good inter-rater and intra-rater reliability for the majority of individual UPDRS items and for the UPDRS-ME sum score in all raters. For inter-rater agreement, however, the nurses, residents, and the MDS all consistently assigned higher scores than the senior MDS. Mean differences ranged between 1.7 and 5.4 (all differences P < 0.05), with rather wide 95% limits of agreement. The intra-rater 95% repeatability limits were also rather wide. We found considerable rater differences across the whole range of UPDRS-ME scores between a senior MDS and the nurse practitioners, residents in neurology, and the MDS. This finding suggests that the amount by which raters may disagree should be quantified before starting longitudinal studies of disease progression or clinical trials. Finally, evaluation of rater agreement should always include assessment of the extent of bias between different raters.
U2 - https://doi.org/10.1002/mds.20640
DO - 10.1002/mds.20640
M3 - Article
C2 - 16116612
SN - 0885-3185
VL - 20
SP - 1577
EP - 1584
JO - Movement Disorders
JF - Movement Disorders
IS - 12
ER -