TY - JOUR
T1 - Development and Internal Validation of a Prediction Model for Falls Using Electronic Health Records in a Hospital Setting
AU - Dormosh, Noman
AU - Damoiseaux-Volman, Birgit A.
AU - van der Velde, Nathalie
AU - Medlock, Stephanie
AU - Romijn, Johannes A.
AU - Abu-Hanna, Ameen
N1 - Funding Information: This work was supported by the innovation funds of Amsterdam UMC – Location AMC. The sponsor did not have any role or influence in study design, analysis, or reporting. Publisher Copyright: © 2023 The Authors
PY - 2023/7
Y1 - 2023/7
N2 - Objective: Fall prevention is important in many hospitals. Current fall-risk-screening tools have limited predictive accuracy, specifically for older inpatients, and their administration can be time-consuming. A reliable and easy-to-administer tool is desirable to identify older inpatients at higher fall risk. We aimed to develop and internally validate a prognostic prediction model for inpatient falls in older patients. Design: Retrospective analysis of a large cohort drawn from hospital electronic health record data. Setting and Participants: Older patients (≥70 years) admitted to a university medical center (2016 to 2021). Methods: The outcome was an inpatient fall occurring ≥24 hours after admission. Two prediction models were developed using regularized logistic regression in 5 imputed data sets: one without predictors indicating missing values (Model-without) and one with these additional missing-value indicator predictors (Model-with). We internally validated the whole model development strategy using 10-fold stratified cross-validation. The models were evaluated using discrimination (area under the receiver operating characteristic curve, AUC) and calibration (plot assessment). We determined whether the AUCs of the models differed significantly using the DeLong test. Results: Our data set included 21,286 admissions. In total, 470 (2.2%) involved a fall more than 24 hours after admission. The Model-without had 12 predictors and the Model-with had 13, of which 4 were indicators of missing values. The AUCs of the Model-without and the Model-with were 0.676 (95% CI 0.646-0.707) and 0.695 (95% CI 0.667-0.724), respectively; the difference was statistically significant (P = .013). Calibration was good for both models. Conclusions and Implications: Both the Model-with and the Model-without showed good calibration and fair discrimination, with the Model-with performing better. Our models showed performance competitive with well-established fall-risk-screening tools and have the advantage of being based on routinely collected data, which may substantially reduce the burden on nurses compared with nonautomatic fall-risk-screening tools.
AB - Objective: Fall prevention is important in many hospitals. Current fall-risk-screening tools have limited predictive accuracy, specifically for older inpatients, and their administration can be time-consuming. A reliable and easy-to-administer tool is desirable to identify older inpatients at higher fall risk. We aimed to develop and internally validate a prognostic prediction model for inpatient falls in older patients. Design: Retrospective analysis of a large cohort drawn from hospital electronic health record data. Setting and Participants: Older patients (≥70 years) admitted to a university medical center (2016 to 2021). Methods: The outcome was an inpatient fall occurring ≥24 hours after admission. Two prediction models were developed using regularized logistic regression in 5 imputed data sets: one without predictors indicating missing values (Model-without) and one with these additional missing-value indicator predictors (Model-with). We internally validated the whole model development strategy using 10-fold stratified cross-validation. The models were evaluated using discrimination (area under the receiver operating characteristic curve, AUC) and calibration (plot assessment). We determined whether the AUCs of the models differed significantly using the DeLong test. Results: Our data set included 21,286 admissions. In total, 470 (2.2%) involved a fall more than 24 hours after admission. The Model-without had 12 predictors and the Model-with had 13, of which 4 were indicators of missing values. The AUCs of the Model-without and the Model-with were 0.676 (95% CI 0.646-0.707) and 0.695 (95% CI 0.667-0.724), respectively; the difference was statistically significant (P = .013). Calibration was good for both models. Conclusions and Implications: Both the Model-with and the Model-without showed good calibration and fair discrimination, with the Model-with performing better. Our models showed performance competitive with well-established fall-risk-screening tools and have the advantage of being based on routinely collected data, which may substantially reduce the burden on nurses compared with nonautomatic fall-risk-screening tools.
KW - Accidental falls
KW - electronic health records
KW - fall prevention
KW - inpatient falls
KW - prediction models
KW - routinely collected data
UR - http://www.scopus.com/inward/record.url?scp=85153504251&partnerID=8YFLogxK
U2 - https://doi.org/10.1016/j.jamda.2023.03.006
DO - 10.1016/j.jamda.2023.03.006
M3 - Article
C2 - 37060922
SN - 1525-8610
VL - 24
SP - 964-970.e5
JO - Journal of the American Medical Directors Association
JF - Journal of the American Medical Directors Association
IS - 7
ER -