TY - GEN
T1 - Deep Kernel Learning for Mortality Prediction in the Face of Temporal Shift
AU - Rios, Miguel
AU - Abu-Hanna, Ameen
N1 - Publisher Copyright: © 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Neural models, with their ability to provide novel representations, have shown promising results in prediction tasks in healthcare. However, patient demographics, medical technology, and quality of care change over time. This often leads to a drop in the performance of neural models for prospective patients, especially in terms of their calibration. The deep kernel learning (DKL) framework may be robust to such changes, as it combines neural models with Gaussian processes, which are aware of prediction uncertainty. Our hypothesis is that out-of-distribution test points will result in probabilities closer to the global mean, thus preventing overconfident predictions. This, in turn, we hypothesise, will result in better calibration on prospective data. This paper investigates DKL’s behaviour when facing a temporal shift, which was naturally introduced when an information system that feeds a cohort database was changed. We compare DKL’s performance to that of a neural baseline based on recurrent neural networks. We show that DKL indeed produced better-calibrated predictions. We also confirm that DKL’s predictions were indeed less sharp. In addition, DKL’s discrimination ability even improved: its AUC was 0.746 (±0.014 std), compared to 0.739 (±0.028 std) for the baseline. The paper demonstrates the importance of including uncertainty in neural models, especially for their prospective use.
KW - Calibration
KW - Deep kernel learning
KW - Gaussian process
KW - Mortality prediction
KW - Temporal shift
KW - Time series
UR - http://www.scopus.com/inward/record.url?scp=85111377581&partnerID=8YFLogxK
DO - 10.1007/978-3-030-77211-6_22
M3 - Conference contribution
SN - 9783030772109
VL - 12721 LNAI
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 199
EP - 208
BT - Artificial Intelligence in Medicine - 19th International Conference on Artificial Intelligence in Medicine, AIME 2021, Proceedings
A2 - Tucker, Allan
A2 - Henriques Abreu, Pedro
A2 - Cardoso, Jaime
A2 - Pereira Rodrigues, Pedro
A2 - Riaño, David
PB - Springer Science and Business Media Deutschland GmbH
T2 - 19th International Conference on Artificial Intelligence in Medicine, AIME 2021
Y2 - 15 June 2021 through 18 June 2021
ER -