Deep learning-based classification of kidney transplant pathology: a retrospective, multicentre, proof-of-concept study

Jesper Kers, Roman D Bülow, Barbara M Klinkhammer, Gerben E Breimer, Francesco Fontana, Adeyemi Adefidipe Abiola, Rianne Hofstraat, Garry L Corthals, Hessel Peters-Sengers, Sonja Djudjaj, Saskia von Stillfried, David L Hölscher, Tobias T Pieters, Arjan D van Zuilen, Frederike J Bemelman, Azam S Nurmohamed, Maarten Naesens, Joris J T H Roelofs, Sandrine Florquin, Jürgen Floege, Tri Q Nguyen, Jakob N Kather, Peter Boor

Research output: Contribution to journal › Article › Academic › peer-review

41 Citations (Scopus)

Abstract

BACKGROUND: Histopathological assessment of transplant biopsies is currently the standard method to diagnose allograft rejection and can help guide patient management, but it is one of the most challenging areas of pathology, requiring considerable expertise, time, and effort. We aimed to analyse the utility of deep learning to preclassify histology of kidney allograft biopsies into three main broad categories (ie, normal, rejection, and other diseases) as a potential biopsy triage system focusing on transplant rejection.

METHODS: We performed a retrospective, multicentre, proof-of-concept study using 5844 digital whole slide images of kidney allograft biopsies from 1948 patients. Kidney allograft biopsy samples were identified by a database search in the Departments of Pathology of the Amsterdam UMC, Amsterdam, Netherlands (1130 patients) and the University Medical Center Utrecht, Utrecht, Netherlands (717 patients). 101 consecutive kidney transplant biopsies were identified in the archive of the Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany. Convolutional neural networks (CNNs) were trained to classify allograft biopsies as normal, rejection, or other diseases. Three-fold cross-validation (1847 patients) and deployment on an external real-world cohort (101 patients) were used for validation. The area under the receiver operating characteristic curve (AUROC) was the primary endpoint used to assess CNN performance.
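For readers unfamiliar with the primary endpoint, the following is a minimal, illustrative sketch (not the authors' code) of how a per-class AUROC with a bootstrapped confidence interval can be estimated for a three-class classifier (normal, rejection, other diseases). It assumes `y_true` holds integer class labels and `y_prob` holds per-class predicted probabilities; the class index and the number of bootstrap resamples are placeholders.

```python
# Illustrative sketch only: one-vs-rest AUROC with a simple bootstrap CI.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc(y_true, y_prob, class_idx, n_boot=10, seed=0):
    """AUROC for one class (one-vs-rest) plus a percentile bootstrap CI."""
    rng = np.random.default_rng(seed)
    y_bin = (np.asarray(y_true) == class_idx).astype(int)   # one-vs-rest labels
    scores = np.asarray(y_prob)[:, class_idx]                # predicted probability of this class
    point = roc_auc_score(y_bin, scores)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_bin), len(y_bin))        # resample with replacement
        if y_bin[idx].min() == y_bin[idx].max():
            continue                                         # resample lacks both classes; skip
        boot.append(roc_auc_score(y_bin[idx], scores[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return point, (lo, hi)

# Hypothetical usage: AUROC for the "rejection" class (class index 1 is an assumption).
# auroc, ci = bootstrap_auroc(y_true, y_prob, class_idx=1, n_boot=10)
```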

FINDINGS: Serial CNNs, the first classifying kidney allograft biopsies as normal (AUROC 0·87 [ten times bootstrapped CI 0·85-0·88]) or disease (0·87 [0·86-0·88]), and the second further classifying disease biopsies into rejection (0·75 [0·73-0·76]) or other diseases (0·75 [0·72-0·77]), showed similar AUROC in cross-validation and on deployment to independent real-world data (first CNN: normal AUROC 0·83 [0·80-0·85], disease 0·83 [0·73-0·91]; second CNN: rejection 0·61 [0·51-0·70], other diseases 0·61 [0·50-0·74]). A single CNN classifying biopsies as normal, rejection, or other diseases showed similar performance in cross-validation (normal AUROC 0·80 [0·73-0·84], rejection 0·76 [0·66-0·80], other diseases 0·50 [0·36-0·57]) and generalised well for the normal and rejection classes in the real-world data. Visualisation techniques highlighted rejection-relevant areas of biopsies in the tubulointerstitium.
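The serial design described above amounts to a two-stage triage cascade. The sketch below illustrates that control flow only, under stated assumptions: `cnn_normal_vs_disease` and `cnn_rejection_vs_other` are hypothetical model callables returning a disease or rejection probability for a whole slide image, and the thresholds are placeholders, not values from the study.

```python
# Minimal sketch of a two-stage (serial CNN) triage, with hypothetical models.
def triage_biopsy(wsi, cnn_normal_vs_disease, cnn_rejection_vs_other,
                  disease_threshold=0.5, rejection_threshold=0.5):
    """Return 'normal', 'rejection', or 'other disease' for one biopsy image."""
    p_disease = cnn_normal_vs_disease(wsi)       # stage 1: normal vs disease
    if p_disease < disease_threshold:
        return "normal"
    p_rejection = cnn_rejection_vs_other(wsi)    # stage 2: rejection vs other diseases
    return "rejection" if p_rejection >= rejection_threshold else "other disease"
```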

INTERPRETATION: This study showed that deep learning-based classification of transplant biopsies could support pathological diagnostics of kidney allograft rejection.

FUNDING: European Research Council; German Research Foundation; German Federal Ministries of Education and Research, Health, and Economic Affairs and Energy; Dutch Kidney Foundation; Human(e) AI Research Priority Area of the University of Amsterdam; and Max-Eder Programme of German Cancer Aid.

Original language: English
Pages (from-to): e18-e26
Journal: The Lancet Digital Health
Volume: 4
Issue number: 1
Early online date: 2021
DOIs
Publication status: Published - Jan 2022
