TY - JOUR
T1 - Trustworthy AI
T2 - Closing the gap between development and integration of AI systems in ophthalmic practice
AU - González-Gonzalo, Cristina
AU - Thee, Eric F.
AU - Klaver, Caroline C. W.
AU - Lee, Aaron Y.
AU - Schlingemann, Reinier O.
AU - Tufail, Adnan
AU - Verbraak, Frank
AU - Sánchez, Clara I.
N1 - Funding Information: Prof. dr. Klaver is a consultant for Bayer, Laboratoires Théa, Novartis, and CooperVision. Dr. Lee reports grants from Santen, personal fees from Genentech, personal fees from US FDA, personal fees from Johnson and Johnson, grants from Carl Zeiss Meditec, personal fees from Topcon, personal fees from Gyroscope, non-financial support from Microsoft, and grants from Regeneron, outside this work. Prof. dr. Schlingemann is a consultant for Bayer, Novartis, IDx/Digital Diagnostics, Oxurion, Apellis, and Ciana Therapeutics. Prof. Tufail is an Advisory Board member at Apellis, Allergan, Bayer, Genentech/Roche, IVERIC bio, Heidelberg Engineering, Kanghong, and Novartis, and co-founder of the following companies that have no products or products in development that relate to the contents of this manuscript: Oculogics, Vision AI. Funding Information: This work was supported by the Deep Learning for Medical Image Analysis (DLMedIA) research program of The Dutch Research Council (project number P15-26). Prof. Sánchez received funding from the Innovative Medicines Initiative 2 Joint Undertaking under grant agreement No. 116076. This Joint Undertaking receives support from the European Union's Horizon 2020 research and innovation program, EFPIA, and Carl Zeiss Meditec AG. Dr. Lee's funding sources: unrestricted and career development award from RPB, NEI/NIH K23EY029246, and NIA/NIH U19AG066567. Publisher Copyright: © 2021 The Authors
PY - 2022/9
Y1 - 2022/9
AB - An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Although these systems achieve performance close to or even superior to that of experts, there is a critical gap between the development and the integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI in closing that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not the responsibility of a single stakeholder. There is a pressing need for a collaborative approach in which the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establishing such multi-stakeholder interaction and identifies the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
KW - Artificial intelligence
KW - Deep learning
KW - Integration
KW - Machine learning
KW - Ophthalmic care
KW - Trustworthiness
UR - http://www.scopus.com/inward/record.url?scp=85121304626&partnerID=8YFLogxK
U2 - 10.1016/j.preteyeres.2021.101034
DO - 10.1016/j.preteyeres.2021.101034
M3 - Review article
C2 - 34902546
SN - 1350-9462
VL - 90
JO - Progress in Retinal and Eye Research
JF - Progress in Retinal and Eye Research
M1 - 101034
ER -