TY - GEN
T1 - Trust in Artificial Intelligence
T2 - International Workshops of the 26th European Conference on Artificial Intelligence, ECAI 2023
AU - Wünn, Tina
AU - Sent, Danielle
AU - Peute, Linda W. P.
AU - Leijnen, Stefan
N1 - Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
N2 - The healthcare sector has been confronted with rapidly rising costs and a shortage of medical staff. At the same time, the field of Artificial Intelligence (AI) has emerged as a promising area of research, offering potential benefits for healthcare. Despite this potential, the widespread implementation of AI in healthcare remains limited. One possible contributing factor is a lack of trust in AI algorithms among healthcare professionals. Previous studies have indicated that explainability plays a crucial role in establishing trust in AI systems. This study aims to explore trust in AI and its connection to explainability in a medical setting. A rapid review was conducted to provide an overview of the existing knowledge and research on trust and explainability. Building upon these insights, a dashboard interface was developed to present the output of an AI-based decision-support tool along with explanatory information, with the aim of enhancing the explainability of the AI for healthcare professionals. To investigate the impact of the dashboard and its explanations on healthcare professionals, an exploratory case study was conducted. The study encompassed an assessment of participants’ trust in the AI system, their perception of its explainability, and their evaluations of perceived ease of use and perceived usefulness. The initial findings from the case study indicate a positive correlation between perceived explainability and trust in the AI system. Our preliminary findings suggest that enhancing the explainability of AI systems could increase trust among healthcare professionals, which may in turn contribute to increased acceptance and adoption of AI in healthcare. However, a more elaborate experiment with the dashboard is essential.
AB - The healthcare sector has been confronted with rapidly rising costs and a shortage of medical staff. At the same time, the field of Artificial Intelligence (AI) has emerged as a promising area of research, offering potential benefits for healthcare. Despite this potential, the widespread implementation of AI in healthcare remains limited. One possible contributing factor is a lack of trust in AI algorithms among healthcare professionals. Previous studies have indicated that explainability plays a crucial role in establishing trust in AI systems. This study aims to explore trust in AI and its connection to explainability in a medical setting. A rapid review was conducted to provide an overview of the existing knowledge and research on trust and explainability. Building upon these insights, a dashboard interface was developed to present the output of an AI-based decision-support tool along with explanatory information, with the aim of enhancing the explainability of the AI for healthcare professionals. To investigate the impact of the dashboard and its explanations on healthcare professionals, an exploratory case study was conducted. The study encompassed an assessment of participants’ trust in the AI system, their perception of its explainability, and their evaluations of perceived ease of use and perceived usefulness. The initial findings from the case study indicate a positive correlation between perceived explainability and trust in the AI system. Our preliminary findings suggest that enhancing the explainability of AI systems could increase trust among healthcare professionals, which may in turn contribute to increased acceptance and adoption of AI in healthcare. However, a more elaborate experiment with the dashboard is essential.
KW - artificial intelligence
KW - dashboard
KW - explainability
KW - healthcare
KW - trust
UR - http://www.scopus.com/inward/record.url?scp=85184300163&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-50485-3_6
DO - 10.1007/978-3-031-50485-3_6
M3 - Conference contribution
SN - 9783031504846
VL - 1948 CCIS
T3 - Communications in Computer and Information Science
SP - 76
EP - 86
BT - Artificial Intelligence. ECAI 2023 International Workshops - XAI^3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI 2023, Proceedings
A2 - Nowaczyk, Sławomir
A2 - Biecek, Przemysław
A2 - Chung, Neo Christopher
A2 - Vallati, Mauro
A2 - Skruch, Paweł
A2 - Jaworek-Korjakowska, Joanna
A2 - Parkinson, Simon
A2 - Nikitas, Alexandros
A2 - Atzmüller, Martin
A2 - Kliegr, Tomáš
A2 - Schmid, Ute
A2 - Bobek, Szymon
A2 - Lavrac, Nada
A2 - Peeters, Marieke
A2 - van Dierendonck, Roland
A2 - Robben, Saskia
A2 - Mercier-Laurent, Eunika
A2 - Kayakutlu, Gülgün
A2 - Owoc, Mieczyslaw Lech
A2 - Mason, Karl
A2 - Wahid, Abdul
A2 - Bruno, Pierangela
A2 - Calimeri, Francesco
A2 - Cauteruccio, Francesco
A2 - Terracina, Giorgio
A2 - Wolter, Diedrich
A2 - Leidner, Jochen L.
A2 - Kohlhase, Michael
A2 - Dimitrova, Vania
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 30 September 2023 through 4 October 2023
ER -