Trust in Artificial Intelligence: Exploring the Influence of Model Presentation and Model Interaction on Trust in a Medical Setting

Tina Wünn, Danielle Sent, Linda W. P. Peute, Stefan Leijnen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

The healthcare sector has been confronted with rapidly rising costs and a shortage of medical staff. At the same time, the field of Artificial Intelligence (AI) has emerged as a promising area of research, offering potential benefits for healthcare. Despite this potential, the widespread implementation of AI in healthcare remains limited. One possible contributing factor is a lack of trust in AI algorithms among healthcare professionals. Previous studies have indicated that explainability plays a crucial role in establishing trust in AI systems. This study aims to explore trust in AI and its connection to explainability in a medical setting. A rapid review was conducted to provide an overview of the existing knowledge and research on trust and explainability. Building upon these insights, a dashboard interface was developed to present the output of an AI-based decision-support tool along with explanatory information, with the aim of enhancing the explainability of the AI for healthcare professionals. To investigate the impact of the dashboard and its explanations on healthcare professionals, an exploratory case study was conducted. The study encompassed an assessment of participants' trust in the AI system, their perception of its explainability, and their evaluations of perceived ease of use and perceived usefulness. The initial findings from the case study indicate a positive correlation between perceived explainability and trust in the AI system. Our preliminary findings suggest that enhancing the explainability of AI systems could increase trust among healthcare professionals, which may in turn contribute to greater acceptance and adoption of AI in healthcare. However, a more elaborate experiment with the dashboard is needed to substantiate these findings.
Original language: English
Title of host publication: Artificial Intelligence. ECAI 2023 International Workshops - XAI^3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI 2023, Proceedings
Editors: Sławomir Nowaczyk, Przemysław Biecek, Neo Christopher Chung, Mauro Vallati, Paweł Skruch, Joanna Jaworek-Korjakowska, Simon Parkinson, Alexandros Nikitas, Martin Atzmüller, Tomáš Kliegr, Ute Schmid, Szymon Bobek, Nada Lavrac, Marieke Peeters, Roland van Dierendonck, Saskia Robben, Eunika Mercier-Laurent, Gülgün Kayakutlu, Mieczyslaw Lech Owoc, Karl Mason, Abdul Wahid, Pierangela Bruno, Francesco Calimeri, Francesco Cauteruccio, Giorgio Terracina, Diedrich Wolter, Jochen L. Leidner, Michael Kohlhase, Vania Dimitrova
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 76-86
Number of pages: 11
Volume: 1948 CCIS
ISBN (Print): 9783031504846
DOIs
Publication status: Published - 2024
Event: International Workshops of the 26th European Conference on Artificial Intelligence, ECAI 2023 - Kraków, Poland
Duration: 30 Sept 2023 - 4 Oct 2023

Publication series

Name: Communications in Computer and Information Science
Volume: 1948 CCIS

Conference

Conference: International Workshops of the 26th European Conference on Artificial Intelligence, ECAI 2023
Country/Territory: Poland
City: Kraków
Period: 30/09/2023 - 4/10/2023

Keywords

  • artificial intelligence
  • dashboard
  • explainability
  • healthcare
  • trust