Ethical Considerations for Artificial Intelligence in Medical Imaging: Deployment and Governance

Jonathan Herington, Melissa D. McCradden, Kathleen Creel, Ronald Boellaard, Elizabeth C. Jones, Abhinav K. Jha, Arman Rahmim, Peter J. H. Scott, John J. Sunderland, Richard L. Wahl, Sven Zuehlsdorff, Babak Saboury

Research output: Contribution to journal › Article › Academic › peer-review

3 Citations (Scopus)

Abstract

The deployment of artificial intelligence (AI) has the potential to make nuclear medicine and medical imaging faster, cheaper, and both more effective and more accessible. This is possible, however, only if clinicians and patients feel that these AI medical devices (AIMDs) are trustworthy. Highlighting the need to ensure health justice by fairly distributing benefits and burdens while respecting individual patients’ rights, the AI Task Force of the Society of Nuclear Medicine and Molecular Imaging has identified 4 major ethical risks that arise during the deployment of AIMDs: autonomy of patients and clinicians, transparency of clinical performance and limitations, fairness toward marginalized populations, and accountability of physicians and developers. We provide preliminary recommendations for governing these ethical risks to realize the promise of AIMDs for patients and populations.
Original language: English
Pages (from-to): 1509-1515
Number of pages: 7
Journal: Journal of Nuclear Medicine
Volume: 64
Issue number: 10
Publication status: Published - 1 Oct 2023

Keywords

  • AI ethics
  • explainability
  • fairness
  • justice
  • software as medical device
