Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning

Cited by: 8
Authors
Metta, Carlo [1 ]
Beretta, Andrea [1 ]
Guidotti, Riccardo [2 ]
Yin, Yuan [3 ]
Gallinari, Patrick [3 ]
Rinzivillo, Salvatore [1 ]
Giannotti, Fosca [4 ]
Affiliations
[1] ISTI CNR, Pisa, Italy
[2] Univ Pisa, Pisa, Italy
[3] Sorbonne Univ, Criteo AI Lab, Paris, France
[4] Scuola Normale Super Pisa, Pisa, Italy
Funding
European Research Council;
关键词
Skin image analysis; Dermoscopic images; Explainable artificial intelligence; Adversarial autoencoders; ARTIFICIAL-INTELLIGENCE; BLACK-BOX;
DOI
10.1007/s41060-023-00401-z
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, XAI approaches are often tested only on generalist classifiers and do not reflect realistic problems such as those of medical diagnosis. In this paper, we aim to improve users' trust and confidence in automatic AI decision systems for medical skin lesion diagnosis by customizing an existing XAI approach to explain an AI model able to recognize different types of skin lesions. The explanation is generated through synthetic exemplar and counter-exemplar images of skin lesions, and our contribution offers the practitioner a way to highlight the crucial traits responsible for the classification decision. A validation survey with domain experts, beginners, and unskilled people shows that the use of explanations improves trust and confidence in the automatic decision system. Moreover, an analysis of the latent space adopted by the explainer reveals that some of the most frequent skin lesion classes are distinctly separated. This phenomenon may stem from the intrinsic characteristics of each class and may help resolve common misclassifications made by human experts.
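The exemplar/counter-exemplar strategy the abstract describes can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's actual method: the adversarial autoencoder and the black-box skin lesion classifier are replaced by toy stand-in functions (`encode`, `decode`, `black_box`), and all dimensions and parameter names are hypothetical. The core idea survives the simplification: sample the latent neighbourhood of an input, decode each sample, and split the decoded images by whether the black box keeps or flips its label.

```python
import numpy as np

# Toy stand-ins for the adversarial autoencoder and the black-box
# classifier; names and dimensions are illustrative, not the paper's.
rng = np.random.default_rng(0)

def encode(x):
    # Map a 4-D "image" to a 2-D latent code (toy encoder).
    return x[:2]

def decode(z):
    # Map a latent code back to a 4-D "image" (toy decoder).
    return np.concatenate([z, z ** 2])

def black_box(x):
    # Binary lesion classifier stub: decision depends on one trait.
    return int(x[0] > 0.0)

def exemplars_counter_exemplars(x, n_samples=200, scale=1.0):
    """Sample the latent neighbourhood of x. Decoded points that keep
    the black-box label are exemplars; points that flip it are
    counter-exemplars."""
    z = encode(x)
    label = black_box(x)
    exemplars, counters = [], []
    for _ in range(n_samples):
        z_new = z + rng.normal(scale=scale, size=z.shape)
        x_new = decode(z_new)
        (exemplars if black_box(x_new) == label else counters).append(x_new)
    return exemplars, counters

x = np.array([0.8, -0.3, 0.1, 0.2])
ex, cex = exemplars_counter_exemplars(x)
print(len(ex), len(cex))  # both sides of the decision boundary are sampled
```

In the actual system the decoded counter-exemplars are realistic-looking lesion images, so contrasting them with the exemplars highlights which visual traits drive the classification decision.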
Pages: 13