
Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning

Giannotti, Fosca
2023

Abstract

A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, XAI approaches are often tested only on generalist classifiers and on tasks that do not represent realistic problems such as medical diagnosis. In this paper, we aim to improve the trust and confidence of users in automatic AI decision systems for medical skin lesion diagnosis by customizing an existing XAI approach to explain an AI model able to recognize different types of skin lesions. The explanation is generated through the use of synthetic exemplar and counter-exemplar images of skin lesions, and our contribution offers the practitioner a way to highlight the crucial traits responsible for the classification decision. A validation survey with domain experts, beginners, and unskilled people shows that the use of explanations improves trust and confidence in the automatic decision system. Moreover, an analysis of the latent space adopted by the explainer reveals that some of the most frequent skin lesion classes are distinctly separated. This phenomenon may stem from the intrinsic characteristics of each class and may help resolve common misclassifications made by human experts.
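
The abstract describes generating explanations as synthetic exemplar and counter-exemplar images produced in the latent space of an autoencoder (the keywords mention adversarial autoencoders). Below is a minimal Python sketch of that general idea, not the paper's actual implementation: the encode, decode, and classify callables are hypothetical placeholders standing in for the trained autoencoder components and the black-box lesion classifier.

    import numpy as np

    def generate_exemplars(x, encode, decode, classify,
                           n_samples=200, sigma=0.5, n_keep=5, seed=0):
        """Sample the latent neighbourhood of image x and split the decoded
        synthetic images into exemplars (same black-box label as x) and
        counter-exemplars (a different label)."""
        rng = np.random.default_rng(seed)
        z = encode(x)                # latent code of the instance to explain
        label = classify(decode(z))  # black-box decision being explained

        exemplars, counter_exemplars = [], []
        for _ in range(n_samples):
            # Perturb the latent code: nearby codes decode to plausible
            # synthetic lesions resembling the original image.
            z_new = z + rng.normal(0.0, sigma, size=z.shape)
            x_new = decode(z_new)
            if classify(x_new) == label:
                exemplars.append(x_new)          # supports the decision
            else:
                counter_exemplars.append(x_new)  # contrasts the decision
        return exemplars[:n_keep], counter_exemplars[:n_keep]

Counter-exemplars are what let a practitioner see which visual traits would have to change for the classifier to assign a different lesion class, which is the basis for highlighting the crucial traits mentioned above.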
Subject area: INF/01 - Informatica (Computer Science)
Adversarial autoencoders; dermoscopic images; explainable artificial intelligence; skin image analysis; artificial intelligence; black box
Funding
  • SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics (SoBigData-PlusPlus), European Commission, Horizon 2020 Framework Programme, grant no. 871042
  • HumanE AI Network (HumanE-AI-Net), European Commission, Horizon 2020 Framework Programme, grant no. 952026
  • Critical Action Planning over Extreme-Scale Data (CREXDATA), European Commission, Horizon Europe Framework Programme, grant no. 101092749
  • Science and technology for the explanation of AI decision making (XAI), European Commission, Horizon 2020 Framework Programme, grant no. 834756
  • Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization (TAILOR), European Commission, Horizon 2020 Framework Programme, grant no. 952215
  • SoBigData.it - Strengthening the Italian RI for Social Mining and Big Data Analytics, European Union - NextGenerationEU, National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR)
Files in this record:
  • File: Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning.pdf
  • Access: open access
  • Type: Published version
  • License: Creative Commons
  • Size: 2.68 MB
  • Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/137124
Citations
  • Scopus: 2
  • Web of Science: 2