Explainable AI methods and their interplay with privacy protection / Naretto, Francesca; supervisor: GIANNOTTI, Fosca; external supervisor: Monreale, Anna; Scuola Normale Superiore, cycle 35, 17-Jul-2023.

Explainable AI methods and their interplay with privacy protection

NARETTO, Francesca
2023

Abstract

Recent years have seen the rise of Machine Learning models that are highly accurate but lack transparency in their decision-making processes. The field of Explainable Artificial Intelligence has emerged to address this issue, but many questions remain open. This Ph.D. Thesis presents two key contributions: (i) a novel variant of a local rule-based explanation method that provides stable and actionable explanations, and (ii) an investigation of the relationship between Data Privacy and Explainable Artificial Intelligence, examining their synergies and tensions.

For (i), an improved local explanation method is designed that uses factual logic rules to explain black-box decisions and provides actionable counterfactual logic rules suggesting how an instance should change to obtain a different outcome. Explanations are extracted from a decision tree that mimics the local behavior of the black-box model. The decision tree is obtained through a stability- and fidelity-driven ensemble learning approach, in which neighboring instances are synthetically generated by a genetic algorithm guided by the black-box behavior.

Regarding (ii), two perspectives on privacy are addressed: (a) how Explainable Artificial Intelligence can enhance individuals' privacy awareness and (b) how Explainable Artificial Intelligence can compromise privacy. For (a), a framework called Expert is developed to predict users' privacy risk and explain the prediction, with a focus on human mobility data; a visualization module displays the explanations for mobility data on a map. For (b), a new membership attack against Machine Learning models is proposed, and a methodology called reveal is introduced to evaluate the privacy risks posed by local explainers based on surrogate models. The experimental analysis shows that global explainers pose a greater threat to individual privacy than local explainers. These findings highlight the delicate balance between explainability and privacy in the development of Artificial Intelligence systems.
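
To make the surrogate-based explanation pipeline of contribution (i) more concrete, the following is a minimal Python sketch, not the thesis's actual method: a synthetic neighborhood around an instance is grown with a simple genetic-style search guided by the black box, and a shallow decision tree is fitted to the black box's labels on that neighborhood. All function names and parameters here (generate_neighborhood, explain_locally, pop_size, sigma) are hypothetical illustrations.

# Minimal, illustrative sketch of a local surrogate explanation;
# the thesis's actual method (ensemble, stability criteria, rule extraction) is not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text


def generate_neighborhood(x, black_box, n_generations=20, pop_size=200, sigma=0.3):
    """Evolve perturbed copies of x, rewarding closeness to x while keeping
    individuals whose black-box outcome differs, so the tree has something to split on."""
    rng = np.random.default_rng(0)
    pop = x + rng.normal(scale=sigma, size=(pop_size, x.shape[0]))
    target = black_box.predict(x.reshape(1, -1))[0]
    for _ in range(n_generations):
        preds = black_box.predict(pop)
        dist = np.linalg.norm(pop - x, axis=1)
        fitness = -dist + (preds != target)          # bonus for counterexamples
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]
        children = parents + rng.normal(scale=sigma / 2, size=parents.shape)  # mutation
        pop = np.vstack([parents, children])
    return pop


def explain_locally(x, black_box, max_depth=3):
    Z = generate_neighborhood(x, black_box)
    y_bb = black_box.predict(Z)                      # labels come from the black box
    surrogate = DecisionTreeClassifier(max_depth=max_depth).fit(Z, y_bb)
    fidelity = surrogate.score(Z, y_bb)              # local agreement with the black box
    return surrogate, fidelity


# toy usage on synthetic data
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
bb = RandomForestClassifier(random_state=0).fit(X, y)
tree, fid = explain_locally(X[0], bb)
print(f"local fidelity: {fid:.2f}")
print(export_text(tree))

In this reading, the factual rule for the instance is the root-to-leaf path the surrogate assigns to it, while counterfactual rules correspond to nearby paths ending in a different class; the stability- and fidelity-driven ensemble of trees described in the abstract is omitted from the sketch.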
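On the privacy-exposure side of contribution (ii), the sketch below illustrates a generic shadow-model membership inference attack under toy assumptions. It is not the attack or the reveal methodology introduced in the thesis; the data splits, models, and the use of confidence scores as attack features are all illustrative choices.

# Minimal, illustrative shadow-model membership inference attack:
# an attack classifier learns to tell members from non-members using model confidences.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
X_target, X_shadow, y_target, y_shadow = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model: the model under attack, trained on half of its data pool.
Xt_in, Xt_out, yt_in, yt_out = train_test_split(X_target, y_target, test_size=0.5, random_state=1)
target = RandomForestClassifier(random_state=0).fit(Xt_in, yt_in)

# Shadow model: mimics the target on disjoint data the attacker controls,
# so membership labels ("in" vs "out") are known to the attacker.
Xs_in, Xs_out, ys_in, ys_out = train_test_split(X_shadow, y_shadow, test_size=0.5, random_state=2)
shadow = RandomForestClassifier(random_state=0).fit(Xs_in, ys_in)

# Attack training set: shadow confidences labeled by membership.
attack_X = np.vstack([shadow.predict_proba(Xs_in), shadow.predict_proba(Xs_out)])
attack_y = np.concatenate([np.ones(len(Xs_in)), np.zeros(len(Xs_out))])
attack = LogisticRegression(max_iter=1000).fit(attack_X, attack_y)

# Evaluate against the target: how well are members of the target's
# training data separated from non-members?
test_X = np.vstack([target.predict_proba(Xt_in), target.predict_proba(Xt_out)])
test_y = np.concatenate([np.ones(len(Xt_in)), np.zeros(len(Xt_out))])
print(f"membership attack accuracy: {attack.score(test_X, test_y):.2f}")

An analogous evaluation for explainers would feed the attack with the artifacts an explainer exposes (for example, a surrogate model's outputs) rather than the target's confidences, which is the kind of risk assessment the abstract attributes to the reveal methodology.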
Defense date: 17-Jul-2023
Disciplinary sector: INF/01 - Computer Science
PhD programme: Data science
Cycle: 35
Institution: Scuola Normale Superiore
Supervisor: GIANNOTTI, Fosca
External supervisor: Monreale, Anna
Files in this record:
Naretto-PhDThesis.pdf (Open Access from 17/07/2024)
Type: PhD Thesis
License: Read Only
Size: 24.63 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/133984