
Evaluating the privacy exposure of interpretable global explainers

Francesca Naretto; Fosca Giannotti
In press

Abstract

In recent years we have witnessed the diffusion of AI systems based on powerful Machine Learning models, which find application in many critical contexts such as medicine, financial markets, and credit scoring. In such contexts it is particularly important to design Trustworthy AI systems that guarantee transparency of their decision reasoning as well as privacy protection. Although many works in the literature have addressed the lack of transparency and the risk of privacy exposure of Machine Learning models, the privacy risks of explainers have not been adequately studied. This paper presents a methodology for evaluating the privacy exposure raised by interpretable global explainers that are able to imitate the original black-box classifier. Our methodology exploits the well-known Membership Inference Attack. The experimental results highlight that global explainers based on interpretable trees lead to an increase in privacy exposure.
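The Membership Inference Attack mentioned above can be illustrated with a minimal shadow-model sketch: an attack classifier is trained on a shadow model's prediction probabilities to distinguish training members from non-members, then evaluated against the target model. This is a generic sketch assuming scikit-learn; the synthetic dataset, model choices, and split sizes are illustrative and not taken from the paper.

```python
# Minimal shadow-model Membership Inference Attack (MIA) sketch.
# Assumptions: scikit-learn available; dataset and models are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)

# Disjoint data for the target model and for the attacker's shadow model.
X_tgt, X_sh, y_tgt, y_sh = train_test_split(X, y, test_size=0.5, random_state=0)
Xt_in, Xt_out, yt_in, yt_out = train_test_split(X_tgt, y_tgt, test_size=0.5, random_state=0)
Xs_in, Xs_out, ys_in, ys_out = train_test_split(X_sh, y_sh, test_size=0.5, random_state=0)

# Target black box trained on its "in" (member) split.
target = RandomForestClassifier(random_state=0).fit(Xt_in, yt_in)
# Shadow model mimics the target's training procedure on the attacker's data.
shadow = RandomForestClassifier(random_state=0).fit(Xs_in, ys_in)

# Attack training set: shadow prediction probabilities,
# labeled member (1) for training points, non-member (0) otherwise.
A_X = np.vstack([shadow.predict_proba(Xs_in), shadow.predict_proba(Xs_out)])
A_y = np.concatenate([np.ones(len(Xs_in)), np.zeros(len(Xs_out))])
attack = LogisticRegression().fit(A_X, A_y)

# Evaluate the attack against the target model's outputs.
T_X = np.vstack([target.predict_proba(Xt_in), target.predict_proba(Xt_out)])
T_y = np.concatenate([np.ones(len(Xt_in)), np.zeros(len(Xt_out))])
print("attack accuracy:", attack.score(T_X, T_y))
```

The paper's methodology applies this kind of attack to an interpretable global explainer (a surrogate of the black box) rather than only to the black box itself; replacing `target` with such a surrogate is the analogous setup.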
Academic discipline: INF/01 - Informatica (Computer Science)
The Fourth IEEE International Conference on Cognitive Machine Intelligence
Virtual Conference
2022-12-14 - 2022-12-16
Horizon 2020
This work is supported by the EU H2020 projects SoBigData++ (Grant Id 871042), XAI (Grant Id 834756), TAILOR (G.A. 952215), and HumanE-AI-Net (Grant Id 952026).
Files in this product:

File: CogMi (1).pdf
Access: Closed access
Type: Accepted version (post-print)
License: Not public
Size: 249.06 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/125582