
Ensemble Counterfactual Explanations for Churn Analysis

Samuele Tonati; Roberto Pellungrini; Marzio Di Vece; Fosca Giannotti
2025

Abstract

Counterfactual explanations play a crucial role in interpreting and understanding the decision-making process of complex machine learning models, offering insights into why a particular prediction was made and how it could be altered. However, individual counterfactual explanations generated by different methods may vary significantly in their quality, diversity, and coherence with the black-box prediction. This is especially important in financial applications such as churn analysis, where customer retention officers could explore different approaches and solutions with clients to prevent churning. The officer's capability to modify and explore different explanations is pivotal to their ability to provide feasible solutions. To address this challenge, we propose an evaluation framework implemented as an ensemble approach that combines state-of-the-art counterfactual generation methods with a linear combination score over desired properties to select the most appropriate explanation. We conduct our experiments on three publicly available churn datasets from different domains. Our experimental results demonstrate that the ensemble of counterfactual explanations provides more diverse and comprehensive insights into model behavior than individual methods alone, each of which suffers from specific weaknesses. By aggregating, evaluating, and selecting multiple explanations, our approach enhances the diversity of the explanation, highlights common patterns, and mitigates the limitations of any single method, offering the user the ability to tune the explanation properties to their needs.
Academic field: INFO-01/A - Computer Science
Discovery Science 2024
Pisa
October 14–16, 2024
Discovery Science 2024 : Springer Proceedings
Explainable AI; Counterfactual Explanations; Churn Analysis
Funding:

   PNRR Partenariati Estesi - FAIR - Future Artificial Intelligence Research
   Italian Ministry of Education, University and Research

   It takes two to tango: a synergistic approach to human-machine decision making (TANGO)
   European Commission, Grant Agreement n. 101120763

   PNRR Infrastrutture di Ricerca - SoBigData.it - Strengthening the Italian RI for Social Mining and Big Data Analytics
   Italian Ministry of Education, University and Research, IR0000013

   Science and technology for the explanation of AI decision making (XAI)
   European Commission, H2020, Grant n. 834756
Files in this product:

File: 978-3-031-78980-9_21.pdf
Access: open access
Type: Published version
License: Creative Commons
Size: 456.06 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/147064
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: not available
  • OpenAlex: not available