MAINLE: a Multi-Agent, Interactive, Natural Language Local Explainer of Classification Tasks

Gezici, Gizem; Giannotti, Fosca
2025

Abstract

There is an increasing need to explain machine learning decisions in a way that is understandable even to non-expert users. In this paper, we introduce a multi-agent architecture that provides interactive explanations for classification tasks based on a range of machine learning algorithms, so that end-users can obtain answers in natural language. Our architecture is composed of four agents that convert any classifier into a surrogate Decision Tree around the neighbourhood of a classification instance, which is then translated into a natural language explanation that can be explored further in an interactive way. We validate our approach on publicly available datasets using different classification methods, discuss the relevance of the architecture along five quality attributes, and perform a user study to evaluate the generated explanations. Our results show that the proposed architecture generates simplified explanations that, across all evaluated criteria, are more understandable for non-expert users than those given directly by a single explainer.
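
To illustrate the core technique the abstract describes, the sketch below fits a local surrogate Decision Tree on a synthetic neighbourhood around a single instance, labelled by a black-box classifier. This is a minimal illustration of the general local-surrogate idea, not the paper's MAINLE implementation: the dataset, the Gaussian perturbation scheme, and the tree depth are assumptions chosen for demonstration.

    # Minimal sketch of a local surrogate Decision Tree (assumptions noted above;
    # this is NOT the paper's MAINLE code).
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    X, y = data.data, data.target

    # Train an opaque "black-box" model standing in for any classifier.
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    # Pick one instance to explain and sample a synthetic neighbourhood
    # around it (Gaussian perturbations scaled by each feature's std).
    instance = X[0]
    rng = np.random.default_rng(0)
    neighbourhood = instance + rng.normal(
        scale=0.3 * X.std(axis=0), size=(1000, X.shape[1])
    )

    # Label the neighbourhood with black-box predictions and fit an
    # interpretable surrogate Decision Tree on those labels.
    labels = black_box.predict(neighbourhood)
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(neighbourhood, labels)

    # Local fidelity: how well the surrogate mimics the black box nearby.
    print("local fidelity:", surrogate.score(neighbourhood, labels))

    # The tree's rules are the raw material for a textual explanation.
    print(export_text(surrogate, feature_names=list(data.feature_names)))

In a pipeline like the one the paper describes, the decision path of the surrogate tree for the explained instance would then be verbalized into a natural language explanation that the user can explore interactively.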
Academic field: INFO-01/A - Computer Science
European Conference, ECML PKDD 2025
Porto
September 15–19, 2025
Machine Learning and Knowledge Discovery in Databases: Research Track: European Conference, ECML PKDD 2025, Porto, Portugal, September 15–19, 2025, Proceedings, Part IV
Springer
ISBN 9783032060778
ISBN 9783032060785
Explainable AI; Conversational AI; Model-agnostic explanations; Local explanation
Funding:
   Science and technology for the explanation of AI decision making (XAI), European Commission, H2020, grant 834756
   Emergent awareness from minimal collectives (EMERGE), European Commission, Horizon Europe Framework Programme, grant 101070918
Files in this record:
   preprint_ecml_pkdd_2025_research_1246.pdf
   Access: Closed access (copy available on request)
   Type: Published version
   Licence: All rights reserved
   Size: 539.98 kB
   Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/158646
Citations
  • PMC: ND
  • Scopus: 0
  • ISI: ND
  • OpenAlex: 0