Explanation Methods for Sequential Data Models / Spinnato, Francesco; external supervisor: Monreale, Anna; Scuola Normale Superiore, cycle 36, 09-May-2024.

Explanation Methods for Sequential Data Models

SPINNATO, Francesco
2024

Abstract

Sequential data is integral to many fields and plays a fundamental role in high-stakes decision-making in domains such as healthcare, finance, and transportation. However, state-of-the-art approaches for sequential data prediction are usually black-box models that are hardly interpretable from a human standpoint. In critical domains, the ability to explain a model's decisions is vital to establishing a trustworthy relationship between human experts and AI systems. Effective eXplainable AI (XAI) methods for sequential data can thus provide deeper insights across various domains, enhancing trust in machine learning decisions and reinforcing expert accountability in decision-making processes. This work tackles the challenge of explaining sequential data models from three distinct angles: the input, the output, and the explanation. Regarding the input, it focuses on the diverse kinds of sequential data, proposing a comprehensive definition that encompasses forms such as time series, trajectories, and text. The output pertains to the target variable in supervised learning, which can be either categorical or continuous, as in classification and regression tasks. Lastly, this work focuses on the explanation, the core of XAI, presenting various visualization techniques to aid users in understanding predictions from sequential data models. We analyze different combinations of input, output, and explanation types, proposing solutions tailored to the unique requirements of each task and challenge.
9 May 2024
Academic field INF/01 - Computer Science
Mathematics and Computer Science
36
explainable AI; time series; sequences; machine learning; classification; regression
Monreale, Anna
Nanni, Mirco
Guidotti, Riccardo
Scuola Normale Superiore
Files in this item:

Tesi.pdf (open access)
Description: PhD thesis
Type: Published version
License: Not specified
Size: 11.15 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/157600