
Explainable AI for Time Series Classification: A Review, Taxonomy and Research Directions

Spinnato, Francesco
2022

Abstract

Time series data is increasingly used in a wide range of fields, and it is often relied on in crucial applications and high-stakes decision-making. For instance, sensors generate time series data to recognize different types of anomalies through automatic decision-making systems. Typically, these systems are realized with machine learning models that achieve top-tier performance on time series classification tasks. Unfortunately, the logic behind their predictions is opaque and hard to understand from a human standpoint. Recently, we have observed a steady increase in the development of explanation methods for time series classification, which justifies the need to structure and review the field. In this work, we (a) present the first extensive literature review on Explainable AI (XAI) for time series classification, (b) categorize the research field through a taxonomy subdividing the methods into time points-based, subsequences-based, and instance-based approaches, and (c) identify open research directions regarding the type of explanations and the evaluation of explanations and interpretability.
Disciplinary sector: INF/01 - Informatica (Computer Science)
Keywords: Explainable artificial intelligence; interpretable machine learning; temporal data analysis; time series classification
Funding
   SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics (SoBigData-PlusPlus), European Commission, Horizon 2020 Framework Programme, grant 871042
   HumanE AI Network (HumanE-AI-Net), European Commission, Horizon 2020 Framework Programme, grant 952026
   Science and technology for the explanation of AI decision making (XAI), European Commission, Horizon 2020 Framework Programme, grant 834756
   Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization (TAILOR), European Commission, Horizon 2020 Framework Programme, grant 952215
   Social Explainable Artificial Intelligence (SAI), CHIST-ERA, grant CHIST-ERA-19-XAI-010
   SAI: Social Explainable Artificial Intelligence, UK Research and Innovation, EPSRC, grant EP/V055712/1
Files in this record:
P3 - Explainable_AI_for_Time_Series_Classification_A_Review_Taxonomy_and_Research_Directions (1).pdf
   Access: open access
   Type: Published version
   License: Creative Commons
   Size: 2.01 MB
   Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/137184
Citations
  • PubMed Central: not available
  • Scopus: 28
  • Web of Science (ISI): 16