Human-driven FOL explanations of deep learning

Giannini, Francesco; Gori, Marco
2020

Abstract

Deep neural networks are usually considered black boxes: their complex internal architecture cannot straightforwardly provide human-understandable explanations of how they behave. Indeed, Deep Learning is still viewed with skepticism in real-world domains where incorrect predictions may have critical effects. This is one of the reasons why Explainable Artificial Intelligence (XAI) techniques have gained considerable attention in the scientific community in recent years. In this paper, we focus on multi-label classification, proposing a neural network that learns the relationships among the predictors associated with each class, yielding First-Order Logic (FOL)-based descriptions. The explanation-related network and the classification-related network are learned jointly, thus implicitly introducing a latent dependency between the development of the explanation mechanism and the development of the classifiers. Our model can integrate human-driven preferences that guide the learning-to-explain process, and it is presented in a unified framework. Different typologies of explanations are evaluated in distinct experiments, showing that the proposed approach discovers new knowledge and can improve classifier performance.
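To make the joint-training idea concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a multi-label classifier and a linear "explainer" over the per-class activations are optimized together, so the explanation mechanism and the classifiers co-develop. Thresholding the explainer's weights afterwards would be one plausible way to read off FOL-like implications among classes (e.g., class_i -> class_j). All module names, the loss weight lam, and the diagonal masking are illustrative assumptions.

# Hedged sketch (assumptions labeled above): jointly training a
# multi-label classifier with an "explainer" that learns relationships
# among the per-class predictors.
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES = 64, 5

# f: input features -> per-class logits (multi-label classifier).
classifier = nn.Sequential(
    nn.Linear(N_FEATURES, 32), nn.ReLU(),
    nn.Linear(32, N_CLASSES),
)
# One linear predictor per class: reconstructs class j from the OTHER
# class activations, so its weights relate the predictors to each other.
explainer = nn.Linear(N_CLASSES, N_CLASSES, bias=False)

bce = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(
    list(classifier.parameters()) + list(explainer.parameters()), lr=1e-3
)

def train_step(x, y, lam=0.1):
    """One joint update: classification loss + lam * explanation loss."""
    logits = classifier(x)
    probs = torch.sigmoid(logits)
    # Zero the diagonal so each class is explained only by the others
    # (illustrative masking choice, not taken from the paper).
    with torch.no_grad():
        explainer.weight.fill_diagonal_(0.0)
    expl_logits = explainer(probs)
    # Joint objective: gradients from the explanation term also flow
    # into the classifier, creating the latent dependency between them.
    loss = bce(logits, y) + lam * bce(expl_logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random data.
x = torch.randn(16, N_FEATURES)
y = (torch.rand(16, N_CLASSES) > 0.5).float()
print(train_step(x, y))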
Sector INFO-01/A - Computer Science
Sector IINF-05/A - Information Processing Systems
29th International Joint Conference on Artificial Intelligence, IJCAI 2020
eng
2021
IJCAI International Joint Conference on Artificial Intelligence
International Joint Conferences on Artificial Intelligence
9780999241165
International Joint Conferences on Artificial Intelligence (IJCAI)
Files in this item:

File: IJCAI - Human-Driven FOL Explanations of Deep Learning.pdf
Access: open access
Type: Published version
License: Read Only
Size: 567.7 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/150601
Citations
  • PMC: n/a
  • Scopus: 23
  • Web of Science: 15
  • OpenAlex: n/a