Interpretable Link Prediction via Neural-Symbolic Reasoning

Giannini F.;
2026

Abstract

Knowledge Graph Embedding models have shown remarkable performance in tasks such as knowledge graph completion. However, they inherently lack interpretability, making it difficult to understand the reasoning behind their predictions. While different Neural-Symbolic (NeSy) models have been proposed to achieve interpretable reasoning through logic rules, existing evaluations primarily focus on accuracy, overlooking the critical assessment of explanation quality. This paper addresses this gap by introducing fully “interpretable-by-design” NeSy approaches for link prediction inspired by recently proposed models. Our framework employs reasoners that generate explicit logic proofs, utilizing either predefined or learned logic rules, ensuring transparent and explainable predictions. We go beyond traditional accuracy assessments, evaluating the quality of these explanations using established XAI metrics, including coherence. By quantitatively assessing the interpretability of our model, we aim to advance the development of trustworthy and understandable link prediction systems for Knowledge Graphs.
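To illustrate the general idea of reasoning that is interpretable by design, the following is a minimal, hypothetical sketch of rule-based link prediction over a toy knowledge graph, where each predicted link comes with an explicit proof. The entities, the relation names, and the single Horn rule are invented for this illustration; this is not the paper's implementation.

```python
# Toy knowledge graph as a set of (head, relation, tail) triples.
# Entities and relations are hypothetical.
kg = {
    ("anna", "parent_of", "bruno"),
    ("bruno", "parent_of", "carla"),
}

# One Horn rule: parent_of(x, y) AND parent_of(y, z) => grandparent_of(x, z)
def predict_grandparents(triples):
    """Apply the rule and return each predicted link with its proof."""
    predictions = []
    for (x, r1, y) in triples:
        for (y2, r2, z) in triples:
            if r1 == r2 == "parent_of" and y == y2:
                # The two matched body atoms form the proof of the new link.
                proof = [f"parent_of({x}, {y})", f"parent_of({y}, {z})"]
                predictions.append(((x, "grandparent_of", z), proof))
    return predictions

for link, proof in predict_grandparents(kg):
    print(link, "because", " AND ".join(proof))
```

Each prediction is accompanied by the ground rule instantiation that produced it, so the output of the reasoner is transparent by construction rather than explained post hoc.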
Sector INFO-01/A - Computer Science
Sector IINF-05/A - Information Processing Systems
3rd World Conference on Explainable Artificial Intelligence, xAI 2025
2025
Communications in Computer and Information Science
Springer Science and Business Media Deutschland GmbH
9783032083234
9783032083241
Explainable AI; First-Order Logic; Knowledge Graphs
Files in this product:

File: XAI - X-RCBM.pdf
Access: open access
Type: Published version
License: Creative Commons
Size: 339.14 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/162005
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: 0