Exploring Large Language Models Capabilities to Explain Decision Trees

Gezici, Gizem; Giannotti, Fosca
2024

Abstract

Decision trees are widely adopted in Machine Learning tasks due to their operational simplicity and interpretability. However, following the decision path taken by a tree can be difficult in complex scenarios or for users with no familiarity with the model. Prior research has shown that converting outcomes to natural language is an accessible way to facilitate understanding for non-expert users across several tasks. More recently, there has been a growing effort to use Large Language Models (LLMs) as a tool for generating natural language text. In this paper, we examine the proficiency of LLMs in explaining decision tree predictions in simple terms through the generation of natural language explanations. By exploring different textual representations and prompt engineering strategies, we identify capabilities that strengthen LLMs as competent explainers, and we highlight potential challenges and limitations, opening further research possibilities on natural language explanations for decision trees.
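As an illustration of the setup the abstract describes, the minimal Python sketch below (an assumption, not the authors' actual pipeline) serializes a scikit-learn decision tree into one possible textual representation with export_text and assembles a plain-language explanation prompt; the prompt wording and the ask_llm call are hypothetical placeholders.

    # Minimal sketch, assuming scikit-learn; not the paper's exact pipeline.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(iris.data, iris.target)

    # One candidate textual representation: scikit-learn's rule-style dump.
    tree_as_text = export_text(clf, feature_names=list(iris.feature_names))

    # Explain a single prediction.
    sample = iris.data[0]
    predicted = iris.target_names[clf.predict([sample])[0]]

    # Illustrative prompt-engineering step; the wording is hypothetical,
    # not the template evaluated in the paper.
    prompt = (
        "You are explaining a decision tree prediction to a non-expert.\n"
        f"Decision tree (text form):\n{tree_as_text}\n"
        f"Input features: {dict(zip(iris.feature_names, sample))}\n"
        f"Predicted class: {predicted}\n"
        "In simple terms, describe which conditions led to this prediction."
    )

    # ask_llm is a hypothetical stand-in for any chat-completion API call:
    # explanation = ask_llm(prompt)
    print(prompt)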
Sector INFO-01/A - Informatica (Computer Science)
Third International Conference on Hybrid Human-Artificial Intelligence, HHAI 2024
Malmö
10-14 June 2024
HHAI 2024: Hybrid Human AI Systems for the Social Good: Proceedings of the Third International Conference on Hybrid Human-Artificial Intelligence
IOS Press BV
978-1-64368-522-9
decision tree; Explainable AI; natural language generation
PNRR Partenariati Estesi - FAIR - Future Artificial Intelligence Research
Ministero della pubblica istruzione, dell'università e della ricerca
Files in this record:
Exploring Large Language Models.pdf (open access)
Typology: Published version
License: Creative Commons
Size: 222.13 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/149506