XAI.it 2024 - Preface to the Fifth Italian Workshop on eXplainable Artificial Intelligence

Pellungrini R.;
2024

Abstract

As Artificial Intelligence (AI) systems become integral to daily life, ensuring transparency and interpretability in their decision-making processes is critical. The General Data Protection Regulation (GDPR) has underscored users’ right to understand how AI-driven systems make decisions that affect them. However, the pursuit of model performance often compromises explainability, creating a tension between achieving high accuracy and maintaining transparency. Core research questions focus on reconciling the high performance of LLMs and other AI models with interpretability requirements. Emerging research spans the design of transparent systems, the effects of opaque models on users, explanation strategies, and user control over AI behaviors. The workshop on eXplainable AI (XAI.it) provides a platform for addressing these challenges, fostering collaboration within the XAI community to explore novel solutions and share insights across this evolving, multifaceted field.
Sector INFO-01/A - Computer Science
CEUR Workshop Proceedings
CEUR-WS
Biases; eXplainable AI; Large Language Models; LLMs; Trustworthiness; XAI
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/157458
Warning: the displayed data have not been validated by the university.

Citations
  • PubMed Central: n/a
  • Scopus: 0
  • Web of Science: n/a
  • OpenAlex: n/a