XAI.it 2024 - Preface to the Fifth Italian Workshop on eXplainable Artificial Intelligence
Pellungrini R.
2024
Abstract
As Artificial Intelligence (AI) systems become integral to daily life, ensuring transparency and interpretability in their decision-making processes is critical. The General Data Protection Regulation (GDPR) has underscored users’ right to understand how AI-driven systems make decisions that affect them. However, the pursuit of model performance often compromises explainability, creating a tension between achieving high accuracy and maintaining transparency. Core research questions center on reconciling the high performance of LLMs and other AI models with interpretability requirements. Emerging work addresses designing transparent systems, understanding the effects of opaque models on users, developing explanation strategies, and enhancing user control over AI behaviors. The workshop on eXplainable AI (XAI.it) provides a platform for addressing these challenges, fostering collaboration within the XAI community to explore novel solutions and share insights across this evolving, multifaceted field.



