The ethical impact assessment of selling life insurance to Titanic passengers

Gezici, Gizem; Mannari, Chiara
2023

Abstract

The Artificial Intelligence Act (AIA) is a uniform legal framework to ensure that AI systems within the European Union (EU) are safe and comply with existing law on fundamental rights and constitutional values. The AIA adopts a risk-based approach with the aim of regulating AI systems, especially those categorised as high-risk, which have significant harmful impacts on the health, safety and fundamental rights of persons in the Union. The AIA is founded on the Ethics Guidelines of the High-Level Expert Group for Trustworthy AI, which are grounded in fundamental rights and reflect four ethical imperatives intended to ensure ethical and robust AI. While we acknowledge that ethics is not law, we argue that the analysis of ethical risks can assist in complying with the law, thereby facilitating the implementation of the AIA requirements. We therefore first design an AI-driven Decision Support System for individual risk prediction in the insurance domain (categorised as high-risk by the AIA), based on the Titanic case, a popular benchmark dataset in machine learning. We then carry out an ethical impact assessment of the Titanic case study, relying on the four ethical imperatives of respect for human autonomy, prevention of harm, fairness, and explicability declared by the High-Level Expert Group for Trustworthy AI. In the context of this ethical impact assessment, we also refer to the questions in the ALTAI checklist. Our discussion of the ethical impact assessment in the insurance domain demonstrates that ethical principles can intersect but, intriguingly, also create tensions in this particular context, for which there is no definitive solution. When such tensions arise and result in unavoidable trade-offs, those trade-offs should be addressed in a rational and methodical manner, paying special attention to the context of the case study being evaluated.
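
The abstract does not specify how the Decision Support System is implemented. Purely as an illustration, the sketch below assumes a scikit-learn pipeline trained on the OpenML "titanic" benchmark, where the predicted probability of not surviving is read as an individual risk score for an underwriting decision; the column names, the logistic-regression baseline, and the train/test setup are assumptions of this sketch, not the authors' actual system.

    from sklearn.compose import ColumnTransformer
    from sklearn.datasets import fetch_openml
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Titanic benchmark from OpenML: passenger features plus the 'survived' label.
    X, y = fetch_openml("titanic", version=1, as_frame=True, return_X_y=True)
    numeric = ["age", "sibsp", "parch", "fare"]
    categorical = ["pclass", "sex", "embarked"]
    X = X[numeric + categorical]

    # Impute and encode features, then fit an interpretable baseline classifier.
    preprocess = ColumnTransformer([
        ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                          ("scale", StandardScaler())]), numeric),
        ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                          ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical),
    ])
    model = Pipeline([("prep", preprocess),
                      ("clf", LogisticRegression(max_iter=1000))])

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)
    model.fit(X_train, y_train)

    # Read P(survived == '0') as the individual risk score presented to a human
    # underwriter; the accept/price decision itself stays with the human.
    not_survived = next(i for i, c in enumerate(model.classes_) if str(c) == "0")
    risk_score = model.predict_proba(X_test)[:, not_survived]
    print(f"Mean predicted risk on held-out passengers: {risk_score.mean():.3f}")

A simple, interpretable model is chosen here deliberately: in the paper's high-risk framing, such a score would feed a human-in-the-loop workflow, and an inspectable baseline is easier to reconcile with the explicability imperative than an opaque one.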
Disciplinary sector INF/01 - Computer Science
Second International Conference on Hybrid Human-Artificial Intelligence (HHAI 2023)
Munich, Germany
26-27 June 2023
Proceedings of the Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI 2023) : CEUR Workshop Proceedings
M. Jeusfeld c/o Redaktion Sun SITE, Informatik
   Science and technology for the explanation of AI decision making
   XAI
   European Commission
   Horizon 2020 Framework Programme
   834756

   HumanE AI Network
   HumanE-AI-Net
   European Commission
   Horizon 2020 Framework Programme
   952026

   SoBigData.it - Strengthening the Italian RI for Social Mining and Big Data Analytics
   MUR
   PNRR
   IR0000013

   SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics
   SoBigData-PlusPlus
   European Commission
   Horizon 2020 Framework Programme
   871042

   "Future Artificial Intelligence Research" - Spoke 1 "Human-centered AI"
   FAIR
   European Commission
   PE00000013
Files in this record:

paper1-3.pdf

Open access
Type: Published version
Licence: Creative Commons
Size: 901.49 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/140423
Citations
  • Scopus 0