
Trustworthy AI

Dignum, Virginia; Giannotti, Fosca
2021

Abstract

Modern AI systems have become widely used in almost all sectors, with a strong impact on our society. However, the very methods on which they rely, based on Machine Learning techniques that process data to predict outcomes and make decisions, are opaque, prone to bias, and may produce wrong answers. Objective functions optimized in learning systems are not guaranteed to align with the values that motivated their definition. Properties such as transparency, verifiability, explainability, security, technical robustness, and safety are key to building operational governance frameworks, so as to make AI systems justifiably trustworthy and to align their development and use with human rights and values.
Disciplinary sector: INF/01 - Informatica (Computer Science)
Reflections on Artificial Intelligence for humanity
Springer Lecture Notes in Computer Science
Human rights; Machine learning; Interpretability; Explainability; Dependability; Verification and validation; Beneficial AI
Files in this record:

Trustworthy_AI.pdf — Published version, closed access (non-public license), 488.5 kB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/130102
Citations
  • PMC: n/a
  • Scopus: 34
  • Web of Science: n/a
  • OpenAlex: n/a