Lorenz Zonoids for Trustworthy AI

Cited by: 0
Authors
Giudici, Paolo [1 ]
Raffinetti, Emanuela [1 ]
Affiliation
[1] Univ Pavia, Dept Econ & Management, Via San Felice Monastero 5, Pavia, Italy
Keywords
Artificial Intelligence methods; Lorenz Zonoids tools; SAFE approach;
DOI
10.1007/978-3-031-44064-9_27
Chinese Library Classification (CLC) number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Machine learning models are boosting Artificial Intelligence (AI) applications in many domains, such as finance, health care and automotive. This is mainly due to their advantage, in terms of predictive accuracy, over "classic" statistical learning models. However, although complex machine learning models may reach high predictive performance, their predictions are not explainable, as the models have an intrinsically black-box nature. Accuracy and explainability are not the only desirable characteristics of a machine learning model. The recently proposed European regulation on Artificial Intelligence, the AI Act, attempts to regulate the use of AI by means of a set of trustworthiness requirements for high-risk applications, to be embedded in a risk management model. We propose to map the requirements that the AI Act establishes for high-risk applications into four main variables: Sustainability, Accuracy, Fairness and Explainability, which need a set of metrics that can establish not only whether but also how much the requirements are satisfied over time. To the best of our knowledge, no such set of metrics exists yet. In this paper, we aim to fill this gap and propose a set of four integrated metrics measuring Sustainability, Accuracy, Fairness and Explainability (S.A.F.E. in brief), which have the advantage, over the available metrics, of all being based on one unifying statistical tool: the Lorenz curve. The Lorenz curve is a well-known robust statistical tool that has been employed, along with the related Gini index, to measure income and wealth inequality. It thus appears as a natural methodology on which to build an integrated set of trustworthy AI measurement metrics.
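As a quick illustration of the unifying tool the abstract refers to, the sketch below is a minimal Python example, not the authors' S.A.F.E. implementation; the function names and sample data are hypothetical. It builds a discrete Lorenz curve and computes the Gini index as one minus twice the area under that curve.

```python
# Minimal illustrative sketch (not the paper's S.A.F.E. metrics):
# discrete Lorenz curve and the Gini index derived from it.
import numpy as np

def lorenz_curve(values):
    """Return cumulative population shares p and cumulative value shares L."""
    y = np.sort(np.asarray(values, dtype=float))               # order from smallest to largest
    L = np.insert(np.cumsum(y) / y.sum(), 0, 0.0)              # cumulative share of the total
    p = np.insert(np.arange(1, y.size + 1) / y.size, 0, 0.0)   # cumulative share of units
    return p, L

def gini_index(values):
    """Gini index = 1 - 2 * (area under the Lorenz curve), via the trapezoid rule."""
    p, L = lorenz_curve(values)
    area = np.sum((L[1:] + L[:-1]) * np.diff(p)) / 2.0
    return 1.0 - 2.0 * area

incomes = [10, 20, 30, 40, 100]                                # hypothetical income data
print(round(gini_index(incomes), 3))                           # -> 0.4; larger means more unequal
```

Under perfect equality the Lorenz curve coincides with the diagonal and the Gini index is 0; the more the curve bows away from the diagonal, the closer the index gets to 1.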
Pages: 517-530
Number of pages: 14