Lorenz Zonoids for Trustworthy AI

Cited by: 0
Authors
Giudici, Paolo [1 ]
Raffinetti, Emanuela [1 ]
Affiliation
[1] Univ Pavia, Dept Econ & Management, Via San Felice Monastero 5, Pavia, Italy
Keywords
Artificial Intelligence methods; Lorenz Zonoids tools; SAFE approach;
DOI
10.1007/978-3-031-44064-9_27
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning models are boosting Artificial Intelligence (AI) applications in many domains, such as finance, health care and automotive, mainly because of their advantage in predictive accuracy over "classic" statistical learning models. However, although complex machine learning models may reach high predictive performance, their predictions are not explainable: they have an intrinsic black-box nature. Accuracy and explainability are not the only desirable characteristics of a machine learning model. The recently proposed European regulation on Artificial Intelligence, the AI Act, attempts to regulate the use of AI by means of a set of trustworthiness requirements for high-risk applications, to be embedded in a risk management model. We propose to map the requirements established for high-risk applications in the AI Act into four main variables: Sustainability, Accuracy, Fairness and Explainability. These variables need a set of metrics that can establish not only whether, but also how much, the requirements are satisfied over time. To the best of our knowledge, no such set of metrics exists yet. In this paper we aim to fill this gap and propose a set of four integrated metrics for measuring Sustainability, Accuracy, Fairness and Explainability (S.A.F.E. in brief), which have the advantage, with respect to the available metrics, of all being based on one unifying statistical tool: the Lorenz curve. The Lorenz curve is a well-known, robust statistical tool that has been employed, along with the related Gini index, to measure income and wealth inequalities. It thus appears a natural methodology on which to build an integrated set of trustworthy AI measurement metrics.
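
As an illustration of the unifying tool mentioned in the abstract, the sketch below computes an empirical Lorenz curve and the corresponding Gini index for a vector of non-negative values. It is a minimal, self-contained example of the classical construction (twice the area between the equality line and the Lorenz curve, here approximated with the trapezoidal rule), not the authors' S.A.F.E. implementation; the function names and toy data are illustrative assumptions.

    import numpy as np

    def lorenz_curve(values):
        # Cumulative-share points of the empirical Lorenz curve for non-negative values.
        v = np.sort(np.asarray(values, dtype=float))
        cum_share = np.insert(np.cumsum(v), 0, 0.0) / v.sum()   # starts at 0, ends at 1
        pop_share = np.linspace(0.0, 1.0, len(cum_share))        # equal-weight population shares
        return pop_share, cum_share

    def gini_index(values):
        # Gini index as twice the area between the equality line and the Lorenz curve.
        x, y = lorenz_curve(values)
        return 1.0 - 2.0 * np.trapz(y, x)                        # trapezoidal approximation

    # Toy data: a perfectly equal vector gives Gini = 0; full concentration approaches 1.
    print(round(gini_index([10, 20, 30, 40, 100]), 3))           # prints 0.4 for this example

In the paper's framework, analogous Lorenz-based summaries (Lorenz zonoids) are applied to model outputs rather than to income data, yielding the integrated S.A.F.E. metrics.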
Pages: 517 - 530
Number of pages: 14