Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence

Cited by: 5
Authors:
Kasirzadeh, Atoosa [1 ,2 ]
Affiliations:
[1] Univ Toronto, Toronto, ON, Canada
[2] Australian Natl Univ, Canberra, ACT, Australia
Keywords:
Explainable AI; Explainable Artificial Intelligence; Explainable Machine Learning; Interpretable Machine Learning; Ethics of AI; Ethical AI; Machine Learning; Philosophy of Explanation; Philosophy of AI
DOI:
10.1145/3442188.3445866
Chinese Library Classification (CLC):
TP301 [Theory and Methods]
Subject Classification Code:
081202
Abstract:
The societal and ethical implications of the use of opaque artificial intelligence systems in consequential decisions, such as welfare allocation and criminal justice, have generated a lively debate among multiple stakeholders, including computer scientists, ethicists, social scientists, policy makers, and end users. However, the lack of a common language or a multi-dimensional framework to appropriately bridge the technical, epistemic, and normative aspects of this debate prevents the discussion from being as productive as it could be. Drawing on the philosophical literature on the nature and value of explanations, this paper offers a multifaceted framework that brings more conceptual precision to the present debate by identifying the types of explanations that are most pertinent to artificial intelligence predictions, recognizing the relevance and importance of social and ethical values for the evaluation of these explanations, and demonstrating the importance of these explanations for incorporating a diversified approach to improving the design of truthful algorithmic ecosystems. The proposed philosophical framework thus lays the groundwork for establishing a pertinent connection between the technical and ethical aspects of artificial intelligence systems.
Pages: 14-14
Page count: 1