Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence

Cited by: 5
Authors
Kasirzadeh, Atoosa [1 ,2 ]
Affiliations
[1] Univ Toronto, Toronto, ON, Canada
[2] Australian Natl Univ, Canberra, ACT, Australia
Keywords
Explainable AI; Explainable Artificial Intelligence; Explainable Machine Learning; Interpretable Machine Learning; Ethics of AI; Ethical AI; Machine Learning; Philosophy of Explanation; Philosophy of AI
DOI
10.1145/3442188.3445866
Chinese Library Classification (CLC): TP301 [Theory and Methods]
Discipline code: 081202
Abstract
The societal and ethical implications of the use of opaque artificial intelligence systems in consequential decisions, such as welfare allocation and criminal justice, have generated a lively debate among multiple stakeholders, including computer scientists, ethicists, social scientists, policy makers, and end users. However, the lack of a common language or a multi-dimensional framework to appropriately bridge the technical, epistemic, and normative aspects of this debate nearly prevents the discussion from being as productive as it could be. Drawing on the philosophical literature on the nature and value of explanations, this paper offers a multifaceted framework that brings more conceptual precision to the present debate by identifying the types of explanations that are most pertinent to artificial intelligence predictions, recognizing the relevance and importance of the social and ethical values for the evaluation of these explanations, and demonstrating the importance of these explanations for incorporating a diversified approach to improving the design of truthful algorithmic ecosystems. The proposed philosophical framework thus lays the groundwork for establishing a pertinent connection between the technical and ethical aspects of artificial intelligence systems.
Pages: 14-14
Number of pages: 1