Directive Explanations for Actionable Explainability in Machine Learning Applications

Cited: 0
Authors
Singh, Ronal [1 ]
Miller, Tim [1 ]
Lyons, Henrietta [1 ]
Sonenberg, Liz [1 ]
Velloso, Eduardo [1 ]
Vetere, Frank [1 ]
Howe, Piers [2 ]
Dourish, Paul [3 ]
Affiliations
[1] Univ Melbourne, Sch Comp & Informat Syst, Melbourne, Vic 3010, Australia
[2] Univ Melbourne, Melbourne Sch Psychol Sci, Melbourne, Vic 3010, Australia
[3] Univ Calif Irvine, Donald Bren Sch Informat & Comp Sci, Irvine, CA 92697 USA
Funding
Australian Research Council;
Keywords
Explainable AI; directive explanations; counterfactual explanations; BLACK-BOX;
DOI
10.1145/3579363
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this article, we show that explanations of decisions made by machine learning systems can be improved by not only explaining why a decision was made but also explaining how an individual could obtain their desired outcome. We formally define the concept of directive explanations (those that offer specific actions an individual could take to achieve their desired outcome), introduce two forms of directive explanations (directive-specific and directive-generic), and describe how these can be generated computationally. We investigate people's preferences for and perceptions of directive explanations through two online studies, one quantitative and one qualitative, each covering two domains (credit scoring and employee satisfaction). We find a significant preference for both forms of directive explanations over non-directive counterfactual explanations. However, we also find that these preferences are shaped by many factors, including individual preferences and social factors. We conclude that deciding what type of explanation to provide requires information about the recipients and other contextual information. This reinforces the need for a human-centered and context-specific approach to explainable AI.
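To illustrate how directive (counterfactual-style) explanations can be generated computationally, the minimal sketch below searches for small, actionable feature changes that flip a toy credit-scoring model's decision and reports them as suggested actions. This is not the algorithm from the article; the scoring model, feature names, action lists, and step sizes are hypothetical and serve only to convey the idea of searching over changes the individual could actually make.

```python
# Illustrative sketch only (not the paper's algorithm): produce a
# directive-style explanation by searching for small, actionable feature
# changes that flip a toy credit-scoring model's decision.
# All weights, feature names, and candidate actions below are hypothetical.

import itertools

FEATURES = ["income", "debt", "credit_history_years"]
WEIGHTS = {"income": 0.00005, "debt": -0.0001, "credit_history_years": 0.15}
BIAS = -1.0

def approve(applicant):
    """Toy linear scorer: approve when the weighted sum is non-negative."""
    score = BIAS + sum(WEIGHTS[f] * applicant[f] for f in FEATURES)
    return score >= 0.0

# Candidate per-feature actions the applicant could realistically take.
ACTIONS = {
    "income": [5000, 10000, 20000],          # increase annual income by ...
    "debt": [-2000, -5000, -10000],          # pay down debt by ...
    "credit_history_years": [1, 2],          # build credit history for ... years
}

def directive_explanation(applicant, max_changes=2):
    """Return the smallest set of actions that flips the decision, if any."""
    if approve(applicant):
        return []  # already approved; nothing to direct
    for k in range(1, max_changes + 1):
        for feats in itertools.combinations(FEATURES, k):
            for deltas in itertools.product(*(ACTIONS[f] for f in feats)):
                candidate = dict(applicant)
                for f, d in zip(feats, deltas):
                    candidate[f] += d
                if approve(candidate):
                    return [f"change {f} by {d:+}" for f, d in zip(feats, deltas)]
    return None  # no actionable change found within the search budget

if __name__ == "__main__":
    applicant = {"income": 30000, "debt": 15000, "credit_history_years": 2}
    print(directive_explanation(applicant))  # e.g. ['change income by +20000']
```

Restricting the search to actions the individual can actually perform is what makes the output directive rather than a plain counterfactual; the distinction between directive-specific and directive-generic explanations is defined in the article itself and is not modeled in this sketch.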
Pages: 26
Related Papers
(50 records)
  • [1] Justificatory explanations: a step beyond explainability in machine learning
    Guersenzvaig, A.
    Casacuberta, D.
    EUROPEAN JOURNAL OF PUBLIC HEALTH, 2023, 33
  • [2] Guest editorial: Explainability of machine learning in methodologies and applications
    Li, Zhong
    Unger, Herwig
    Kyamakya, Kyandoghere
    KNOWLEDGE-BASED SYSTEMS, 2023, 264
  • [3] An Explainability-Centric Requirements Analysis Framework for Machine Learning Applications
    Pei Z.
    Liu L.
    Wang C.
    Wang J.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2024, 61 (04): : 983 - 1002
  • [4] Explainability: Actionable Information Extraction
    Silva, Catarina
    Henriques, Jorge
    Ribeiro, Bernardete
    INTERNATIONAL CONFERENCE ON BIOMEDICAL AND HEALTH INFORMATICS 2022, ICBHI 2022, 2024, 108 : 104 - 113
  • [5] Regulating Explainability in Machine Learning Applications - Observations from a Policy Design Experiment
    Nahar, Nadia
    Rowlett, Jenny
    Bray, Matthew
    Omar, Zahra Abba
    Papademetris, Xenophon
    Menon, Alka
    Kastner, Christian
    PROCEEDINGS OF THE 2024 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, ACM FACCT 2024, 2024, : 2101 - 2112
  • [6] A Survey on the Explainability of Supervised Machine Learning
    Burkart, Nadia
    Huber, Marco F.
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2021, 70 : 245 - 317
  • [7] Legal requirements on explainability in machine learning
    Adrien Bibal
    Michael Lognoul
    Alexandre de Streel
    Benoît Frénay
    Artificial Intelligence and Law, 2021, 29 : 149 - 169
  • [8] Legal requirements on explainability in machine learning
    Bibal, Adrien
    Lognoul, Michael
    de Streel, Alexandre
    Frenay, Benoit
    ARTIFICIAL INTELLIGENCE AND LAW, 2021, 29 (02) : 149 - 169
  • [9] Towards Directive Explanations: Crafting Explainable AI Systems for Actionable Human-AI Interactions
    Bhattacharya, Aditya
    EXTENDED ABSTRACTS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024,
  • [10] Formal Reasoning Methods for Explainability in Machine Learning
    Marques-Silva, Joao
    ELECTRONIC PROCEEDINGS IN THEORETICAL COMPUTER SCIENCE, 2020, (325)