EXPLAINING DATA-DRIVEN DECISIONS MADE BY AI SYSTEMS: THE COUNTERFACTUAL APPROACH

Cited by: 11
Authors
Fernandez-Loria, Carlos [1 ]
Provost, Foster [2 ]
Han, Xintian [3 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, HKUST Business Sch, Clear Water Bay, Hong Kong, Peoples R China
[2] NYU, Stern Sch Business, New York, NY USA
[3] NYU, Ctr Data Sci, New York, NY USA
Keywords
Explanations; system decisions; interpretable machine learning; explainable artificial intelligence; RULE EXTRACTION; EXPLANATIONS; CLASSIFICATIONS; ANALYTICS; TAXONOMY;
DOI
10.25300/MISQ/2022/16749
Chinese Library Classification: TP [Automation & Computer Technology]
Subject Classification Code: 0812
Abstract
We examine counterfactual explanations for explaining the decisions made by model-based AI systems. The counterfactual approach we consider defines an explanation as a set of the system's data inputs that causally drives the decision (i.e., changing the inputs in the set changes the decision) and is irreducible (i.e., changing any proper subset of the inputs does not change the decision). We (1) demonstrate how this framework may be used to provide explanations for decisions made by general data-driven AI systems that can incorporate features with arbitrary data types and multiple predictive models, and (2) propose a heuristic procedure to find the most useful explanations depending on the context. We then contrast counterfactual explanations with methods that explain model predictions by weighting features according to their importance (e.g., Shapley additive explanations [SHAP], local interpretable model-agnostic explanations [LIME]) and present two fundamental reasons why we should carefully consider whether importance-weight explanations are well suited to explain system decisions. Specifically, we show that (1) features with a large importance weight for a model prediction may not affect the corresponding decision, and (2) importance weights are insufficient to communicate whether and how features influence decisions. We demonstrate this with several concise examples and three detailed case studies that compare the counterfactual approach with SHAP to illustrate conditions under which counterfactual explanations explain data-driven decisions better than importance weights.
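To make the definition in the abstract concrete, the following minimal Python sketch illustrates the idea of a counterfactual explanation as an irreducible set of inputs whose change flips a model-based decision. It is not the paper's heuristic procedure: the model, decision threshold, and "reference" feature values are hypothetical assumptions, and the single-pass pruning step is only an approximate check of irreducibility.

```python
# Illustrative sketch (not the authors' procedure): greedily grow a set of
# inputs until replacing them with reference values flips the decision, then
# prune features one at a time so the set is approximately irreducible.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 2 * X[:, 1] - X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)  # hypothetical predictive model

def decision(x, threshold=0.5):
    """The system's decision: act (1) if the predicted score exceeds the threshold."""
    return int(model.predict_proba(x.reshape(1, -1))[0, 1] >= threshold)

def counterfactual_explanation(x, reference, threshold=0.5):
    """Find a set of feature indices E such that replacing x[E] with
    reference[E] changes the decision, then prune E one feature at a time
    (a heuristic approximation of irreducibility)."""
    base = decision(x, threshold)
    # Grow: change features one by one until the decision flips.
    candidate, explanation = x.copy(), []
    for j in range(len(x)):
        candidate[j] = reference[j]
        explanation.append(j)
        if decision(candidate, threshold) != base:
            break
    else:
        return None  # no counterfactual found with this reference point
    # Prune: drop any feature whose removal still leaves the decision flipped.
    for j in list(explanation):
        trial = x.copy()
        for k in explanation:
            if k != j:
                trial[k] = reference[k]
        if decision(trial, threshold) != base:
            explanation.remove(j)
    return explanation

x0 = X[0]
ref = X.mean(axis=0)  # hypothetical "typical" input used as the counterfactual value
print("decision:", decision(x0))
print("explanation (feature indices):", counterfactual_explanation(x0, ref))
```

The sketch also hints at the paper's contrast with importance weights: only the features returned by `counterfactual_explanation` actually change the decision at the threshold, whereas a feature can receive a large SHAP or LIME weight for the prediction without being part of any such decision-flipping set.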
Pages: 1635-1660
Number of pages: 26