The Skyline of Counterfactual Explanations for Machine Learning Decision Models

Cited by: 5
Authors
Wang, Yongjie [1 ]
Ding, Qinxu [2 ]
Wang, Ke [3 ]
Liu, Yue [4 ]
Wu, Xingyu [4 ]
Wang, Jinglong [4 ]
Liu, Yong [2 ]
Miao, Chunyan [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
[2] Nanyang Technol Univ, Alibaba NTU Singapore Joint Res Inst, Singapore, Singapore
[3] Simon Fraser Univ, Sch Comp Sci, Burnaby, BC, Canada
[4] Alibaba Grp, Hangzhou, Peoples R China
Funding
Natural Sciences and Engineering Research Council of Canada; National Research Foundation of Singapore;
Keywords
Counterfactual explanations; Multi-objective optimization; Skyline; Interactive query;
DOI
10.1145/3459637.3482397
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Counterfactual explanations are minimum changes to a given input that alter the original prediction of a machine learning model, usually from an undesirable prediction to a desirable one. Previous works frame this problem as constrained cost minimization, where the cost is defined as an L1/L2 distance (or a variant) over multiple features to measure the change. In real-life applications, features of different types are hardly comparable, and it is difficult to measure changes to heterogeneous features with a single cost function. Moreover, existing approaches do not support interactive exploration of counterfactual explanations. To address these issues, we propose skyline counterfactual explanations, which define the skyline of counterfactual explanations as the set of all non-dominated changes. We solve this problem as multi-objective optimization over actionable features. This approach does not require any cost function over heterogeneous features. With the skyline, users can interactively and incrementally refine their goals regarding which features to change and by how much, especially when they lack the prior knowledge to express their needs precisely. Extensive experimental results on three real-life datasets demonstrate that the skyline method provides a user-friendly way to find interesting counterfactual explanations and achieves superior results compared to state-of-the-art methods.
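The sketch below (not taken from the paper) illustrates the Pareto-dominance idea behind the skyline described in the abstract: among candidate counterfactuals that flip the prediction, keep only those whose per-feature change magnitudes are not dominated by another candidate. It assumes candidates are NumPy feature vectors and that `model` is a scikit-learn-style classifier exposing `predict`; all names (`dominates`, `skyline`, `desired_label`) are illustrative, not the authors' API.

    import numpy as np

    def dominates(delta_a, delta_b):
        # delta_a dominates delta_b if it changes every actionable feature by
        # no larger an absolute amount, and at least one feature strictly less.
        a, b = np.abs(delta_a), np.abs(delta_b)
        return np.all(a <= b) and np.any(a < b)

    def skyline(candidates, x, model, desired_label):
        # Keep candidates that actually achieve the desired prediction.
        valid = [c for c in candidates
                 if model.predict(c.reshape(1, -1))[0] == desired_label]
        deltas = [c - x for c in valid]
        # A candidate is on the skyline if no other valid candidate dominates it.
        return [valid[i] for i, d_i in enumerate(deltas)
                if not any(dominates(d_j, d_i)
                           for j, d_j in enumerate(deltas) if j != i)]

In this reading, no single cost function over heterogeneous features is needed: each actionable feature's change is its own objective, and the skyline returns every trade-off that is not strictly worse than another.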
Pages: 2030 - 2039
Number of pages: 10