Shap-enhanced counterfactual explanations for recommendations

Cited by: 6
|
Authors
Zhong, Jinfeng [1 ]
Negre, Elsa [1 ]
Affiliations
[1] Paris Dauphine Univ, PSL Res Univ, CNRS UMR 7243, LAMSADE, 75016 Paris, France
Keywords
Model-agnostic explanations; Explainable recommendations
DOI
10.1145/3477314.3507029
Chinese Library Classification (CLC)
TP39 [Computer applications]
Discipline classification code
081203; 0835
Abstract
Explanations in recommender systems help users better understand why a recommendation (or a list of recommendations) is generated. Explaining recommendations has become an important requirement for enhancing users' trust and satisfaction. However, explanation methods vary across recommender models, which increases engineering costs, and as recommender systems become ever more inscrutable, explaining them directly sometimes becomes impossible. Post-hoc explanation methods, which do not require access to the internal mechanisms of recommender systems, are therefore popular. State-of-the-art post-hoc methods such as SHAP generate explanations by building simpler surrogate models that approximate the original models. However, applying such methods directly raises several concerns. First, post-hoc explanations may not be faithful to the original recommender systems, precisely because the internal mechanisms are not elucidated. Second, the outputs returned by methods such as SHAP are not easy for lay users to understand, since background mathematical knowledge is required. In this work, we present a SHAP-enhanced explanation method that generates easily understandable explanations with high fidelity.
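The abstract refers to SHAP's surrogate-model approach. The following is a minimal sketch, assuming a black-box recommender exposed as a scoring function; the function `predict_rating`, the feature names, and the data are hypothetical and not taken from the paper. It illustrates the kind of raw numeric attributions KernelSHAP produces, which the paper argues are not directly readable by plain users.

```python
# A minimal sketch, assuming a black-box recommender exposed as a scoring
# function. `predict_rating`, the feature names, and the data are hypothetical.
import numpy as np
import shap

rng = np.random.default_rng(0)

# Hypothetical (user, item) features, e.g. user age, item popularity, genre match.
feature_names = ["user_age", "item_popularity", "genre_match"]
background = rng.random((100, 3))   # background sample used to fit the surrogate
instance = rng.random((1, 3))       # the (user, item) pair being explained

def predict_rating(X):
    # Stand-in for an inscrutable recommender's scoring function.
    return 2.0 * X[:, 2] + 0.5 * X[:, 1] + 0.1 * X[:, 0]

# KernelExplainer builds a local, model-agnostic surrogate around the instance.
explainer = shap.KernelExplainer(predict_rating, background)
shap_values = explainer.shap_values(instance)

# Raw SHAP values are numeric feature attributions; the paper's contribution is
# turning such outputs into explanations that lay users can understand.
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```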
Pages: 1365-1372
Number of pages: 8
Related papers
50 records in total
  • [11] Counterfactual Explanations for Models of Code
    Cito, Juergen
    Dillig, Isil
    Murali, Vijayaraghavan
    Chandra, Satish
    2022 ACM/IEEE 44TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: SOFTWARE ENGINEERING IN PRACTICE (ICSE-SEIP 2022), 2022, : 125 - 134
  • [12] On generating trustworthy counterfactual explanations
    Del Ser, Javier
    Barredo-Arrieta, Alejandro
    Diaz-Rodriguez, Natalia
    Herrera, Francisco
    Saranti, Anna
    Holzinger, Andreas
    INFORMATION SCIENCES, 2024, 655
  • [13] Diffusion Models for Counterfactual Explanations
    Jeanneret, Guillaume
    Simon, Loic
    Jurie, Frederic
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2024, 249
  • [14] Diffusion Models for Counterfactual Explanations
    Jeanneret, Guillaume
    Simon, Loic
    Jurie, Frederic
    COMPUTER VISION - ACCV 2022, PT VII, 2023, 13847 : 219 - 237
  • [15] PreCoF: counterfactual explanations for fairness
    Goethals, Sofie
    Martens, David
    Calders, Toon
    MACHINE LEARNING, 2024, 113 (05) : 3111 - 3142
  • [16] Counterfactual Explanations for Neural Recommenders
    Tran, Khanh Hiep
    Ghazimatin, Azin
    Roy, Rishiraj Saha
    SIGIR '21 - PROCEEDINGS OF THE 44TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2021, : 1627 - 1631
  • [17] Adversarial Counterfactual Visual Explanations
    Jeanneret, Guillaume
    Simon, Loic
    Jurie, Frederic
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 16425 - 16435
  • [18] Towards counterfactual explanations for ontologies
    Bellucci, Matthieu
    Delestre, Nicolas
    Malandain, Nicolas
    Zanni-Merk, Cecilia
    SEMANTIC WEB, 2024, 15 (05) : 1611 - 1636
  • [19] Counterfactual Explanations Can Be Manipulated
    Slack, Dylan
    Hilgard, Sophie
    Lakkaraju, Himabindu
    Singh, Sameer
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [20] Evaluating Robustness of Counterfactual Explanations
    Artelt, Andre
    Vaquet, Valerie
    Velioglu, Riza
    Hinder, Fabian
    Brinkrolf, Johannes
    Schilling, Malte
    Hammer, Barbara
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021