Shap-enhanced counterfactual explanations for recommendations

Cited by: 6
Authors
Zhong, Jinfeng [1 ]
Negre, Elsa [1 ]
Affiliations
[1] Paris Dauphine Univ, PSL Res Univ, CNRS UMR 7243, LAMSADE 75016, Paris, France
Keywords
Model-agnostic explanations; explainable recommendations
DOI
10.1145/3477314.3507029
CLC (Chinese Library Classification) number
TP39 [Applications of computers]
Discipline codes
081203; 0835
Abstract
Explanations in recommender systems help users understand why a recommendation (or a list of recommendations) was generated. Explaining recommendations has become an important requirement for enhancing users' trust and satisfaction. However, explanation methods vary across recommender models, which increases engineering costs. As recommender systems become ever more inscrutable, directly explaining them is sometimes impossible. Post-hoc explanation methods, which do not elucidate the internal mechanisms of recommender systems, are therefore popular. State-of-the-art post-hoc methods such as SHAP generate explanations by building simpler surrogate models that approximate the original models. However, directly applying such methods raises several concerns. First, post-hoc explanations may not be faithful to the original recommender system, since its internal mechanisms are not elucidated. Second, the outputs returned by methods such as SHAP are not easy for lay users to understand, since background mathematical knowledge is required. In this work, we present a SHAP-enhanced explanation method that generates easily understandable explanations with high fidelity.
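The Shapley-value attribution that SHAP computes can be sketched in a few lines of pure Python. This is a minimal illustration of the general technique the abstract refers to, not the paper's method: the linear "recommendation scorer" and the three feature names are hypothetical, and features absent from a coalition are replaced by a reference (baseline) value, as in KernelSHAP.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for f at point x.

    A feature outside the coalition S is set to its baseline value;
    phi[i] is the weighted average marginal contribution of feature i.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical recommendation scorer: a linear blend of three
# illustrative features (genre match, popularity, recency).
weights = [0.6, 0.3, 0.1]
score = lambda v: sum(w * feat for w, feat in zip(weights, v))

x = [0.9, 0.4, 0.7]         # feature values of the recommended item
baseline = [0.0, 0.0, 0.0]  # reference ("average") item
phi = shapley_values(score, x, baseline)
```

The attributions satisfy the efficiency property: `sum(phi)` equals `score(x) - score(baseline)`, so each `phi[i]` quantifies how much feature `i` pushed the recommendation score away from the reference. Presenting such raw numbers directly is exactly the usability concern the abstract raises; the paper's contribution is turning them into user-readable explanations.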
Pages: 1365-1372 (8 pages)