Explainable AI: Efficiency Sequential Shapley Updating Approach

Cited by: 0
Authors
Petrosian, Ovanes [1]
Zou, Jinying [1]
Affiliations
[1] St Petersburg State Univ, Fac Appl Math & Control Proc, St Petersburg, Russia
Source
IEEE ACCESS | 2024 / Vol. 12
Keywords
Bayes methods; Explainable AI; Mathematical models; Games; Forestry; Approximation algorithms; Machine learning algorithms; Resource management; Game theory; Anomaly detection; interpretability; sequential Shapley updating; Shapley value; sampling method; Bayesian updating; high-dimensional problem; cancer detection; efficiency calculation
DOI
10.1109/ACCESS.2024.3495543
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Subject classification code
0812
Abstract
Shapley value-based explainable AI has recently attracted significant interest. However, the computational complexity of the Shapley value grows exponentially with the number of players, resulting in computational costs that prevent widespread practical application. To address this challenge, various approximation methods for computing the Shapley value have been proposed in the literature, such as linear Shapley computation, sampling-based Shapley computation, and several estimation-based approaches. Among these methods, the sampling approach exhibits non-zero bias and variance but is general enough to be used with almost any AI algorithm; however, it suffers from unstable interpretability results and slow convergence in high-dimensional problems. To address these issues, we propose integrating a sequential Bayesian updating framework into the Shapley sampling approach. The core idea is to dynamically update the sampling probabilities based on each sample's Shapley value, combined with a selection strategy. Both theoretical analysis and empirical results show that the method significantly improves convergence speed and interpretability compared with the original sampling approach.
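As context for the abstract, here is a minimal sketch of the baseline permutation-sampling Shapley estimator that the proposed sequential updating scheme is designed to accelerate. The function and parameter names (shapley_sampling, value_fn, num_permutations) and the toy three-player game are illustrative assumptions, not the authors' implementation, and the paper's Bayesian reweighting of samples is not reproduced here.

# Minimal sketch (assumed names, not the authors' code): estimate Shapley values
# by averaging marginal contributions over uniformly sampled player permutations.
import random
from typing import Callable, Dict, FrozenSet, Sequence

def shapley_sampling(value_fn: Callable[[FrozenSet[int]], float],
                     players: Sequence[int],
                     num_permutations: int = 2000,
                     seed: int = 0) -> Dict[int, float]:
    rng = random.Random(seed)
    estimates = {p: 0.0 for p in players}
    for _ in range(num_permutations):
        order = list(players)
        rng.shuffle(order)
        coalition: FrozenSet[int] = frozenset()
        prev_value = value_fn(coalition)
        for p in order:
            coalition = coalition | {p}
            new_value = value_fn(coalition)
            estimates[p] += new_value - prev_value  # marginal contribution of p
            prev_value = new_value
    return {p: total / num_permutations for p, total in estimates.items()}

# Toy three-player game: the value of a coalition is the square of its size.
# The game is symmetric, so every estimate converges to 9 / 3 = 3.0.
if __name__ == "__main__":
    print(shapley_sampling(lambda s: float(len(s) ** 2), [0, 1, 2]))

The abstract's contribution would replace the uniform permutation draws above with sampling probabilities that are updated sequentially after each observed marginal contribution, which is what drives the reported faster convergence.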
Pages: 166414-166423
Number of pages: 10
Related Papers
50 records in total
  • [1] Explainable AI for Material Property Prediction Based on Energy Cloud: A Shapley-Driven Approach
    Qayyum, Faiza
    Khan, Murad Ali
    Kim, Do-Hyeun
    Ko, Hyunseok
    Ryu, Ga-Ae
    MATERIALS, 2023, 16 (23)
  • [2] Channel Selection for Seizure Detection Based on Explainable AI With Shapley Values
    Ding, Yulan
    Zhao, Wenshan
    IEEE SENSORS JOURNAL, 2024, 24 (16) : 26126 - 26135
  • [3] Shapley-based explainable AI for clustering applications in fault diagnosis and prognosis
    Cohen, Joseph
    Huan, Xun
    Ni, Jun
    JOURNAL OF INTELLIGENT MANUFACTURING, 2024, 35 (08) : 4071 - 4086
  • [4] A New SVDD Approach to Reliable and Explainable AI
    Carlevaro, Alberto
    Mongelli, Maurizio
    IEEE INTELLIGENT SYSTEMS, 2022, 37 (02) : 55 - 68
  • [5] EXplainable AI (XAI) approach to image captioning
    Han, Seung-Ho
    Kwon, Min-Su
    Choi, Ho-Jin
    JOURNAL OF ENGINEERING-JOE, 2020, 2020 (13) : 589 - 594
  • [6] Revealing the role of explainable AI: How does updating AI applications generate agility-driven performance?
    Masialeti, Masialeti
    Talaei-Khoei, Amir
    Yang, Alan T.
    INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT, 2024, 77
  • [7] Collective eXplainable AI: Explaining Cooperative Strategies and Agent Contribution in Multiagent Reinforcement Learning With Shapley Values
    Heuillet, Alexandre
    Couthouis, Fabien
    Diaz-Rodriguez, Natalia
    IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, 2022, 17 (01) : 59 - 71
  • [8] Explainable AI
    Veerappa, Manjunatha
    Rinzivillo, Salvo
    ERCIM NEWS, 2023, (134)
  • [9] Explainable AI
    Monreale, Anna
    ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT, 2019, 319 : 5 - 5
  • [10] An Interpretable Approach with Explainable AI for Heart Stroke Prediction
    Srinivasu, Parvathaneni Naga
    Sirisha, Uddagiri
    Sandeep, Kotte
    Praveen, S. Phani
    Maguluri, Lakshmana Phaneendra
    Bikku, Thulasi
    DIAGNOSTICS, 2024, 14 (02)