Constructing Interpretable Belief Rule Bases Using a Model-Agnostic Statistical Approach

Cited by: 1
Authors
Sun, Chao [1 ]
Wang, Yinghui [1 ]
Yan, Tao [1 ]
Yang, Jinlong [1 ]
Huang, Liangyi [2 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi 214122, Peoples R China
[2] Arizona State Univ, Sch Comp & Augmented Intelligence, Tempe, AZ 85281 USA
Funding
National Natural Science Foundation of China;
Keywords
Data models; Knowledge based systems; Parameter extraction; Fuzzy systems; Feature extraction; Explosions; Cognition; Belief rule base (BRB); data-driven; explainable artificial intelligence (XAI); interpretability; model-agnostic; EVIDENTIAL REASONING APPROACH; SYSTEM;
DOI
10.1109/TFUZZ.2024.3416448
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Belief rule base (BRB) has attracted considerable interest due to its interpretability and exceptional modeling accuracy. Generally, BRB construction relies on prior knowledge or historical data. Knowledge-based BRBs are constrained by the limits of expert knowledge and are unsuitable for large-scale rule bases. Data-driven techniques excel at extracting model parameters from data, significantly improving the accuracy of the BRB. However, previous data-driven BRBs have neglected interpretability, and some still depend on prior knowledge or introduce additional parameters. These factors make the BRB highly problem-specific and limit its broad applicability. To address these problems, a model-agnostic statistical BRB (MAS-BRB) modeling approach is proposed in this article. It adopts a model-agnostic statistical methodology for parameter extraction, ensuring that the parameters both fulfill their intended roles within the BRB framework and accurately represent complex, nonlinear data relationships. A comprehensive interpretability analysis of the MAS-BRB components further confirms their compliance with established BRB interpretability standards. Experiments on multiple public datasets demonstrate that MAS-BRB not only achieves improved modeling performance but is also more effective than existing rule-based and traditional machine-learning models.
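For context, the abstract builds on the standard BRB inference scheme: a crisp input is matched to referential values of each antecedent attribute, rules are activated in proportion to their matching degrees and rule weights, and the activated belief distributions are fused with the analytical evidential reasoning (ER) algorithm. The sketch below illustrates only this generic machinery, not the paper's MAS-BRB parameter-extraction method; all function names, referential values, and belief numbers are illustrative assumptions.

```python
import numpy as np

def matching_degrees(x, ref_vals):
    """Triangular matching of a crisp input to ordered referential values.

    Returns one matching degree per referential value; they sum to 1.
    """
    ref_vals = np.asarray(ref_vals, dtype=float)
    m = np.zeros(len(ref_vals))
    if x <= ref_vals[0]:
        m[0] = 1.0
    elif x >= ref_vals[-1]:
        m[-1] = 1.0
    else:
        i = np.searchsorted(ref_vals, x) - 1       # x lies in [ref[i], ref[i+1])
        span = ref_vals[i + 1] - ref_vals[i]
        m[i + 1] = (x - ref_vals[i]) / span
        m[i] = 1.0 - m[i + 1]
    return m

def er_aggregate(weights, beliefs):
    """Analytical ER combination of activated rules.

    weights : (K,) normalized activation weights (sum to 1).
    beliefs : (K, N) belief degrees of K rules over N consequents.
    Returns the combined (N,) belief distribution.
    """
    w = np.asarray(weights, dtype=float)
    b = np.asarray(beliefs, dtype=float)
    K, N = b.shape
    total = b.sum(axis=1)                          # per-rule assigned belief mass
    # Product terms of the analytical ER formula.
    prod_j = np.prod(w[:, None] * b + (1 - w * total)[:, None], axis=0)  # (N,)
    prod_d = np.prod(1 - w * total)                # unassigned-mass product
    prod_w = np.prod(1 - w)
    mu = 1.0 / (prod_j.sum() - (N - 1) * prod_d)   # normalization factor
    return mu * (prod_j - prod_d) / (1 - mu * prod_w)

# Toy usage: one attribute with referential values {0, 2, 4}; two activated
# rules distributing belief over three consequents.
alpha = matching_degrees(2.5, [0, 2, 4])           # [0, 0.75, 0.25]
beta = er_aggregate([0.6, 0.4], [[0.7, 0.3, 0.0],
                                 [0.1, 0.6, 0.3]])
```

When every rule is complete (its belief degrees sum to 1), the fused distribution `beta` again sums to 1, which is what makes the output readable as a belief distribution over the consequents.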
Pages: 5163-5175
Number of pages: 13