Illuminating the black box: An interpretable machine learning based on ensemble trees

Cited: 0
Authors
Lee, Yue-Shi [1 ]
Yen, Show-Jane [1 ]
Jiang, Wendong [2 ]
Chen, Jiyuan [3 ]
Chang, Chih-Yung [2 ]
Affiliations
[1] Ming Chuan Univ, Dept Comp Sci & Informat Engn, Taoyuan City 333, Taiwan
[2] Tamkang Univ, Dept Comp Sci & Informat Engn, New Taipei 25137, Taiwan
[3] Univ Melbourne, Fac Engn & Informat Technol, Parkville, Vic 3052, Australia
Keywords
Interpretable machine learning; Machine learning; Explanation;
DOI
10.1016/j.eswa.2025.126720
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep learning has achieved significant success in the analysis of unstructured data, but its inherent black-box nature has led to numerous limitations in security-sensitive domains. Although many existing interpretable machine learning methods can partially address this issue, they often face challenges such as model limitations, interpretability randomness, and a lack of global interpretability. To address these challenges, this paper introduces an innovative interpretable ensemble tree method, EnEXP. This method generates a sample set by applying fixed masking perturbation to individual samples, then constructs multiple decision trees using bagging and boosting techniques and interprets them based on the importance outputs of these trees, thereby achieving a global interpretation of the entire dataset through the aggregation of all sample insights. Experimental results demonstrate that EnEXP possesses superior explanatory power compared to other interpretable methods. In text processing experiments, the bag-of-words model optimized by EnEXP outperformed the GPT-3 Ada fine-tuned model.
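The abstract outlines a three-step pipeline: perturb a single sample with fixed masks, fit bagging and boosting trees to the black-box outputs on the perturbed set, and aggregate per-sample feature importances into a global explanation. The following minimal Python sketch illustrates that idea using scikit-learn ensembles; the function names (`enexp_explain`, `enexp_global`), the zero-masking scheme, and the simple averaging of the two importance vectors are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

def enexp_explain(black_box, sample, n_masks=200, seed=0):
    """Hypothetical sketch of a local EnEXP-style explanation for one sample."""
    rng = np.random.default_rng(seed)
    d = sample.shape[0]
    # Fixed masking perturbation: each copy zeroes out a random subset of features.
    masks = rng.integers(0, 2, size=(n_masks, d))
    perturbed = masks * sample               # masked copies of the sample
    targets = black_box(perturbed)           # black-box predictions on the copies
    # Ensemble trees: one bagging-style model and one boosting model,
    # fitted to mimic the black box on the perturbed neighborhood.
    bagger = RandomForestRegressor(n_estimators=50, random_state=seed)
    booster = GradientBoostingRegressor(n_estimators=50, random_state=seed)
    bagger.fit(perturbed, targets)
    booster.fit(perturbed, targets)
    # Local explanation: average the two normalized importance vectors.
    return (bagger.feature_importances_ + booster.feature_importances_) / 2

def enexp_global(black_box, X, **kw):
    """Global explanation: aggregate local importances over the whole dataset."""
    return np.mean([enexp_explain(black_box, x, **kw) for x in X], axis=0)
```

For example, for a black box `lambda Z: 3.0 * Z[:, 0] + Z[:, 1]`, the returned vector assigns its weight to the first two features and near-zero importance to the rest, and averaging these vectors over all samples yields the dataset-level explanation described in the abstract.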
Pages: 19
Related Papers
50 records total
  • [41] Interpretable Machine Learning
    Chen, V.
    Li, J.
    Kim, J. S.
    Plumb, G.
    Talwalkar, A.
    QUEUE, 2021, 19 (06): 28 - 56
  • [42] Black Box Fairness Testing of Machine Learning Models
    Aggarwal, Aniya
    Lohia, Pranay
    Nagar, Seema
    Dey, Kuntal
    Saha, Diptikalyan
    ESEC/FSE'2019: PROCEEDINGS OF THE 2019 27TH ACM JOINT MEETING ON EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, 2019, : 625 - 635
  • [43] Machine learning for psychiatry: getting doctors at the black box?
    Hedderich, Dennis M.
    Eickhoff, Simon B.
    MOLECULAR PSYCHIATRY, 2021, 26 (01) : 23 - 25
  • [45] Removing the Black-Box from Machine Learning
    Kuri-Morales, Angel Fernando
    PATTERN RECOGNITION, MCPR 2023, 2023, 13902 : 36 - 46
  • [46] Interpretable machine learning algorithms to predict leaf senescence date of deciduous trees
    Gao, Chengxi
    Wang, Huanjiong
    Ge, Quansheng
    AGRICULTURAL AND FOREST METEOROLOGY, 2023, 340
  • [47] Constructing transferable and interpretable machine learning models for black carbon concentrations
    Fung, Pak Lun
    Savadkoohi, Marjan
    Zaidan, Martha Arbayani
    Niemi, Jarkko V.
    Timonen, Hilkka
    Pandolfi, Marco
    Alastuey, Andres
    Querol, Xavier
    Hussein, Tareq
    Petaja, Tuukka
    ENVIRONMENT INTERNATIONAL, 2024, 184
  • [48] Interpretable Machine Learning for Finding Intermediate-mass Black Holes
    Pasquato, Mario
    Trevisan, Piero
    Askar, Abbas
    Lemos, Pablo
    Carenini, Gaia
    Mapelli, Michela
    Hezaveh, Yashar
    ASTROPHYSICAL JOURNAL, 2024, 965 (01)
  • [49] Ensemble Based Extreme Learning Machine
    Liu, Nan
    Wang, Han
    IEEE SIGNAL PROCESSING LETTERS, 2010, 17 (08) : 754 - 757
  • [50] Interpretable Companions for Black-Box Models
    Pan, Danqing
    Wang, Tong
    Hara, Satoshi
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 2444 - 2453