Illuminating the black box: An interpretable machine learning based on ensemble trees

Cited by: 0
Authors
Lee, Yue-Shi [1 ]
Yen, Show-Jane [1 ]
Jiang, Wendong [2 ]
Chen, Jiyuan [3 ]
Chang, Chih-Yung [2 ]
Affiliations
[1] Ming Chuan Univ, Dept Comp Sci & Informat Engn, Taoyuan City 333, Taiwan
[2] Tamkang Univ, Dept Comp Sci & Informat Engn, New Taipei 25137, Taiwan
[3] Univ Melbourne, Fac Engn & Informat Technol, Parkville, Vic 3052, Australia
Keywords
Interpretable machine learning; Machine learning; Explanation
DOI
10.1016/j.eswa.2025.126720
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep learning has achieved significant success in analyzing unstructured data, but its inherent black-box nature imposes serious limitations in security-sensitive domains. Although many existing interpretable machine learning methods partially address this issue, they often suffer from model restrictions, randomness in their explanations, and a lack of global interpretability. To address these challenges, this paper introduces EnEXP, an interpretable ensemble-tree method. EnEXP generates a sample set by applying fixed masking perturbations to an individual sample, constructs multiple decision trees on that set using bagging and boosting techniques, and derives an explanation from the feature-importance outputs of those trees; aggregating the insights from all samples then yields a global interpretation of the entire dataset. Experimental results demonstrate that EnEXP provides stronger explanatory power than other interpretable methods. In text-processing experiments, a bag-of-words model optimized by EnEXP outperformed a fine-tuned GPT-3 Ada model.
Pages: 19