Toward Interpretable Machine Learning: Constructing Polynomial Models Based on Feature Interaction Trees

Cited by: 2
Authors
Jang, Jisoo [1]
Kim, Mina [1]
Bui, Tien-Cuong [1]
Li, Wen-Syan [1]
Affiliations
[1] Seoul Natl Univ, 1 Gwanak Ro Gwanak Gu, Seoul, South Korea
Funding
National Research Foundation of Singapore
Keywords
eXplainable AI; transparent models; polynomial model; explainability evaluation
DOI
10.1007/978-3-031-33377-4_13
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
As AI is applied in ever more decision-making processes, from loan approval to predictive policing, the interpretability of machine learning models is increasingly important. Interpretable models and post-hoc explainability are the two main approaches in eXplainable AI (XAI). We follow the argument that transparent models, rather than black-box ones, should be used in real-world applications, especially for high-stakes decisions. In this paper, we propose PolyFIT to address two major issues in XAI: (1) bridging the gap between black-box and interpretable models and (2) experimentally validating the trade-off between model performance and explainability. PolyFIT is a novel polynomial model construction method guided by knowledge of the feature interactions in a black-box model: the extracted interaction knowledge is used to build interaction trees, which are then transformed into polynomial models. We evaluate the predictive performance of PolyFIT against baselines on four publicly available data sets: Titanic survival, Adult income, Boston house price, and California house price. On average, our method outperforms linear models by 5% on classification tasks and 56% on regression tasks. We also conducted usability studies to characterize the trade-off between model performance and explainability; the results validate our hypotheses about this relationship.
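The abstract describes a three-step pipeline: extract feature-interaction knowledge from a black-box model, organize it into interaction trees, and transform the trees into a polynomial model. Below is a minimal Python sketch of that pipeline; it is not the authors' PolyFIT implementation. The interaction_score proxy, the median-based perturbation, and the pair budget of 5 are all illustrative assumptions, and the tree-building step is simplified to keeping the top-scoring pairs.

```python
# Minimal sketch of the PolyFIT idea from the abstract (not the authors' code).
import numpy as np
from itertools import combinations
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True)  # one of the paper's data sets
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: fit the black-box model whose interaction knowledge we extract.
black_box = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

def interaction_score(model, X, i, j, n_samples=200):
    """Crude proxy for the strength of the (i, j) interaction: how far the
    model's response to perturbing both features jointly deviates from the
    sum of its responses to perturbing each alone (zero for additive models)."""
    rng = np.random.default_rng(0)
    S = X[rng.choice(len(X), size=min(n_samples, len(X)), replace=False)]
    base = model.predict(S)

    def shift(cols):
        P = S.copy()
        P[:, cols] = np.median(X[:, cols], axis=0)  # neutralize these features
        return model.predict(P) - base

    return float(np.mean((shift([i, j]) - shift([i]) - shift([j])) ** 2))

# Step 2: score all feature pairs and keep the strongest interactions
# (standing in for the paper's interaction-tree construction).
d = X.shape[1]
scores = {(i, j): interaction_score(black_box, X_tr, i, j)
          for i, j in combinations(range(d), 2)}
top_pairs = sorted(scores, key=scores.get, reverse=True)[:5]

# Step 3: build a transparent polynomial model -- an ordinary linear
# regression over the main effects plus one product term per kept pair.
def poly_features(X, pairs):
    return np.column_stack([X] + [X[:, i] * X[:, j] for i, j in pairs])

glass_box = LinearRegression().fit(poly_features(X_tr, top_pairs), y_tr)
print("held-out R^2:", glass_box.score(poly_features(X_te, top_pairs), y_te))
```

Because the resulting model is linear in named main-effect and product terms, each coefficient can be read off directly, which is the transparency property the paper argues for.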
Pages: 159-170
Page count: 12