Toward Interpretable Machine Learning: Constructing Polynomial Models Based on Feature Interaction Trees

Cited by: 2
Authors
Jang, Jisoo [1 ]
Kim, Mina [1 ]
Bui, Tien-Cuong [1 ]
Li, Wen-Syan [1 ]
Affiliations
[1] Seoul Natl Univ, 1 Gwanak Ro Gwanak Gu, Seoul, South Korea
Funding
National Research Foundation of Singapore;
Keywords
eXplainable AI; transparent models; polynomial model; explainability evaluation;
DOI
10.1007/978-3-031-33377-4_13
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As AI has been applied in many decision-making processes, ranging from loan application approval to predictive policing, the interpretability of machine learning models is increasingly important. Interpretable models and post-hoc explainability are the two main approaches in eXplainable AI (XAI). We follow the argument that transparent models should be used instead of black-box ones in real-world applications, especially for high-stakes decisions. In this paper, we propose PolyFIT to address two major issues in XAI: (1) bridging the gap between black-box and interpretable models and (2) experimentally validating the trade-off between model performance and explainability. PolyFIT is a novel polynomial model construction method guided by knowledge of feature interactions in black-box models. PolyFIT uses the extracted feature interaction knowledge to build interaction trees, which are then transformed into polynomial models. We evaluate the predictive performance of PolyFIT against baselines on four publicly available data sets: Titanic survival, Adult income, Boston house price, and California house price. Our method outperforms linear models by 5% and 56% on average in classification and regression tasks, respectively. We also conducted usability studies to characterize the trade-off between model performance and explainability; the studies validate our hypotheses about this relationship.
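The pipeline the abstract describes (extract feature-interaction knowledge from a black-box model, then build a polynomial whose terms mirror those interactions) can be sketched roughly as below. This is an illustrative stand-in, not the authors' PolyFIT implementation: here interaction strength is scored by correlating candidate pairwise product terms with a plain linear model's residuals, and the function name `fit_interaction_polynomial` and parameter `top_k` are hypothetical.

```python
import numpy as np

def fit_interaction_polynomial(X, y, top_k=2):
    """Fit y ~ intercept + linear terms + top-k pairwise product terms.

    A crude proxy for extracted interaction knowledge: each feature
    pair (i, j) is scored by how strongly the product X[:, i] * X[:, j]
    correlates with the residual of a plain linear least-squares fit.
    """
    n, d = X.shape
    A = np.column_stack([np.ones(n), X])          # design matrix: [1, x_1..x_d]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # baseline linear fit
    resid = y - A @ coef

    # Score every feature pair by |corr(product term, residual)|.
    scores = {}
    for i in range(d):
        for j in range(i + 1, d):
            p = X[:, i] * X[:, j]
            ps = p - p.mean()
            denom = np.linalg.norm(ps) * np.linalg.norm(resid)
            scores[(i, j)] = abs(ps @ resid) / denom if denom else 0.0
    pairs = sorted(scores, key=scores.get, reverse=True)[:top_k]

    # Refit with the selected interaction columns appended.
    cols = [X[:, i] * X[:, j] for i, j in pairs]
    A2 = np.column_stack([A] + cols)
    coef2, *_ = np.linalg.lstsq(A2, y, rcond=None)
    return pairs, coef2
```

On data generated with a genuine x0*x1 interaction, the sketch selects that pair and assigns its product term the correct coefficient, which is the intuition behind moving from a flat linear model to an interaction-aware polynomial.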
Pages: 159-170
Page count: 12
Related papers
50 records total
  • [21] Active Sampling for Learning Interpretable Surrogate Machine Learning Models
    Saadallah, Amal
    Morik, Katharina
    2020 IEEE 7TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA 2020), 2020, : 264 - 272
  • [22] Feature Blending: An Approach toward Generalized Machine Learning Models for Property Prediction
    Satsangi, Swanti
    Mishra, Avanish
    Singh, Abhishek K.
    ACS PHYSICAL CHEMISTRY AU, 2022, 2 (01): : 16 - 22
  • [24] Interpretable and intervenable ultrasonography-based machine learning models for pediatric appendicitis
    Marcinkevics, Ricards
    Wolfertstetter, Patricia Reis
    Klimiene, Ugne
    Chin-Cheong, Kieran
    Paschke, Alyssia
    Zerres, Julia
    Denzinger, Markus
    Niederberger, David
    Wellmann, Sven
    Ozkan, Ece
    Knorr, Christian
    Vogt, Julia E.
    MEDICAL IMAGE ANALYSIS, 2024, 91
  • [25] Interpretable Machine Learning Using Partial Linear Models
    Flachaire, Emmanuel
    Hue, Sullivan
    Laurent, Sebastien
    Hacheme, Gilles
    OXFORD BULLETIN OF ECONOMICS AND STATISTICS, 2024, 86 (03) : 519 - 540
  • [26] Interpretable Machine Learning Models for PISA Results in Mathematics
    Gomez-Talal, Ismael
    Bote-Curiel, Luis
    Luis Rojo-Alvarez, Jose
    IEEE ACCESS, 2025, 13 : 27371 - 27397
  • [27] Application of interpretable machine learning models for the intelligent decision
    Li, Yawen
    Yang, Liu
    Yang, Bohan
    Wang, Ning
    Wu, Tian
    NEUROCOMPUTING, 2019, 333 : 273 - 283
  • [28] Editorial: Interpretable and explainable machine learning models in oncology
    Hrinivich, William Thomas
    Wang, Tonghe
    Wang, Chunhao
    FRONTIERS IN ONCOLOGY, 2023, 13
  • [29] The coming of age of interpretable and explainable machine learning models
    Lisboa, P. J. G.
    Saralajew, S.
    Vellido, A.
    Fernandez-Domenech, R.
    Villmann, T.
    NEUROCOMPUTING, 2023, 535 : 25 - 39
  • [30] Progress Toward Interpretable Machine Learning-Based Disruption Predictors Across Tokamaks
    Rea, C.
    Montes, K. J.
    Pau, A.
    Granetz, R. S.
    Sauter, O.
    FUSION SCIENCE AND TECHNOLOGY, 2020, 76 (08) : 912 - 924