Challenging the Performance-Interpretability Trade-Off: An Evaluation of Interpretable Machine Learning Models

Cited by: 0
Authors
Kruschel, Sven [1 ]
Hambauer, Nico [1 ]
Weinzierl, Sven [2 ]
Zilker, Sandra [2 ,3 ]
Kraus, Mathias [1 ]
Zschech, Patrick [4 ]
Affiliations
[1] Univ Regensburg, Chair Explainable AI Business Value Creat, Bajuwarenstr 4, D-93053 Regensburg, Germany
[2] Friedrich Alexander Univ Erlangen Nurnberg, Chair Digital Ind Serv Syst, Further Str 248, D-90429 Nurnberg, Germany
[3] TH Nurnberg Georg Simon Ohm, Professorship Business Analyt, Hohfederstr 40, D-90489 Nurnberg, Germany
[4] Univ Leipzig, Professorship Intelligent Informat Syst & Proc, Grimma Str 12, D-04109 Leipzig, Germany
Keywords
Decision support; Predictive analytics; Interpretable machine learning; Generalized additive models; Explainable artificial intelligence; EXPLANATIONS; REGRESSION; AI;
DOI
10.1007/s12599-024-00922-2
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Machine learning is permeating every conceivable domain to promote data-driven decision support. The focus is often on advanced black-box models due to their assumed performance advantages, whereas interpretable models are often associated with inferior predictive qualities. More recently, however, a new generation of generalized additive models (GAMs) has been proposed that offers promising properties for capturing complex, non-linear patterns while remaining fully interpretable. To uncover the merits and limitations of these models, the study examines the predictive performance of seven different GAMs in comparison to seven commonly used machine learning models based on a collection of twenty tabular benchmark datasets. To ensure a fair and robust model comparison, an extensive hyperparameter search combined with cross-validation was performed, resulting in 68,500 model runs. In addition, this study qualitatively examines the visual output of the models to assess their level of interpretability. Based on these results, the paper dispels the misconception that only black-box models can achieve high accuracy by demonstrating that there is no strict trade-off between predictive performance and model interpretability for tabular data. Furthermore, the paper discusses the importance of GAMs as powerful interpretable models for the field of information systems and derives implications for future work from a socio-technical perspective.
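Illustrative note: the model family evaluated here predicts through a sum of per-feature shape functions, g(E[y]) = beta_0 + f_1(x_1) + ... + f_p(x_p), which is what keeps every feature's contribution inspectable. The sketch below is not the authors' benchmark pipeline; it assumes the open-source interpret package, whose ExplainableBoostingClassifier is one representative of the newer GAM generation, and uses a stand-in scikit-learn dataset to show how such a model might be fitted and cross-validated in the spirit of the evaluation described above.

    # Minimal sketch, assuming the `interpret` package is installed; this is
    # NOT the paper's benchmark code, only an illustration of fitting one
    # modern GAM variant and scoring it with cross-validation.
    from interpret.glassbox import ExplainableBoostingClassifier
    from sklearn.datasets import load_breast_cancer      # stand-in tabular dataset
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    # The EBM learns one shape function per feature; its prediction is the sum
    # of these per-feature terms, so each contribution can be plotted directly.
    ebm = ExplainableBoostingClassifier(interactions=0, random_state=0)

    # 5-fold cross-validated ROC-AUC as a simple performance check.
    scores = cross_val_score(ebm, X, y, cv=5, scoring="roc_auc")
    print(f"Mean ROC-AUC over 5 folds: {scores.mean():.3f}")

    # Fit on all data and expose the learned shape functions for inspection.
    ebm.fit(X, y)
    explanation = ebm.explain_global()

In the interactive interpret dashboard, the resulting explanation object can be rendered (e.g., via interpret's show function) to display the per-feature plots; it is retrieved here only to indicate where the visual output examined in the study comes from.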
Pages: 25
Related Papers (50 in total)
  • [21] Simpler is better: Lifting interpretability-performance trade-off via automated feature engineering
    Gosiewska, Alicja
    Kozak, Anna
    Biecek, Przemyslaw
    DECISION SUPPORT SYSTEMS, 2021, 150
  • [22] Causality-Aided Trade-off Analysis for Machine Learning Fairness
    Ji, Zhenlan
    Ma, Pingchuan
    Wang, Shuai
    Li, Yanhui
    2023 38TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE, 2023, : 371 - 383
  • [23] Utility-Privacy Trade-Off in Distributed Machine Learning Systems
    Zeng, Xia
    Yang, Chuanchuan
    Dai, Bin
    ENTROPY, 2022, 24 (09)
  • [24] Optimizing Speed and Accuracy Trade-off in Machine Learning Models via Stochastic Gradient Descent Approximation
    Catapang, Jasper Kyle
    2022 9TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING & MACHINE INTELLIGENCE, ISCMI, 2022, : 124 - 128
  • [25] Leveraging the Trade-off between Accuracy and Interpretability in a Hybrid Intelligent System
    Wang, Di
    Quek, Chai
    Tan, Ah-Hwee
    Miao, Chunyan
    Ng, Geok See
    Zhou, You
    2017 INTERNATIONAL CONFERENCE ON SECURITY, PATTERN ANALYSIS, AND CYBERNETICS (SPAC), 2017, : 55 - 60
  • [26] Balancing the trade-off between accuracy and interpretability in software defect prediction
    Mori, Toshiki
    Uchihira, Naoshi
    EMPIRICAL SOFTWARE ENGINEERING, 2019, 24 (02) : 779 - 825
  • [28] Beyond the Bias Variance Trade-Off: A Mutual Information Trade-Off in Deep Learning
    Lan, Xinjie
    Zhu, Bin
    Boncelet, Charles
    Barner, Kenneth
    2021 IEEE 31ST INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2021,
  • [29] Interpretable Machine Learning Models for Practical Antimonate Electrocatalyst Performance
    Deo, Shyam
    Kreider, Melissa E.
    Kamat, Gaurav
    Hubert, McKenzie
    Zamora Zeledon, Jose A.
    Wei, Lingze
    Matthews, Jesse
    Keyes, Nathaniel
    Singh, Ishaan
    Jaramillo, Thomas F.
    Abild-Pedersen, Frank
    Burke Stevens, Michaela
    Winther, Kirsten
    Voss, Johannes
    CHEMPHYSCHEM, 2024, 25 (13)
  • [30] The Aggregation-Learning Trade-off
    Piezunka, Henning
    Aggarwal, Vikas A.
    Posen, Hart E.
    ORGANIZATION SCIENCE, 2022, 33 (03) : 1094 - 1115