Toward interpretable machine learning: evaluating models of heterogeneous predictions

Times Cited: 0
Authors
Zhang, Ruixun [1 ,2 ,3 ,4 ]
Affiliations
[1] Peking Univ, Sch Math Sci, Beijing, Peoples R China
[2] Peking Univ, Ctr Stat Sci, Beijing, Peoples R China
[3] Peking Univ, Natl Engn Lab Big Data Anal & Applicat, Beijing, Peoples R China
[4] Peking Univ, Lab Math Econ & Quantitat Finance, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Machine learning; Interpretability; Heterogeneous prediction; Bayesian statistics; Loan default; SYSTEMIC RISK; FINANCE; DEFAULT; GAME; GO;
DOI
10.1007/s10479-024-06033-1
Chinese Library Classification (CLC)
C93 [Management]; O22 [Operations Research];
Discipline Classification Codes
070105; 12; 1201; 1202; 120202
Abstract
AI and machine learning have made significant progress in the past decade, powering many applications in FinTech and beyond. But few machine learning models, especially deep learning models, are interpretable by humans, creating challenges for risk management and model improvements. Here, we propose a simple yet powerful framework to evaluate and interpret any black-box model with binary outcomes and explanatory variables, and heterogeneous relationships between the two. Our new metric, the signal success share (SSS) cross-entropy loss, measures how well the model captures the relationship along any feature or dimension, thereby providing actionable guidance on model improvements. Simulations demonstrate that our metric works for heterogeneous and nonlinear predictions, and distinguishes itself from traditional loss functions in evaluating model interpretability. We apply the methodology to an example of predicting loan defaults with real data. Our framework is more broadly applicable to a wide range of problems in financial and information technology.
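Note: the SSS cross-entropy loss itself is defined in the full text and is not reproduced here. As a rough, hypothetical sketch of the general idea of scoring a black-box binary classifier along a single explanatory variable, the Python snippet below computes an ordinary cross-entropy loss within quantile bins of one feature; the function name, the quantile binning scheme, and the bin count are illustrative assumptions, not the paper's definitions.

import numpy as np

def binned_cross_entropy(y_true, y_prob, feature, n_bins=10, eps=1e-12):
    """Cross-entropy loss of a binary classifier within quantile bins of one feature.

    A generic per-dimension diagnostic, NOT the SSS metric from the paper;
    the quantile binning and the bin count are illustrative assumptions.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    feature = np.asarray(feature, dtype=float)

    # Quantile edges so each bin holds roughly the same number of observations.
    edges = np.quantile(feature, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, feature, side="right") - 1, 0, n_bins - 1)

    losses = {}
    for b in range(n_bins):
        mask = bins == b
        if not mask.any():
            continue
        # Standard binary cross-entropy restricted to the observations in bin b.
        losses[b] = -np.mean(y_true[mask] * np.log(y_prob[mask])
                             + (1.0 - y_true[mask]) * np.log(1.0 - y_prob[mask]))
    return losses

# Toy usage: a model that understates the feature's effect shows larger losses
# in the tails of the feature distribution than a well-specified model would.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = (rng.uniform(size=1000) < 1.0 / (1.0 + np.exp(-x))).astype(float)
p_hat = 1.0 / (1.0 + np.exp(-0.5 * x))  # deliberately miscalibrated predictions
print(binned_cross_entropy(y, p_hat, x, n_bins=5))

In this sketch, a bin whose loss is much larger than the others flags a region of the feature where the fitted model misses the outcome-feature relationship, which is the kind of per-dimension diagnostic the abstract describes.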
Pages: 21
Related Papers (50 records)
  • [31] On the importance of interpretable machine learning predictions to inform clinical decision making in oncology
    Lu, Sheng-Chieh
    Swisher, Christine L.
    Chung, Caroline
    Jaffray, David
    Sidey-Gibbons, Chris
    FRONTIERS IN ONCOLOGY, 2023, 13
  • [32] Evaluating Wear Volume of Oligoether Esters with an Interpretable Machine Learning Approach
    Wang, Hanwen
    Zhang, Chunhua
    Yu, Xiaowen
    Li, Yangyang
    TRIBOLOGY LETTERS, 2023, 71 (02)
  • [33] Interpretable machine learning for evaluating risk factors of freeway crash severity
    Samerei, Seyed Alireza
    Aghabayk, Kayvan
    INTERNATIONAL JOURNAL OF INJURY CONTROL AND SAFETY PROMOTION, 2024, 31 (03) : 534 - 550
  • [35] Interpretable Machine Learning
    Chen, V.
    Li, J.
    Kim, J. S.
    Plumb, G.
    Talwalkar, A.
    QUEUE, 2021, 19 (06): 28 - 56
  • [36] Epileptic seizure detection by using interpretable machine learning models
    Zhao, Xuyang
    Yoshida, Noboru
    Ueda, Tetsuya
    Sugano, Hidenori
    Tanaka, Toshihisa
    JOURNAL OF NEURAL ENGINEERING, 2023, 20 (01)
  • [37] Interpretable machine learning models for COPD ease of breathing estimation
    Kok, Thomas T.
    Morales, John
    Deschrijver, Dirk
    Blanco-Almazan, Dolores
    Groenendaal, Willemijn
    Ruttens, David
    Smeets, Christophe
    Mihajlovic, Vojkan
    Ongenae, Femke
    Van Hoecke, Sofie
    MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2025
  • [38] Interpretable Catalysis Models Using Machine Learning with Spectroscopic Descriptors
    Wang, Song
    Jiang, Jun
    ACS CATALYSIS, 2023, 13 (11) : 7428 - 7436
  • [39] Neural Additive Models: Interpretable Machine Learning with Neural Nets
    Agarwal, Rishabh
    Melnick, Levi
    Frosst, Nicholas
    Zhang, Xuezhou
    Lengerich, Ben
    Caruana, Rich
    Hinton, Geoffrey E.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [40] Machine learning of material properties: Predictive and interpretable multilinear models
    Allen, Alice E. A.
    Tkatchenko, Alexandre
    SCIENCE ADVANCES, 2022, 8 (18)