Toward interpretable machine learning: evaluating models of heterogeneous predictions

Cited by: 0
Authors
Zhang, Ruixun [1 ,2 ,3 ,4 ]
Affiliations
[1] Peking Univ, Sch Math Sci, Beijing, Peoples R China
[2] Peking Univ, Ctr Stat Sci, Beijing, Peoples R China
[3] Peking Univ, Natl Engn Lab Big Data Anal & Applicat, Beijing, Peoples R China
[4] Peking Univ, Lab Math Econ & Quantitat Finance, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Machine learning; Interpretability; Heterogeneous prediction; Bayesian statistics; Loan default; SYSTEMIC RISK; FINANCE; DEFAULT; GAME; GO;
DOI
10.1007/s10479-024-06033-1
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research];
Discipline classification codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
AI and machine learning have made significant progress in the past decade, powering many applications in FinTech and beyond. But few machine learning models, especially deep learning models, are interpretable by humans, creating challenges for risk management and model improvement. Here, we propose a simple yet powerful framework to evaluate and interpret any black-box model with binary outcomes and explanatory variables, and heterogeneous relationships between the two. Our new metric, the signal success share (SSS) cross-entropy loss, measures how well the model captures the relationship along any feature or dimension, thereby providing actionable guidance on model improvements. Simulations demonstrate that our metric works for heterogeneous and nonlinear predictions, and distinguishes itself from traditional loss functions in evaluating model interpretability. We apply the methodology to an example of predicting loan defaults with real data. Our framework applies more broadly to a wide range of problems in finance and information technology.
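The abstract does not define the SSS cross-entropy loss itself, but the underlying idea it describes, evaluating a black-box binary classifier's loss separately along a chosen feature to see where heterogeneous relationships are missed, can be illustrated with a generic sketch. The helper names, the quantile-bucketing scheme, and the toy loan-default data below are all assumptions for illustration, not the paper's actual metric:

```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    """Average binary cross-entropy between labels y and predicted probabilities p."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def loss_along_feature(x, y, p, n_buckets=5):
    """Cross-entropy of a black-box model's predictions p, computed separately
    within quantile buckets of feature x. A high loss concentrated in one bucket
    flags a region of the feature where the model fails to capture the
    (possibly heterogeneous or nonlinear) relationship."""
    edges = np.quantile(x, np.linspace(0, 1, n_buckets + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_buckets - 1)
    return np.array([cross_entropy(y[idx == b], p[idx == b])
                     for b in range(n_buckets)])

# Toy data: default probability rises with a loan feature (e.g. loan-to-value).
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 5000)               # hypothetical feature
true_p = 1 / (1 + np.exp(-(4 * x - 2)))   # true default probability
y = rng.binomial(1, true_p)               # observed defaults

p_good = true_p                           # "black box" matching the true relation
p_flat = np.full_like(x, y.mean())        # model that ignores the feature

per_bucket_good = loss_along_feature(x, y, p_good)
per_bucket_flat = loss_along_feature(x, y, p_flat)
```

Comparing the two per-bucket loss profiles shows why a feature-wise view is more informative than a single aggregate loss: the flat model's loss deteriorates most in the extreme buckets, pinpointing where it ignores the feature, whereas the aggregate numbers alone would only say it is worse overall.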
Pages: 21