Toward interpretable machine learning: evaluating models of heterogeneous predictions

Cited by: 0
Authors
Zhang, Ruixun [1, 2, 3, 4]
Affiliations
[1] Peking Univ, Sch Math Sci, Beijing, Peoples R China
[2] Peking Univ, Ctr Stat Sci, Beijing, Peoples R China
[3] Peking Univ, Natl Engn Lab Big Data Anal & Applicat, Beijing, Peoples R China
[4] Peking Univ, Lab Math Econ & Quantitat Finance, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Machine learning; Interpretability; Heterogeneous prediction; Bayesian statistics; Loan default; SYSTEMIC RISK; FINANCE; DEFAULT; GAME; GO;
DOI
10.1007/s10479-024-06033-1
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research]
Discipline codes
070105; 12; 1201; 1202; 120202
Abstract
AI and machine learning have made significant progress in the past decade, powering many applications in FinTech and beyond. Yet few machine learning models, especially deep learning models, are interpretable by humans, creating challenges for risk management and model improvement. Here, we propose a simple yet powerful framework to evaluate and interpret any black-box model with binary outcomes and explanatory variables, and heterogeneous relationships between the two. Our new metric, the signal success share (SSS) cross-entropy loss, measures how well the model captures the relationship along any feature or dimension, thereby providing actionable guidance on model improvements. Simulations demonstrate that our metric works for heterogeneous and nonlinear predictions, and distinguishes itself from traditional loss functions in evaluating model interpretability. We apply the methodology to an example of predicting loan defaults with real data. Our framework applies more broadly to a wide range of problems in finance and information technology.
Pages: 21
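The abstract describes evaluating a black-box binary classifier "along any feature or dimension" via a cross-entropy-style loss. The paper's exact SSS definition is not reproduced in this record; as a hedged illustration only, the sketch below evaluates a model with plain binary cross-entropy inside equal-count bins of a single feature, so that a disproportionately high loss in one bin flags a region the model fails to capture. The function names and binning scheme are my own illustrative choices, not the paper's method.

```python
import math

def cross_entropy(y_true, p_pred, eps=1e-12):
    """Mean binary cross-entropy between 0/1 outcomes and predicted probabilities."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / len(y_true)

def per_bin_cross_entropy(y_true, p_pred, feature, n_bins=4):
    """Cross-entropy computed within equal-count bins of one feature.

    Sorting by the feature and splitting into n_bins groups gives a
    per-region loss profile: a spike in one bin suggests the model
    misses the outcome relationship in that part of the feature space.
    """
    order = sorted(range(len(feature)), key=lambda i: feature[i])
    size = len(order) // n_bins
    losses = []
    for b in range(n_bins):
        lo = b * size
        hi = (b + 1) * size if b < n_bins - 1 else len(order)
        idx = order[lo:hi]
        losses.append(cross_entropy([y_true[i] for i in idx],
                                    [p_pred[i] for i in idx]))
    return losses
```

In a loan-default setting, for example, `feature` could be borrower income and `p_pred` the model's default probabilities; comparing the per-bin losses then shows whether the model is systematically worse for low-income or high-income borrowers.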