Multi-modal deep learning for credit rating prediction using text and numerical data streams

Cited: 0
Authors
Tavakoli, Mahsa [1 ]
Chandra, Rohitash [2 ]
Tian, Fengrui [1 ]
Bravo, Cristian [1 ]
Affiliations
[1] Univ Western Ontario, Dept Stat & Actuarial Sci, London, ON N6A 5B7, Canada
[2] UNSW Sydney, Sch Math & Stat, Transit Artificial Intelligence Res Grp, Sydney 2052, Australia
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
Fusion strategies; Deep learning; Credit ratings; Multi-modality; BERT; CNN; Cross-attention; Earnings call transcripts; NEURAL-NETWORK MODELS; DEFAULT PREDICTION;
DOI
10.1016/j.asoc.2025.112771
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Knowing which factors are significant in credit rating assessments leads to better decision-making. However, the literature has so far focused mostly on structured data, and fewer studies have addressed unstructured or multi-modal datasets. In this paper, we present an analysis of the most effective architectures for fusing deep learning models to predict company credit rating classes, using structured and unstructured datasets of different types. We tested various combinations of fusion strategies with selected deep learning models, including convolutional neural networks (CNNs), variants of recurrent neural networks (RNNs), and pre-trained language models (BERT). We study data fusion strategies in terms of level (early and intermediate fusion) and technique (concatenation and cross-attention). Our results show that a CNN-based multi-modal model with a hybrid fusion strategy outperformed the other multi-modal techniques. In addition, by comparing simple architectures with more complex ones, we found that more sophisticated deep learning models do not necessarily produce the highest performance. Furthermore, we found that the text channel plays a more significant role than the numeric data, with the text contribution achieving an AUC of 0.91, while the maximum AUC of the numeric channels was 0.808. Finally, a comparison of the rating agencies over short-, medium-, and long-term horizons shows that Moody's credit ratings outperform those of other agencies such as Standard & Poor's and Fitch Ratings.
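The abstract describes the fusion design only at a high level. As a rough illustration, the following PyTorch sketch shows one way an intermediate-fusion model with cross-attention between a BERT-style pooled text embedding and a CNN-processed numeric stream could be wired together, followed by a concatenation-based classification head. The class name, layer sizes, single-query attention pattern, and output dimensions are illustrative assumptions and do not reproduce the authors' architecture.

```python
# Minimal sketch of intermediate fusion with cross-attention between a pooled
# text embedding (e.g. from BERT) and a numeric time-series channel.
# All dimensions and module choices here are hypothetical, not the paper's model.
import torch
import torch.nn as nn


class CrossAttentionFusionClassifier(nn.Module):
    def __init__(self, text_dim=768, num_features=20, hidden_dim=128, n_classes=5):
        super().__init__()
        # Numeric channel: 1D CNN over a sequence of numeric feature vectors.
        self.num_cnn = nn.Sequential(
            nn.Conv1d(num_features, hidden_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Project the text embedding into the same hidden space.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # Cross-attention: the text representation queries the numeric sequence.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        # Fusion head: concatenate the attended numeric context with the text query.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_classes),
        )

    def forward(self, text_emb, num_seq):
        # text_emb: (batch, text_dim); num_seq: (batch, time_steps, num_features)
        num_feat = self.num_cnn(num_seq.transpose(1, 2)).transpose(1, 2)   # (B, T, H)
        query = self.text_proj(text_emb).unsqueeze(1)                      # (B, 1, H)
        context, _ = self.cross_attn(query, num_feat, num_feat)            # (B, 1, H)
        fused = torch.cat([query.squeeze(1), context.squeeze(1)], dim=-1)  # (B, 2H)
        return self.classifier(fused)                                      # rating-class logits


# Example forward pass on random inputs (batch of 8, 12 time steps, 20 numeric features).
model = CrossAttentionFusionClassifier()
logits = model(torch.randn(8, 768), torch.randn(8, 12, 20))
print(logits.shape)  # torch.Size([8, 5])
```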
Pages: 17