Multi-modal learning for inpatient length of stay prediction

Cited: 6
Authors
Chen, Junde [1]
Wen, Yuxin [1]
Pokojovy, Michael [2]
Tseng, Tzu-Liang [3]
McCaffrey, Peter [4]
Vo, Alexander [4]
Walser, Eric [4]
Moen, Scott [4]
Affiliations
[1] Chapman Univ, Dale E & Sarah Ann Fowler Sch Engn, Orange, CA 92866 USA
[2] Old Dominion Univ, Dept Math & Stat, Norfolk, VA 23529 USA
[3] Univ Texas El Paso, Dept Ind Mfg & Syst Engn, El Paso, TX 79968 USA
[4] Univ Texas Med Branch, Galveston, TX 77550 USA
Funding
US National Science Foundation
Keywords
Chest X-ray images; Data-fusion model; Length of stay prediction; Multi-modal learning; HOSPITAL MORTALITY
DOI
10.1016/j.compbiomed.2024.108121
Chinese Library Classification
Q [Biological Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
Predicting inpatient length of stay (LoS) is important for hospitals aiming to improve service efficiency and enhance management capabilities. Patient medical records are strongly associated with LoS. However, the diverse modalities, heterogeneity, and complexity of these data make it challenging to leverage them effectively in a model that accurately predicts LoS. To address this challenge, this study establishes a novel data-fusion model, termed DF-Mdl, that integrates heterogeneous clinical data to predict inpatient LoS between hospital admission and discharge. Multi-modal data such as demographic data, clinical notes, laboratory test results, and medical images are utilized in the proposed methodology, with an individual "basic" sub-model applied to each data modality. Specifically, a convolutional neural network (CNN) model, termed CRXMDL, is designed for chest X-ray (CXR) image data; two long short-term memory networks extract features from long text data; and a novel attention-embedded 1D convolutional neural network extracts useful information from numerical data. Finally, these basic models are integrated to form the data-fusion model (DF-Mdl) for inpatient LoS prediction. The proposed method attains the best R2 and EVAR values of 0.6039 and 0.6042 among competitors for LoS prediction on the Medical Information Mart for Intensive Care (MIMIC)-IV test dataset. Empirical evidence suggests better performance compared with other state-of-the-art (SOTA) methods, demonstrating the effectiveness and feasibility of the proposed approach.
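The fusion scheme the abstract describes can be illustrated with a minimal numpy sketch of feature-level fusion: each modality is mapped to a fixed-length embedding by its own encoder, the embeddings are concatenated, and a regression head predicts LoS. The random projections below are hypothetical stand-ins for the paper's trained sub-models (the CRXMDL image CNN, the LSTM text encoders, and the attention-embedded 1D CNN are not reproduced here), and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, out_dim, seed):
    """Hypothetical modality encoder: a fixed random projection with a
    tanh nonlinearity, standing in for a trained sub-model."""
    w = np.random.default_rng(seed).normal(size=(x.shape[-1], out_dim))
    return np.tanh(x @ w)  # shape: (batch, out_dim)

batch = 4
# Simulated raw inputs per modality (dimensions are assumptions).
img_feats  = encode(rng.normal(size=(batch, 128)), 32, seed=1)  # CXR image embedding
text_feats = encode(rng.normal(size=(batch, 64)),  32, seed=2)  # clinical-note embedding
num_feats  = encode(rng.normal(size=(batch, 16)),  32, seed=3)  # lab/demographic embedding

# Feature-level fusion: concatenate the per-modality embeddings,
# then apply a linear regression head to predict LoS (in days).
fused = np.concatenate([img_feats, text_feats, num_feats], axis=1)  # (batch, 96)
w_head = rng.normal(size=(fused.shape[1], 1)) * 0.1
los_pred = fused @ w_head  # (batch, 1) predicted length of stay
```

In the actual DF-Mdl each sub-model is learned end to end from its modality; the point of the sketch is only the late-fusion structure, where heterogeneous inputs meet in a shared embedding space before a single predictive head.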
Pages: 11