Learning to Adapt Dynamic Clinical Event Sequences with Residual Mixture of Experts

Cited by: 1
Authors
Lee, Jeong Min [1 ]
Hauskrecht, Milos [1 ]
Affiliations
[1] Univ Pittsburgh, Dept Comp Sci, Pittsburgh, PA 15260 USA
DOI
10.1007/978-3-031-09342-5_15
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Clinical event sequences in Electronic Health Records (EHRs) record detailed information about a patient's condition and care as they occur over time. Recent years have witnessed growing interest in the machine learning community in developing models that solve various prediction problems defined on EHR data. More recently, neural sequential models such as RNNs and LSTMs have become popular, widely applied models for representing patient sequence data and for predicting future events or outcomes from such data. However, a single neural sequential model may not properly represent the complex dynamics of all patients and the differences in their behaviors. In this work, we aim to alleviate this limitation by refining a one-fits-all model with a Mixture-of-Experts (MoE) architecture. The architecture consists of multiple (expert) RNN models that cover patient sub-populations and refine the predictions of the base model. That is, instead of training the expert RNN models from scratch, we define them on the residual signal that models the differences from the population-wide model. In particular, rather than training the MoE directly from scratch, we augment it with the prediction signal of a pretrained base GRU model. In this way, the mixture of experts can flexibly adapt to the (limited) predictive power of the single base RNN model. We evaluate the proposed model on real-world EHR data and a multivariate clinical event prediction task, implementing all RNNs with Gated Recurrent Units (GRUs). We show a 4.1% gain in AUPRC compared to a single RNN prediction.
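To make the residual refinement concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a pretrained, frozen base GRU produces population-wide event predictions, and a small set of gated expert GRUs learns only a residual correction on top of them. All class names, dimensions, and the specific gating scheme are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a residual Mixture-of-Experts over a pretrained base GRU.
# Hypothetical names and shapes; the paper's gating and training may differ.
import torch
import torch.nn as nn

class ResidualMoE(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_events, num_experts):
        super().__init__()
        # Population-wide base model; in practice its pretrained weights
        # would be loaded here, and it is kept frozen during MoE training.
        self.base_rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.base_out = nn.Linear(hidden_dim, num_events)
        for p in list(self.base_rnn.parameters()) + list(self.base_out.parameters()):
            p.requires_grad = False  # base model is pretrained and fixed
        # Expert RNNs that model residuals for patient sub-populations.
        self.experts = nn.ModuleList(
            [nn.GRU(input_dim, hidden_dim, batch_first=True) for _ in range(num_experts)]
        )
        self.expert_out = nn.ModuleList(
            [nn.Linear(hidden_dim, num_events) for _ in range(num_experts)]
        )
        # Gating network: mixture weights over experts per patient state.
        self.gate = nn.Linear(hidden_dim, num_experts)

    def forward(self, x):
        # x: (batch, seq_len, input_dim) multivariate clinical event sequence
        base_h, _ = self.base_rnn(x)
        base_logits = self.base_out(base_h[:, -1])  # base prediction signal
        weights = torch.softmax(self.gate(base_h[:, -1].detach()), dim=-1)
        # Each expert contributes a gated residual correction to the base logits.
        residual = torch.zeros_like(base_logits)
        for k, (rnn, out) in enumerate(zip(self.experts, self.expert_out)):
            h_k, _ = rnn(x)
            residual = residual + weights[:, k:k + 1] * out(h_k[:, -1])
        # Final multivariate event probabilities: base prediction + gated residual.
        return torch.sigmoid(base_logits + residual)

# Example usage with toy dimensions:
model = ResidualMoE(input_dim=128, hidden_dim=64, num_events=50, num_experts=4)
probs = model(torch.randn(8, 20, 128))  # (batch=8, seq_len=20) -> (8, 50)
```

In this sketch the gating weights are computed from the base model's hidden state, so only the experts and the gate receive gradients; the base prediction acts as the fixed signal the residual experts adapt to.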
Pages: 155-166
Page count: 12