Learning to Adapt Dynamic Clinical Event Sequences with Residual Mixture of Experts

Cited: 1
Authors
Lee, Jeong Min [1]
Hauskrecht, Milos [1]
Affiliations
[1] Univ Pittsburgh, Dept Comp Sci, Pittsburgh, PA 15260 USA
DOI
10.1007/978-3-031-09342-5_15
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Clinical event sequences in Electronic Health Records (EHRs) record detailed information about the patient's condition and care as they occur in time. Recent years have seen increased interest from the machine learning community in developing models that solve various problems defined on EHR data. More recently, neural sequential models such as RNNs and LSTMs have become popular, widely applied models for representing patient sequence data and for predicting future events or outcomes from such data. However, a single neural sequential model may not properly represent the complex dynamics of all patients and the differences in their behaviors. In this work, we aim to alleviate this limitation by refining a one-fits-all model with a Mixture-of-Experts (MoE) architecture. The architecture consists of multiple expert RNN models that cover patient sub-populations and refine the predictions of the base model. That is, instead of training the expert RNN models from scratch, we define them on the residual signal that models the differences from the population-wide model; the heterogeneity of patient sequences is captured by these multiple RNN experts. In particular, rather than training the MoE from scratch, we build it on the prediction signal of a pretrained base GRU model. In this way, the mixture of experts can flexibly adapt to the (limited) predictive power of the single base RNN model. We evaluate the proposed model on real-world EHR data and a multivariate clinical event prediction task, implementing the RNNs with Gated Recurrent Units (GRUs), and show a 4.1% gain in AUPRC compared to a single RNN prediction.
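The residual-MoE idea in the abstract can be sketched numerically: a frozen base prediction head is corrected by gate-weighted residual outputs from several expert heads. The sketch below uses NumPy with random weights in place of trained GRU hidden states and heads; all names (`W_base`, `W_expert`, `W_gate`, `predict`) and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_events, n_experts, d_hidden = 6, 3, 8

# A "pretrained" base head plus expert and gating heads; in the paper
# these would sit on top of GRU hidden states of the patient sequence.
W_base = rng.normal(size=(d_hidden, n_events))
W_expert = rng.normal(size=(n_experts, d_hidden, n_events)) * 0.1
W_gate = rng.normal(size=(d_hidden, n_experts))

def predict(h):
    """h: (batch, d_hidden) hidden states; returns per-event probabilities."""
    base_logits = h @ W_base                            # one-fits-all prediction
    residuals = np.einsum("bd,kde->bke", h, W_expert)   # per-expert corrections
    gates = softmax(h @ W_gate)                         # (batch, n_experts), sums to 1
    # Final logits = base prediction + gate-weighted residual corrections.
    logits = base_logits + np.einsum("bk,bke->be", gates, residuals)
    return sigmoid(logits)                              # multivariate event probabilities

h = rng.normal(size=(4, d_hidden))
probs = predict(h)
print(probs.shape)  # (4, 6)
```

Because the experts only model the residual from the base model, their initial small-magnitude outputs leave the pretrained prediction nearly intact, and training can focus them on sub-populations where the base model errs.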
Pages: 155-166
Page count: 12
Related papers
50 in total
  • [1] Mixture of Experts Residual Learning for Hamming Hashing
    Jinyu Xu
    Qing Xie
    Jiachen Li
    Yanchun Ma
    Yuhan Liu
    Neural Processing Letters, 2023, 55 : 7077 - 7093
  • [3] Learning dynamic event descriptions in image sequences
    Veeraraghavan, Harini
    Papanikolopoulos, Nikolaos
    Schrater, Paul
    2007 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-8, 2007, : 807 - +
  • [4] Dynamic learning in behavioral games: A hidden Markov mixture of experts approach
    Ansari, Asim
    Montoya, Ricardo
    Netzer, Oded
    QME-QUANTITATIVE MARKETING AND ECONOMICS, 2012, 10 (04): : 475 - 503
  • [6] Adaptive Anomaly Detection for Dynamic Clinical Event Sequences
    Niu, Haoran
    Omitaomu, Olufemi A.
    Cao, Qing C.
    Olama, Mohammad
    Ozmen, Ozgur
    Klasky, Hilda
    Pullum, Laura
    Malviya, Addi Thakur
    Kuruganti, Teja
    Scott, Jeanie
    Laurio, Angela
    Drews, Frank
    Sauer, Brian
    Ward, Merry
    Nebeker, Jonathan
    2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020, : 4919 - 4928
  • [7] A model for learning to segment temporal sequences, utilizing a mixture of RNN experts together with adaptive variance
    Namikawa, Jun
    Tani, Jun
    NEURAL NETWORKS, 2008, 21 (10) : 1466 - 1475
  • [8] Robust learning algorithm for the mixture of experts
    Allende, H
    Torres, R
    Salas, R
    Moraga, C
    PATTERN RECOGNITION AND IMAGE ANALYSIS, PROCEEDINGS, 2003, 2652 : 19 - 27
  • [9] A Mixture of Experts with Adaptive Semantic Encoding for Event Detection
    Li, Zhongqiu
    Hong, Yu
    He, Shiming
    Yang, Shuai
    Zhou, Guodong
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [10] Dynamic Mixture of Experts Models for Online Prediction
    Munezero, Parfait
    Villani, Mattias
    Kohn, Robert
    TECHNOMETRICS, 2022, : 257 - 268