Adversarial Multi-Task Learning for Robust End-to-End ECG-based Heartbeat Classification

Cited by: 0
Authors
Shahin, Mostafa [1 ]
Oo, Ethan [1 ]
Ahmed, Beena [1 ]
Affiliations
[1] University of New South Wales, School of Electrical Engineering and Telecommunications, Sydney, NSW 2052, Australia
Keywords
ARRHYTHMIA DETECTION
DOI
10.1109/embc44109.2020.9175640
Chinese Library Classification (CLC)
R318 [Biomedical Engineering]
Subject Classification Code
0831
Abstract
In clinical practice, heart arrhythmias are diagnosed manually by a doctor, which is a time-consuming process. Furthermore, this process is error-prone due to noise from the recording equipment and biological non-idealities of patients. An automated arrhythmia classifier would therefore be time- and cost-effective and could generalize better across patients. In this paper, we propose an adversarial multi-task learning method to improve the generalization of heartbeat arrhythmia classification. We built an end-to-end deep neural network (DNN) system consisting of three sub-networks: a generator, a heartbeat-type discriminator, and a subject (or patient) discriminator. Each of the two discriminators had its own loss function to control its impact on the generator. The generator was "friendly" to the heartbeat-type discrimination task, minimizing its loss function, and "hostile" to the subject discrimination task, maximizing its loss function. The network was trained on raw ECG signals to discriminate between five types of heartbeats: normal heartbeats, right bundle branch blocks (RBBB), premature ventricular contractions (PVC), paced beats (PB), and fusion of ventricular and normal beats (FVN). The method was tested on the MIT-BIH arrhythmia dataset and achieved a 17% reduction in classification error compared to a baseline fully-connected DNN classifier.
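One common way to realize the friendly/hostile objectives described in the abstract is a gradient-reversal layer, as used in domain-adversarial training: the layer is the identity on the forward pass and flips the gradient sign on the backward pass, so both discriminators minimize their own losses while the generator effectively maximizes the subject discriminator's loss. The PyTorch sketch below is a minimal illustration under that assumption, not the authors' implementation; the layer sizes, the reversal weight lam, and the subject count (47 patients in MIT-BIH) are placeholder choices, and the paper's exact loss weighting may differ.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; sign-flipped gradient (scaled by lam)
    # on the backward pass.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversarialECGNet(nn.Module):
    def __init__(self, n_beat_types=5, n_subjects=47, lam=0.1):
        super().__init__()
        self.lam = lam
        # Generator: 1D conv encoder over a raw single-lead ECG window.
        self.generator = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # "Friendly" head: heartbeat-type discriminator (5 classes).
        self.beat_head = nn.Linear(32, n_beat_types)
        # "Hostile" head: subject discriminator behind gradient reversal.
        self.subject_head = nn.Linear(32, n_subjects)

    def forward(self, x):
        z = self.generator(x)                 # shared features
        beat_logits = self.beat_head(z)       # generator minimizes this loss
        rev = GradReverse.apply(z, self.lam)  # gradients reach the generator negated
        subj_logits = self.subject_head(rev)  # generator maximizes this loss
        return beat_logits, subj_logits

model = AdversarialECGNet()
x = torch.randn(8, 1, 360)                    # batch of raw ECG windows
beat_logits, subj_logits = model(x)
beat_y = torch.randint(0, 5, (8,))            # heartbeat-type labels
subj_y = torch.randint(0, 47, (8,))           # subject (patient) labels
loss = (nn.functional.cross_entropy(beat_logits, beat_y)
        + nn.functional.cross_entropy(subj_logits, subj_y))
loss.backward()

With the reversal in place, a single joint backward pass trains both discriminators normally while pushing the generator toward subject-invariant features, which is consistent with the abstract's claim that removing patient-specific structure improves generalization across patients.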
Pages: 341-344
Page count: 4
Related Papers
50 items in total
  • [41] Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration
    Rahmatizadeh, Rouhollah
    Abolghasemi, Pooya
    Boloni, Ladislau
    Levine, Sergey
    2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2018, : 3758 - 3765
  • [42] Multi-task and multi-view training for end-to-end relation extraction
    Zhang, Junchi
    Zhang, Yue
    Ji, Donghong
    Liu, Mengchi
    NEUROCOMPUTING, 2019, 364 : 245 - 253
  • [43] ASR Posterior-based Loss for Multi-task End-to-end Speech Translation
    Ko, Yuka
    Sudoh, Katsuhito
    Sakti, Sakriani
    Nakamura, Satoshi
    INTERSPEECH 2021, 2021, : 2272 - 2276
  • [44] An End-to-End Multi-Task and Fusion CNN for Inertial-Based Gait Recognition
    Delgado-Escano, Ruben
    Castro, Francisco M.
    Cozar, Julian Ramos
    Marin-Jimenez, Manuel J.
    Guil, Nicolas
    IEEE ACCESS, 2019, 7 : 1897 - 1908
  • [45] Single-Channel ECG-Based Sleep Stage Classification With End-To-End Trainable Deep Neural Networks
    Choi, Iksoo
    Sung, Wonyong
    2023 45TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY, EMBC, 2023,
  • [46] End-to-end Multi-task Learning Framework for Spatio-Temporal Grounding in Video Corpus
    Gao, Yingqi
    Luo, Zhiling
    Chen, Shiqian
    Zhou, Wei
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 3958 - 3962
  • [48] ATTENTION-AUGMENTED END-TO-END MULTI-TASK LEARNING FOR EMOTION PREDICTION FROM SPEECH
    Zhang, Zixing
    Wu, Bingwen
    Schuller, Bjoern
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 6705 - 6709
  • [49] An end-to-end multi-task learning to link framework for emotion-cause pair extraction
    Song, Haolin
    Song, Dawei
    2021 INTERNATIONAL CONFERENCE ON IMAGE, VIDEO PROCESSING, AND ARTIFICIAL INTELLIGENCE, 2021, 12076
  • [50] An effective multi-task learning model for end-to-end emotion-cause pair extraction
    Li, Chenbing
    Hu, Jie
    Li, Tianrui
    Du, Shengdong
    Teng, Fei
    APPLIED INTELLIGENCE, 2023, 53 (03) : 3519 - 3529