MaeFE: Masked Autoencoders Family of Electrocardiogram for Self-Supervised Pretraining and Transfer Learning

Cited by: 12
Authors
Zhang, Huaicheng [1 ]
Liu, Wenhan [1 ]
Shi, Jiguang [1 ]
Chang, Sheng [1 ]
Wang, Hao [1 ]
He, Jin [1 ]
Huang, Qijun [1 ]
Affiliations
[1] Wuhan Univ, Sch Phys & Technol, Wuhan 430072, Hubei, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Electrocardiography (ECG); mask autoencoder (MAE); pretraining; self-supervised learning; transfer learning;
DOI
10.1109/TIM.2022.3228267
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Electrocardiogram (ECG) is a universal diagnostic tool for heart disease and can provide data for deep learning. The scarcity of labeled data is a major challenge for medical artificial intelligence diagnosis, because acquiring labeled medical data is time-consuming and costly and requires medical specialists. As a generative self-supervised learning method, the masked autoencoder (MAE) can address this problem. This article proposes an MAE family of ECG (MaeFE). Considering the temporal and spatial features of ECG, MaeFE contains three customized masking modes: the masked time autoencoder (MTAE), the masked lead autoencoder (MLAE), and the masked lead and time autoencoder (MLTAE). MTAE and MLAE emphasize temporal and spatial features, respectively, while MLTAE is a multihead architecture that combines both. In the pretraining stage, ECG signals from the pretraining dataset are divided into patches and partially masked; the encoder maps unmasked patches to tokens, and the decoder reconstructs the masked ones. In downstream tasks, the pretrained encoder is reused as a classifier for arrhythmia classification on the downstream dataset, i.e., transfer learning. MaeFE outperforms the state-of-the-art self-supervised learning methods SimCLR, MoCo, CLOCS, and MaskUNet in downstream tasks, and MTAE shows the best overall performance. Compared with contrastive learning models, MTAE achieves at least a 5.18%, 11.80%, and 3.23% increase in accuracy (Acc), Macro-F1, and area under the curve (AUC), respectively, with a linear probe, and it outperforms the other models by 8.99% in Acc, 20.18% in Macro-F1, and 7.13% in AUC with fine-tuning. Experiments on multilabel arrhythmia classification, as another downstream task, further demonstrate the strong generalization of MaeFE. According to the experimental results, MaeFE is efficient and robust in downstream tasks. By overcoming the scarcity of labeled data, MaeFE surpasses other self-supervised learning methods and achieves satisfying performance; consequently, the proposed algorithm is well positioned to play a major role in practical applications.
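To make the two core masking modes concrete, the sketch below illustrates (in plain NumPy, not the authors' implementation) how an MTAE-style time mask and an MLAE-style lead mask could be produced from a multi-lead ECG. The patch length, mask ratios, and function names are illustrative assumptions; the abstract only states that signals are split into patches, partially masked, encoded from the visible patches, and reconstructed at the masked positions.

```python
# Minimal sketch (assumptions, not the paper's code) of the two masking
# strategies described in the abstract: MTAE hides random time patches in
# every lead, MLAE hides whole leads. Patch length and ratios are made up.
import numpy as np

def patchify(ecg, patch_len=50):
    """Split a (leads, samples) ECG into (leads, n_patches, patch_len)."""
    leads, samples = ecg.shape
    n_patches = samples // patch_len
    return ecg[:, :n_patches * patch_len].reshape(leads, n_patches, patch_len)

def mask_time(patches, ratio=0.75, seed=0):
    """MTAE-style masking: drop a random subset of time patches in all leads."""
    rng = np.random.default_rng(seed)
    leads, n_patches, _ = patches.shape
    mask = np.zeros(n_patches, dtype=bool)
    mask[rng.choice(n_patches, int(n_patches * ratio), replace=False)] = True
    visible = patches[:, ~mask, :]   # only these patches would reach the encoder
    return visible, mask             # the mask marks what the decoder reconstructs

def mask_lead(patches, ratio=0.5, seed=0):
    """MLAE-style masking: drop whole leads so spatial structure must be inferred."""
    rng = np.random.default_rng(seed)
    leads = patches.shape[0]
    mask = np.zeros(leads, dtype=bool)
    mask[rng.choice(leads, int(leads * ratio), replace=False)] = True
    visible = patches[~mask]         # only unmasked leads are encoded
    return visible, mask

# Example: a synthetic 12-lead, 10 s ECG sampled at 500 Hz.
ecg = np.random.randn(12, 5000)
patches = patchify(ecg)
vis_t, time_mask = mask_time(patches)
vis_l, lead_mask = mask_lead(patches)
print(patches.shape, vis_t.shape, vis_l.shape)
```

An MLTAE-style scheme, as described in the abstract, would combine both views in a multihead setup, feeding time-masked and lead-masked inputs to a shared backbone.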
Pages: 15