MaeFE: Masked Autoencoders Family of Electrocardiogram for Self-Supervised Pretraining and Transfer Learning

Cited by: 12
Authors:
Zhang, Huaicheng [1]
Liu, Wenhan [1]
Shi, Jiguang [1]
Chang, Sheng [1]
Wang, Hao [1]
He, Jin [1]
Huang, Qijun [1]
Affiliations:
[1] Wuhan Univ, Sch Phys & Technol, Wuhan 430072, Hubei, Peoples R China
Funding:
National Natural Science Foundation of China
Keywords:
Electrocardiography (ECG); mask autoencoder (MAE); pretraining; self-supervised learning; transfer learning;
DOI:
10.1109/TIM.2022.3228267
CLC Classification:
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Codes:
0808; 0809
Abstract:
The electrocardiogram (ECG) is a universal diagnostic tool for heart disease and a rich data source for deep learning. However, the scarcity of labeled data is a major challenge for medical artificial intelligence, because labeling requires medical specialists and is therefore time-consuming and costly. As a generative self-supervised learning method, the masked autoencoder (MAE) can address this problem. This article proposes the MAE family of ECG (MaeFE). Considering the temporal and spatial features of ECG, MaeFE contains three customized masking modes: the masked time autoencoder (MTAE), the masked lead autoencoder (MLAE), and the masked lead and time autoencoder (MLTAE). MTAE and MLAE emphasize temporal and spatial features, respectively, while MLTAE is a multihead architecture that combines the two. In the pretraining stage, ECG signals from the pretraining dataset are divided into patches and partially masked; the encoder maps unmasked patches to tokens, and the decoder reconstructs the masked ones. In downstream tasks, the pretrained encoder is reused as a classifier for arrhythmia classification on the downstream dataset, a process known as transfer learning. MaeFE outperforms state-of-the-art self-supervised learning methods, namely SimCLR, MoCo, CLOCS, and MaskUNet, on downstream tasks, with MTAE showing the best overall performance. Compared to contrastive learning models, MTAE achieves at least a 5.18% increase in accuracy (Acc), 11.80% in Macro-F1, and 3.23% in area under the curve (AUC) using the linear probe, and it outperforms the other models by 8.99% in Acc, 20.18% in Macro-F1, and 7.13% in AUC using fine-tuning. Experiments on multilabel arrhythmia classification, conducted as an additional downstream task, further demonstrate the generalization ability of MaeFE. Overall, the experimental results show that MaeFE is efficient and robust in downstream tasks: by overcoming the scarcity of labeled data, it surpasses other self-supervised learning methods and achieves satisfactory performance, making it well suited to practical applications.
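To make the pretraining procedure described in the abstract concrete, below is a minimal sketch of MTAE-style masked pretraining; it is not the authors' released implementation. A 12-lead ECG is split into temporal patches, a random subset of patches is masked, a Transformer encoder embeds only the visible patches, and a lightweight decoder reconstructs the masked ones. The patch length, masking ratio, layer counts, and model width are illustrative assumptions, not values reported in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ECGMaskedAutoencoder(nn.Module):
    """Toy MTAE-style masked autoencoder for 12-lead ECG (illustrative only)."""

    def __init__(self, n_leads=12, patch_len=50, n_patches=100,
                 dim=128, mask_ratio=0.75):
        super().__init__()
        self.patch_len, self.n_patches, self.mask_ratio = patch_len, n_patches, mask_ratio
        self.patch_embed = nn.Linear(n_leads * patch_len, dim)      # patch -> token
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        enc = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        dec = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=4)
        self.decoder = nn.TransformerEncoder(dec, num_layers=2)
        self.head = nn.Linear(dim, n_leads * patch_len)             # token -> patch

    def forward(self, ecg):
        # ecg: (batch, n_leads, n_patches * patch_len)
        B = ecg.size(0)

        def gather(x, idx):
            return torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))

        # Split the signal along time into non-overlapping patches, flatten each.
        patches = ecg.unfold(2, self.patch_len, self.patch_len)     # (B, leads, P, len)
        patches = patches.permute(0, 2, 1, 3).reshape(B, self.n_patches, -1)
        tokens = self.patch_embed(patches) + self.pos_embed

        # Randomly keep a fraction of the patches; the rest are masked out.
        n_keep = int(self.n_patches * (1 - self.mask_ratio))
        perm = torch.rand(B, self.n_patches, device=ecg.device).argsort(dim=1)
        keep_idx, mask_idx = perm[:, :n_keep], perm[:, n_keep:]
        visible = gather(tokens, keep_idx)

        # Encode the visible tokens only; decode with learned mask tokens appended.
        encoded = self.encoder(visible)
        masked = self.mask_token.expand(B, mask_idx.size(1), -1)
        decoded = self.decoder(torch.cat([encoded, masked], dim=1))

        # Reconstruction loss is computed on the masked patches only.
        pred = self.head(decoded[:, n_keep:])
        target = gather(patches, mask_idx)
        return F.mse_loss(pred, target)


# One pretraining step on an unlabeled batch (e.g., 10 s of 12-lead ECG at 500 Hz).
model = ECGMaskedAutoencoder()
loss = model(torch.randn(8, 12, 5000))
loss.backward()

After pretraining, the decoder would be discarded and the pretrained encoder reused with a small classification head, either frozen (linear probe) or fine-tuned end-to-end, mirroring the downstream arrhythmia-classification setup described above. An MLAE-style variant would mask whole leads instead of time patches, and MLTAE would combine both masking schemes.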
Pages: 15