Enhancing Representation Learning of EEG Data with Masked Autoencoders

Cited by: 0
Authors
Zhou, Yifei [1 ]
Liu, Sitong [1 ]
Affiliations
[1] George Washington Univ, Washington, DC 20052 USA
Keywords
EEG; Gaze estimation; Self-supervised pre-training; Masked autoencoders
DOI
10.1007/978-3-031-61572-6_7
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning has become a powerful training paradigm for representation learning. In this study, we design a masked autoencoder (MAE) to guide deep learning models in learning electroencephalography (EEG) signal representations. Our MAE consists of an encoder and a decoder. A certain proportion of the input EEG signals is randomly masked and fed to the MAE, whose goal is to recover the masked signals. After this self-supervised pre-training, the encoder is fine-tuned on downstream tasks. We evaluate our MAE on the EEGEyeNet gaze estimation task and find that it is an effective brain-signal learner that also significantly improves learning efficiency: compared to a model trained without MAE pre-training, the pre-trained model reaches equal performance in one-third of the training time and outperforms it in half the training time. Our study shows that self-supervised learning is a promising research direction for EEG-based applications, as it is in other fields (natural language processing, computer vision, robotics, etc.), and we therefore expect foundation models to succeed in the EEG domain.
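As a concrete illustration of the pre-training procedure the abstract describes (randomly mask a proportion of the EEG input, encode only the visible portion, reconstruct the masked signals, then reuse the encoder downstream), below is a minimal PyTorch sketch. The EEGMae class, the patching of trials into fixed-length time segments, the 0.5 mask ratio, and all layer sizes are illustrative assumptions for exposition, not the authors' actual architecture or hyperparameters, which are given in the full text.

```python
# Minimal masked-autoencoder sketch for patched EEG trials (assumed setup).
import torch
import torch.nn as nn


class EEGMae(nn.Module):
    def __init__(self, n_channels=128, patch_len=25, n_patches=20,
                 d_enc=128, d_dec=64, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        patch_dim = n_channels * patch_len
        # Linear patch embedding plus learned positional embeddings.
        self.embed = nn.Linear(patch_dim, d_enc)
        self.pos_enc = nn.Parameter(torch.zeros(1, n_patches, d_enc))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_enc, nhead=4, batch_first=True),
            num_layers=4)
        # Lightweight decoder sees all positions; masked slots are filled
        # with a shared learned mask token, as in standard MAEs.
        self.enc_to_dec = nn.Linear(d_enc, d_dec)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_dec))
        self.pos_dec = nn.Parameter(torch.zeros(1, n_patches, d_dec))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_dec, nhead=4, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(d_dec, patch_dim)

    def forward(self, patches):
        # patches: (batch, n_patches, n_channels * patch_len)
        B, N, _ = patches.shape
        n_keep = int(N * (1 - self.mask_ratio))
        # A random per-sample permutation decides which patches stay visible.
        noise = torch.rand(B, N, device=patches.device)
        keep_idx = noise.argsort(dim=1)[:, :n_keep]
        x = self.embed(patches) + self.pos_enc
        visible = torch.gather(
            x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        # The encoder processes only the visible (unmasked) patches.
        latent = self.encoder(visible)
        # Scatter encoded visible tokens back into a full-length sequence;
        # every other slot keeps the mask token.
        dec = self.mask_token.expand(B, N, -1).clone()
        dec.scatter_(1, keep_idx.unsqueeze(-1).expand(-1, -1, dec.size(-1)),
                     self.enc_to_dec(latent))
        recon = self.head(self.decoder(dec + self.pos_dec))
        # Reconstruction loss is computed on the masked patches only.
        mask = torch.ones(B, N, device=patches.device)
        mask.scatter_(1, keep_idx, 0.0)
        per_patch = ((recon - patches) ** 2).mean(dim=-1)
        return (per_patch * mask).sum() / mask.sum()


# One pre-training step on a random stand-in batch: 2 trials, each split
# into 20 patches of 128 channels x 25 samples. Real EEG data (e.g. from
# EEGEyeNet) would replace the random tensor.
model = EEGMae()
loss = model(torch.randn(2, 20, 128 * 25))
loss.backward()
```

After pre-training, the decoder would be discarded and the encoder fine-tuned on the downstream task (here, gaze estimation) with a task-specific head.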
Pages: 88-100
Number of pages: 13
Related Papers
50 records
  • [1] Cao, Guanqun; Jiang, Jiaqi; Bollegala, Danushka; Luo, Shan. Learn from Incomplete Tactile Data: Tactile Representation Learning with Masked Autoencoders. 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023: 10800-10805.
  • [2] Zheng, Chengbin; Yang, Zhicheng; Lu, Yang. GMAE: Representation Learning on Graph via Masked Graph Autoencoders. Proceedings of the 2024 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD 2024), 2024: 2515-2521.
  • [3] Jiang, Jincen; Lu, Xuequan; Zhao, Lizhi; Dazeley, Richard; Wang, Meili. Masked Autoencoders in 3D Point Cloud Representation Learning. IEEE Transactions on Multimedia, 2025, 27: 820-831.
  • [4] Niizumi, Daisuke; Takeuchi, Daiki; Ohishi, Yasunori; Harada, Noboru; Kashino, Kunio. Masked Spectrogram Modeling Using Masked Autoencoders for Learning General-Purpose Audio Representation. HEAR: Holistic Evaluation of Audio Representations, 2021, 166: 1-24.
  • [5] Wei, Weijie; Nejadasl, Fatemeh Karimi; Gevers, Theo; Oswald, Martin R. T-MAE: Temporal Masked Autoencoders for Point Cloud Representation Learning. Computer Vision - ECCV 2024, Part XI, 2025, 15069: 178-195.
  • [6] Chochlakis, Georgios; Lavania, Chandrashekhar; Mathur, Prashant; Han, Kyu J. Tackling Missing Modalities in Audio-Visual Representation Learning Using Masked Autoencoders. Interspeech 2024, 2024: 4678-4682.
  • [7] Yoon, Taeyoung; Kang, Daesung. Enhancing Pediatric Pneumonia Diagnosis through Masked Autoencoders. Scientific Reports, 2024, 14 (01).
  • [8] Ren, Qiang; Wang, Junli. Irrelevant Patch-Masked Autoencoders for Enhancing Vision Transformers under Limited Data. Knowledge-Based Systems, 2025, 310.
  • [9] Chen, Haijian; Zhang, Wendong; Wang, Yunbo; Yang, Xiaokang. Improving Masked Autoencoders by Learning Where to Mask. Pattern Recognition and Computer Vision (PRCV 2023), Part VIII, 2024, 14432: 377-390.