Enhancing Representation Learning of EEG Data with Masked Autoencoders

Cited by: 0
Authors
Zhou, Yifei [1 ]
Liu, Sitong [1 ]
Affiliations
[1] George Washington Univ, Washington, DC 20052 USA
Keywords
EEG; Gaze estimation; Self-supervised pre-training; Masked autoencoders
DOI
10.1007/978-3-031-61572-6_7
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning has become a powerful training paradigm for representation learning. In this study, we design a masked autoencoder (MAE) to guide deep learning models in learning electroencephalography (EEG) signal representations. Our MAE consists of an encoder and a decoder. A certain proportion of the input EEG signals is randomly masked and fed to the MAE, whose goal is to reconstruct the masked signals. After this self-supervised pre-training, the encoder is fine-tuned on downstream tasks. We evaluate our MAE on the EEGEyeNet gaze estimation task and find that it is an effective brain-signal learner that also significantly improves learning efficiency: compared with a model trained from scratch, the pre-trained model reaches equal performance in one third of the training time and surpasses it within half the training time. Our study shows that self-supervised learning is a promising research direction for EEG-based applications, as it has been in other fields (natural language processing, computer vision, robotics, etc.), and we therefore expect foundation models to succeed in the EEG domain.
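The abstract describes the core MAE pre-training step: randomly mask a proportion of the input EEG signal, feed the masked signal to an encoder-decoder, and compute the reconstruction loss on the masked positions. The paper's implementation details (architecture, mask ratio, masking granularity) are not given here, so the following is only a minimal sketch of that masking-and-loss setup under assumed shapes (4 channels x 100 time samples) and an assumed 50% sample-wise mask ratio; the `random_mask` helper and the zero-predicting "decoder" are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(x, mask_ratio=0.5):
    """Randomly mask a proportion of time samples in an EEG segment.

    x: array of shape (channels, time). Returns the masked copy and a
    boolean mask (True = masked, i.e. to be reconstructed).
    """
    channels, time = x.shape
    n_masked = int(mask_ratio * time)
    idx = rng.choice(time, size=n_masked, replace=False)
    mask = np.zeros(time, dtype=bool)
    mask[idx] = True
    x_masked = x.copy()
    x_masked[:, mask] = 0.0  # zero out the masked samples
    return x_masked, mask

# Toy "EEG" segment: 4 channels x 100 time samples.
x = rng.standard_normal((4, 100))
x_masked, mask = random_mask(x, mask_ratio=0.5)

# As in a masked autoencoder, the reconstruction loss is computed only
# on the masked positions. Here a trivial placeholder "decoder" that
# predicts zeros stands in for the real network.
pred = np.zeros_like(x)
loss = np.mean((pred[:, mask] - x[:, mask]) ** 2)
print(int(mask.sum()))  # 50 of 100 samples masked
```

In an actual pre-training loop, `pred` would come from the decoder, the loss would be backpropagated through encoder and decoder, and after convergence the encoder alone would be fine-tuned on the downstream gaze estimation task.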
Pages: 88-100 (13 pages)