Enhancing Representation Learning of EEG Data with Masked Autoencoders

Cited: 0
Authors
Zhou, Yifei [1 ]
Liu, Sitong [1 ]
Affiliations
[1] George Washington Univ, Washington, DC 20052 USA
Keywords
EEG; Gaze estimation; Self-supervised pre-training; Masked autoencoders
DOI
10.1007/978-3-031-61572-6_7
CLC number
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning has proven to be a powerful training paradigm for representation learning. In this study, we design a masked autoencoder (MAE) to guide deep learning models in learning electroencephalography (EEG) signal representations. Our MAE consists of an encoder and a decoder. A certain proportion of the input EEG signals is randomly masked, and the MAE is trained to recover the masked portions. After this self-supervised pre-training, the encoder is fine-tuned on downstream tasks. We evaluate our MAE on the EEGEyeNet gaze estimation task and find that it is an effective brain-signal learner that also significantly improves learning efficiency: compared to a model trained from scratch, the pre-trained model reaches equal performance in one-third of the training time and surpasses it within half the training time. Our study shows that self-supervised learning is as promising a research direction for EEG-based applications as it is in other fields (natural language processing, computer vision, robotics, etc.), and we therefore expect foundation models to succeed in the EEG domain.
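The abstract describes a standard masked-autoencoder pipeline: randomly mask a proportion of the input, encode only the visible part, decode to reconstruct the masked part, and keep the encoder for fine-tuning. The following PyTorch sketch illustrates that pipeline under stated assumptions; all names and hyperparameters (EEGMae, mask_ratio, layer sizes, the 128-channel/500-sample segment shape) are illustrative placeholders, not the authors' implementation.

import torch
import torch.nn as nn

class EEGMae(nn.Module):
    # Minimal masked autoencoder over EEG segments shaped (batch, time, channels).
    def __init__(self, n_channels=128, max_len=500, d_model=128, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(n_channels, d_model)                 # per-time-step embedding
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))   # learned positions
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))  # placeholder for masked steps
        enc = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=4)
        dec = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec, num_layers=2)
        self.head = nn.Linear(d_model, n_channels)                  # reconstruct raw signal

    def forward(self, x):
        b, t, _ = x.shape
        tokens = self.embed(x) + self.pos[:, :t]
        # Randomly keep (1 - mask_ratio) of the time steps; the rest are masked.
        n_keep = int(t * (1 - self.mask_ratio))
        keep = torch.rand(b, t, device=x.device).argsort(dim=1)[:, :n_keep]
        idx = keep.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        latent = self.encoder(torch.gather(tokens, 1, idx))         # encode visible steps only
        # Fill masked positions with the shared mask token, then decode the full sequence.
        full = self.mask_token.expand(b, t, -1).clone()
        full.scatter_(1, idx, latent)
        recon = self.head(self.decoder(full + self.pos[:, :t]))
        # MSE reconstruction loss, computed on the masked positions only.
        masked = torch.ones(b, t, device=x.device)
        masked.scatter_(1, keep, 0.0)
        return (((recon - x) ** 2).mean(dim=-1) * masked).sum() / masked.sum()

# Illustrative pre-training step:
model = EEGMae()
x = torch.randn(8, 500, 128)   # a batch of hypothetical EEG segments
loss = model(x)
loss.backward()

For fine-tuning on the EEGEyeNet gaze estimation task, the decoder and reconstruction head would be discarded and a small regression head attached to the encoder output to predict gaze coordinates.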
Pages: 88-100
Page count: 13
Related papers
50 records in total
  • [21] Audiovisual Masked Autoencoders
    Georgescu, Mariana-Iuliana
    Fonseca, Eduardo
    Ionescu, Radu Tudor
    Lucic, Mario
    Schmid, Cordelia
    Arnab, Anurag
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023: 16098-16108
  • [22] Gaussian Masked Autoencoders
    Rajasegaran, Jathushan
    Chen, Xinlei
    Li, Ruilong
    Feichtenhofer, Christoph
    Malik, Jitendra
    Ginosar, Shiry
    arXiv preprint
  • [23] Supervised Representation Learning: Transfer Learning with Deep Autoencoders
    Zhuang, Fuzhen
    Cheng, Xiaohu
    Luo, Ping
    Pan, Sinno Jialin
    He, Qing
    PROCEEDINGS OF THE TWENTY-FOURTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI), 2015: 4119-4125
  • [24] Sparse Representation Learning of Data by Autoencoders with L1/2 Regularization
    Li, F.
    Zurada, J. M.
    Wu, W.
    NEURAL NETWORK WORLD, 2018, 28(02): 133-147
  • [25] AdaMAE: Adaptive Masking for Efficient Spatiotemporal Learning with Masked Autoencoders
    Bandara, Wele Gedara Chaminda
    Patel, Naman
    Gholami, Ali
    Nikkhah, Mehdi
    Agrawal, Motilal
    Patel, Vishal M.
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 14507-14517
  • [26] SemMAE: Semantic-Guided Masking for Learning Masked Autoencoders
    Li, Gang
    Zheng, Heliang
    Liu, Daqing
    Wang, Chaoyue
    Su, Bing
    Zheng, Changwen
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [27] Masked Autoencoders for Point Cloud Self-supervised Learning
    Pang, Yatian
    Wang, Wenxiao
    Tay, Francis E. H.
    Liu, Wei
    Tian, Yonghong
    Yuan, Li
    COMPUTER VISION - ECCV 2022, PT II, 2022, 13662: 604-621
  • [28] Masked Contrastive Representation Learning for Reinforcement Learning
    Zhu, Jinhua
    Xia, Yingce
    Wu, Lijun
    Deng, Jiajun
    Zhou, Wengang
    Qin, Tao
    Liu, Tie-Yan
    Li, Houqiang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45(03): 3421-3433
  • [29] Contrastive Masked Graph Autoencoders for Spatial Transcriptomics Data Analysis
    Fang, Donghai
    Gao, Yichen
    Wang, Zhaoying
    Zhu, Fangfang
    Min, Wenwen
    BIOINFORMATICS RESEARCH AND APPLICATIONS, PT I, ISBRA 2024, 2024, 14954: 76-88
  • [30] MCMAE: Masked Convolution Meets Masked Autoencoders
    Gao, Peng
    Ma, Teli
    Li, Hongsheng
    Lin, Ziyi
    Dai, Jifeng
    Qiao, Yu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022