Enhancing Representation Learning of EEG Data with Masked Autoencoders

Cited: 0
Authors
Zhou, Yifei [1 ]
Liu, Sitong [1 ]
Affiliations
[1] George Washington Univ, Washington, DC 20052 USA
Keywords
EEG; Gaze estimation; Self-supervised pre-training; Masked autoencoders
DOI
10.1007/978-3-031-61572-6_7
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning has become a powerful training paradigm for representation learning. In this study, we design a masked autoencoder (MAE) that guides deep learning models to learn representations of electroencephalography (EEG) signals. Our MAE consists of an encoder and a decoder. A certain proportion of the input EEG signals is randomly masked before being fed to the MAE, whose goal is to recover the masked signals. After this self-supervised pre-training, the encoder is fine-tuned on downstream tasks. We evaluate our MAE on the EEGEyeNet gaze estimation task and find that it is an effective brain-signal learner that also significantly improves learning efficiency. Compared to a model trained from scratch without MAE pre-training, the pre-trained model reaches equal performance in one-third of the training time and outperforms it within half the training time. Our study shows that self-supervised learning is a promising research direction for EEG-based applications, as it has been in other fields (natural language processing, computer vision, robotics, etc.), and we therefore expect foundation models to succeed in the EEG domain.
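The abstract describes the pre-training recipe only at a high level. A minimal PyTorch sketch of the same idea follows, assuming a Transformer encoder/decoder, a patch length of 10 samples, and a 50% mask ratio; these are illustrative choices, since the paper's exact architecture and hyperparameters are not stated in this record.

```python
# A minimal, illustrative sketch (not the authors' released code): MAE-style
# self-supervised pre-training on raw EEG segments. The architecture sizes,
# patch length, and mask ratio below are assumptions for demonstration only.
import torch
import torch.nn as nn

class EEGMaskedAutoencoder(nn.Module):
    def __init__(self, n_channels=128, patch_len=10, d_model=64, n_heads=4,
                 enc_depth=2, dec_depth=1, max_patches=64, mask_ratio=0.5):
        super().__init__()
        self.patch_len = patch_len
        self.mask_ratio = mask_ratio
        # Each patch flattens patch_len samples across all channels into one token.
        self.embed = nn.Linear(n_channels * patch_len, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_patches, d_model))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), enc_depth)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), dec_depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.head = nn.Linear(d_model, n_channels * patch_len)

    def patchify(self, x):
        # x: (batch, channels, time) -> (batch, n_patches, channels * patch_len)
        b, c, t = x.shape
        n = t // self.patch_len
        return x.reshape(b, c, n, self.patch_len).permute(0, 2, 1, 3).reshape(b, n, -1)

    def forward(self, x):
        patches = self.patchify(x)
        tokens = self.embed(patches)
        b, n, d = tokens.shape
        tokens = tokens + self.pos[:, :n]
        # Randomly keep a subset of patch tokens; the rest are masked out.
        n_keep = int(n * (1 - self.mask_ratio))
        keep = torch.rand(b, n, device=x.device).argsort(dim=1)[:, :n_keep]
        idx = keep.unsqueeze(-1).expand(-1, -1, d)
        visible = torch.gather(tokens, 1, idx)
        latent = self.encoder(visible)          # encoder sees visible patches only
        # Rebuild the full sequence: learned mask tokens fill the masked slots.
        full = self.mask_token.expand(b, n, d).clone()
        full.scatter_(1, idx, latent)
        recon = self.head(self.decoder(full + self.pos[:, :n]))
        # Mean-squared reconstruction error, computed on masked patches only.
        mask = torch.ones(b, n, device=x.device)
        mask.scatter_(1, keep, 0.0)
        loss = ((recon - patches) ** 2).mean(-1)
        return (loss * mask).sum() / mask.sum()

# One pre-training step on a dummy batch: 8 segments, 128 channels, 500 samples.
model = EEGMaskedAutoencoder()
loss = model(torch.randn(8, 128, 500))
loss.backward()
```

After pre-training, the decoder would be discarded and a small regression head attached to the encoder for fine-tuning on EEGEyeNet gaze coordinates; the paper's exact fine-tuning setup is not reproduced here.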
Pages: 88-100
Number of pages: 13