DAEEGViT: A domain adaptive vision transformer framework for EEG cognitive state identification

Cited by: 1
|
Authors
Ouyang, Yu [1 ]
Liu, Yang [1 ]
Shan, Liang [2 ]
Jia, Zhe [1 ]
Qian, Dongguan [1 ]
Zeng, Tao [3 ]
Zeng, Hong [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp Sci & Technol, Hangzhou 310018, Peoples R China
[2] Zhejiang Hosp, Informat Ctr, Hangzhou 310007, Peoples R China
[3] Nanchang Univ, Affiliated Hosp 2, Dept Urol, Nanchang 330006, Peoples R China
Keywords
Electroencephalography; Cognitive state identification; Self-attention; Domain adaptation; Vision transformer; PERFORMANCE;
DOI
10.1016/j.bspc.2024.107019
CLC Number
R318 [Biomedical Engineering];
Subject Classification Code
0831 ;
Abstract
Cognitive state identification based on electroencephalography (EEG) not only helps diagnose various cognitive dysfunctions as early as possible in clinical settings, but also enables timely detection of cognitive overload in subjects performing cognitive tasks, which could prevent many accidents. Deep learning (DL) plays a vital role in EEG-based cognitive state recognition. However, due to significant individual differences in EEG, designing a robust deep learning model that identifies cognitive states promptly and effectively remains a great challenge. In this study, a domain adaptive vision transformer framework, DAEEGViT, is proposed for cognitive state identification. In DAEEGViT, an MBConv module with a self-attention mechanism is introduced to extract the global and local features of EEG simultaneously, followed by a domain adaptation method to improve transfer learning capability and a classifier to predict the cognitive state. Two self-collected datasets (FDD and ECED) and two public datasets (SEED and SEED-IV) were used in our experiments to identify fatigue, cognitive impairment, and emotion states. Compared with the standard ViT model, in the intra-subject experiments the accuracy of DAEEGViT improved by 25.28%, 54.70%, and 43.70% on FDD, SEED, and SEED-IV, respectively. In the cross-subject experiments, it improved by 1.23%, 1.85%, and 0.97% on the three tasks of ECED (happiness, neutral, and sadness), and by 14.33%, 4.72%, and 5.35% on FDD, SEED, and SEED-IV, respectively. The results show that DAEEGViT achieves a marked improvement in cognitive state recognition.
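The abstract states that a domain adaptation method improves cross-subject transfer, but this record does not specify which alignment loss is used. A common choice in EEG cross-subject transfer is the maximum mean discrepancy (MMD), which penalizes the distance between source- and target-domain feature distributions in a kernel space. The NumPy sketch below is a generic illustration under that assumption, not the paper's confirmed method; the function names `rbf_kernel` and `mmd` are ours.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two feature sets,
    # computed from pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd(source, target, sigma=1.0):
    # Biased empirical MMD^2 between source- and target-domain
    # feature distributions; 0 when the distributions coincide.
    k_ss = rbf_kernel(source, source, sigma).mean()
    k_tt = rbf_kernel(target, target, sigma).mean()
    k_st = rbf_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Hypothetical usage: features extracted from a source subject's EEG
# and a new target subject's EEG; a transfer framework would add this
# value to the classification loss to align the two domains.
rng = np.random.default_rng(0)
src_feats = rng.normal(size=(40, 8))      # 40 windows, 8-dim features
tgt_feats = src_feats + 3.0               # simulated domain shift
print(mmd(src_feats, src_feats))          # identical domains -> 0
print(mmd(src_feats, tgt_feats))          # shifted domain -> larger
```

In training, this loss term would be minimized jointly with the classifier's loss, pulling the target subject's feature distribution toward the source's so the classifier generalizes across subjects.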
Pages: 11
Related Papers
34 items total
  • [31] Investigating convolutional and transformer-based models for classifying Mild Cognitive Impairment using 2D spectral images of resting-state EEG
    Seker, Mesut
    Ozerdem, Mehmet Sirac
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2025, 105
  • [32] ViTBayesianNet: An adaptive deep bayesian network-aided alzheimer disease detection framework with vision transformer-based residual densenet for feature extraction using MRI images
    Mohan, Revathi
    Arunachalam, Rajesh
    Verma, Neha
    Mali, Shital
    NETWORK-COMPUTATION IN NEURAL SYSTEMS, 2024,
  • [33] E3GCAPS: Efficient EEG-based multi-capsule framework with dynamic attention for cross-subject cognitive state detection
    Zhao, Yue
    Dai, Guojun
    Fang, Xin
    Wu, Zhengxuan
    Xia, Nianzhang
    Jin, Yanping
    Zeng, Hong
    CHINA COMMUNICATIONS, 2022, 19 (02) : 73 - 89