DAEEGViT: A domain adaptive vision transformer framework for EEG cognitive state identification

Cited: 1
Authors
Ouyang, Yu [1 ]
Liu, Yang [1 ]
Shan, Liang [2 ]
Jia, Zhe [1 ]
Qian, Dongguan [1 ]
Zeng, Tao [3 ]
Zeng, Hong [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp Sci & Technol, Hangzhou 310018, Peoples R China
[2] Zhejiang Hosp, Informat Ctr, Hangzhou 310007, Peoples R China
[3] Nanchang Univ, Affiliated Hosp 2, Dept Urol, Nanchang 330006, Peoples R China
Keywords
Electroencephalography; Cognitive state identification; Self-attention; Domain adaptation; Vision transformer; PERFORMANCE
DOI
10.1016/j.bspc.2024.107019
CLC number
R318 [Biomedical Engineering]
Discipline code
0831
Abstract
Cognitive state identification based on electroencephalography (EEG) not only helps clinicians diagnose various cognitive dysfunctions as early as possible, but also enables timely detection of the cognitive overload that arises when subjects perform cognitive tasks, which can prevent many accidents. Deep learning (DL) plays a vital role in EEG-based cognitive state recognition. However, because of the significant individual differences in EEG, designing a robust deep learning model that identifies cognitive states promptly and effectively remains a great challenge. In this study, a domain adaptive vision transformer framework, DAEEGViT, is proposed for cognitive state identification. In DAEEGViT, an MBConv module with a self-attention mechanism is introduced to extract global and local EEG features simultaneously, followed by a domain adaptation method to improve transfer learning capability and a classifier to predict the cognitive state. Two self-collected datasets (FDD and ECED) and two public datasets (SEED and SEED-IV) were used in our experiments to identify fatigue, cognitive impairment, and emotion states. Compared with the standard ViT model, in the intra-subject experiments the accuracy of DAEEGViT improved by 25.28%, 54.70%, and 43.70% on FDD, SEED, and SEED-IV, respectively. In the cross-subject experiments, it improved by 1.23%, 1.85%, and 0.97% on the three ECED tasks (happiness, neutral, and sadness), and by 14.33%, 4.72%, and 5.35% on FDD, SEED, and SEED-IV, respectively. These results demonstrate that DAEEGViT delivers a marked improvement in cognitive state recognition.
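The abstract outlines a pipeline built from an MBConv module with self-attention for joint local/global feature extraction, a domain-adaptation step, and a classifier. The sketch below is a minimal PyTorch illustration of that kind of pipeline, not the authors' implementation: the layer sizes, the 2D channel-by-time input format, and the use of a simple linear-kernel MMD penalty as the domain-adaptation term are all assumptions made for the example.

```python
# Minimal sketch of a DAEEGViT-style pipeline: MBConv (local features),
# self-attention (global features), a classifier head, and an MMD-style
# domain-adaptation penalty. Shapes and the MMD choice are illustrative
# assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class MBConv(nn.Module):
    """Inverted-bottleneck convolution block (local feature extractor)."""
    def __init__(self, ch, expansion=4):
        super().__init__()
        hidden = ch * expansion
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.GELU(),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),  # depthwise conv
            nn.BatchNorm2d(hidden), nn.GELU(),
            nn.Conv2d(hidden, ch, 1, bias=False), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection

class AttentionBlock(nn.Module):
    """Multi-head self-attention over the flattened feature map (global features)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C)
        q = self.norm(tokens)
        out, _ = self.attn(q, q, q)
        return (tokens + out).transpose(1, 2).reshape(b, c, h, w)

def mmd_loss(src, tgt):
    """Linear-kernel MMD: squared distance between batch mean embeddings."""
    return (src.mean(0) - tgt.mean(0)).pow(2).sum()

class DAEEGViTSketch(nn.Module):
    def __init__(self, in_ch=1, dim=32, n_classes=3):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, dim, 3, padding=1)
        self.local = MBConv(dim)
        self.global_attn = AttentionBlock(dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        feat = self.global_attn(self.local(self.stem(x)))  # local then global features
        feat = feat.mean(dim=(2, 3))                        # global average pooling
        return self.head(feat), feat

# Usage: classification loss on labelled source EEG plus an MMD penalty that
# pulls source and target feature distributions together (cross-subject setting).
model = DAEEGViTSketch()
src = torch.randn(8, 1, 16, 64)   # hypothetical (channels x time) EEG "images"
tgt = torch.randn(8, 1, 16, 64)
logits, f_src = model(src)
_, f_tgt = model(tgt)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 3, (8,))) + 0.1 * mmd_loss(f_src, f_tgt)
```

In the cross-subject setting described in the abstract, the domain-adaptation term would typically be computed between features of the labelled source subjects and the unlabelled target subject; the 0.1 weighting here is an arbitrary placeholder.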
Pages: 11
Related papers
34 records in total
  • [1] Accelerating Domain Adaptation with Cascaded Adaptive Vision Transformer
    Jiang, Qilin
    Cui, Chaoran
    Zhang, Chunyun
    Zhen, Yongrui
    Gong, Shuai
    Liu, Ziyi
    Meng, Fan'an
    Zhao, Hongyan
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT 1, 2025, 15031 : 209 - 221
  • [2] Body Part-Level Domain Alignment for Domain-Adaptive Person Re-Identification With Transformer Framework
    Wang, Yiming
    Qi, Guanqiu
    Li, Shuang
    Chai, Yi
    Li, Huafeng
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 3321 - 3334
  • [3] Hybrid Vision Transformer for Domain Adaptable Person Re-identification
    Waseem, Muhammad Danish
    Tahir, Muhammad Atif
    Durrani, Muhammad Nouman
    ADVANCES IN COMPUTATIONAL COLLECTIVE INTELLIGENCE (ICCCI 2021), 2021, 1463 : 114 - 122
  • [4] Vision Transformer Framework Approach for Yellow Nail Syndrome Disease Identification
    Roy, Vikas Kumar
    Thakur, Vasu
    Nijhawan, Rahul
    PROCEEDINGS OF SECOND INTERNATIONAL CONFERENCE ON SUSTAINABLE EXPERT SYSTEMS (ICSES 2021), 2022, 351 : 413 - 425
  • [5] A Cross-domain Vision Transformer Based Framework for Baggage Threat Classification
    Nasim, Ammara
    Khan, Zawar
    Hassan, Taimur
    Jawed, Soyiba
    Akram, Muhammad Usman
    Zeb, Jahan
    2024 16TH INTERNATIONAL CONFERENCE ON COMPUTER AND AUTOMATION ENGINEERING, ICCAE 2024, 2024, : 493 - 497
  • [6] Prompting and Tuning: A Two-Stage Unsupervised Domain Adaptive Person Re-identification Method on Vision Transformer Backbone
    Yu, Shengming
    Dou, Zhaopeng
    Wang, Shengjin
    TSINGHUA SCIENCE AND TECHNOLOGY, 2023, 28 (04): : 799 - 810
  • [7] MDL-NAS: A Joint Multi-domain Learning Framework for Vision Transformer
    Wang, Shiguang
    Xie, Tao
    Cheng, Jian
    Zhang, Xingcheng
    Liu, Haijun
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 20094 - 20104
  • [8] Unsupervised Domain Adaptive Person Re-Identification Method Based on Transformer
    Yan, Xiai
    Ding, Shengkai
    Zhou, Wei
    Shi, Weiqi
    Tian, Hua
    ELECTRONICS, 2022, 11 (19)
  • [9] An adaptive deep-learning load forecasting framework by integrating transformer and domain knowledge
    Gao, Jiaxin
    Chen, Yuntian
    Hu, Wenbo
    Zhang, Dongxiao
    ADVANCES IN APPLIED ENERGY, 2023, 10
  • [10] EEG-FCV: An EEG-Based Functional Connectivity Visualization Framework for Cognitive State Evaluation
    Zeng, Hong
    Jin, Yanping
    Wu, Qi
    Pan, Deng
    Xu, Feifan
    Zhao, Yue
    Hu, Hua
    Kong, Wanzeng
    FRONTIERS IN PSYCHIATRY, 2022, 13