CFDA-CSF: A Multi-Modal Domain Adaptation Method for Cross-Subject Emotion Recognition

Citations: 5
Authors
Jimenez-Guarneros, Magdiel [1 ]
Fuentes-Pineda, Gibran [1 ]
Affiliation
[1] Univ Nacl Autonoma Mexico, Dept Comp Sci, Inst Invest Matemat Aplicadas & Sistemas IIMAS, Coyoacan 04510, Mexico
Keywords
Electroencephalography; Emotion recognition; Correlation; Task analysis; Brain modeling; Proposals; Training; Deep learning; electroencephalogram; emotion recognition; eye tracking; multi-modal domain adaptation;
DOI
10.1109/TAFFC.2024.3357656
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-modal classifiers for emotion recognition have become prominent, as the emotional states of subjects can be more comprehensively inferred from Electroencephalogram (EEG) signals and eye movements. However, existing classifiers experience a decrease in performance due to the distribution shift when applied to new users. Unsupervised domain adaptation (UDA) emerges as a solution to address the distribution shift between subjects by learning a shared latent feature space. Nevertheless, most UDA approaches focus on a single modality, while existing multi-modal approaches do not consider that fine-grained structures should also be explicitly aligned and the learned feature space must be discriminative. In this paper, we propose Coarse and Fine-grained Distribution Alignment with Correlated and Separable Features (CFDA-CSF), which performs a coarse alignment over the global feature space, and a fine-grained alignment between modalities from each domain distribution. At the same time, the model learns intra-domain correlated features, while a separable feature space is encouraged on new subjects. We conduct an extensive experimental study across the available sessions on three public datasets for multi-modal emotion recognition: SEED, SEED-IV, and SEED-V. Our proposal effectively improves the recognition performance in every session, achieving an average accuracy of 93.05%, 85.87% and 91.20% for SEED; 85.72%, 89.60%, and 86.88% for SEED-IV; and 88.49%, 91.37% and 91.57% for SEED-V.
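The abstract names two levels of distribution alignment: a coarse alignment over the global (concatenated) feature space and a fine-grained alignment between modalities per domain. As an illustrative sketch only, not the authors' implementation, one plausible instantiation uses an RBF-kernel maximum mean discrepancy (MMD) for both levels; the feature dimensions, `gamma`, and random data below are hypothetical, and the paper's intra-domain correlation and target-separability terms are omitted:

```python
import numpy as np

def rbf_mmd(x, y, gamma=0.05):
    """Biased squared MMD estimate between samples x and y under an RBF kernel."""
    def k(a, b):
        # Pairwise squared Euclidean distances via broadcasting
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
# Hypothetical per-modality embeddings: 64-dim EEG, 16-dim eye movements;
# the target (new-subject) domain is mean-shifted to mimic distribution shift
src_eeg, tgt_eeg = rng.normal(0.0, 1.0, (100, 64)), rng.normal(0.5, 1.0, (100, 64))
src_eye, tgt_eye = rng.normal(0.0, 1.0, (100, 16)), rng.normal(0.5, 1.0, (100, 16))

# Coarse alignment: source vs. target on the concatenated multi-modal features
coarse = rbf_mmd(np.hstack([src_eeg, src_eye]), np.hstack([tgt_eeg, tgt_eye]))

# Fine-grained alignment: each modality matched across domains on its own
fine = rbf_mmd(src_eeg, tgt_eeg) + rbf_mmd(src_eye, tgt_eye)

alignment_loss = coarse + fine  # would be added to the classification loss
```

With a nonlinear kernel the two terms are not redundant: the coarse term also penalizes mismatched cross-modal interactions, while the fine-grained term forces each modality's marginal to align on its own.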
Pages: 1502 - 1513
Number of pages: 12
Related Papers
50 in total
  • [41] EEG-based cross-subject emotion recognition using multi-source domain transfer learning
    Quan, Jie
    Li, Ying
    Wang, Lingyue
    He, Renjie
    Yang, Shuo
    Guo, Lei
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 84
  • [42] Multi-Modal Domain Adaptation Variational Auto-encoder for EEG-Based Emotion Recognition
    Wang, Yixin
    Qiu, Shuang
    Li, Dan
    Du, Changde
    Lu, Bao-Liang
    He, Huiguang
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2022, 9 (09) : 1612 - 1626
  • [43] Cross-Subject Emotion Recognition Based on Domain Similarity of EEG Signal Transfer Learning
    Ma, Yuliang
    Zhao, Weicheng
    Meng, Ming
    Zhang, Qizhong
    She, Qingshan
    Zhang, Jianhai
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2023, 31 : 936 - 943
  • [44] A Multi-modal Visual Emotion Recognition Method to Instantiate an Ontology
    Heredia, Juan Pablo A.
    Cardinale, Yudith
    Dongo, Irvin
    Diaz-Amado, Jose
    PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON SOFTWARE TECHNOLOGIES (ICSOFT), 2021, : 453 - 464
  • [45] A cross-scenario and cross-subject domain adaptation method for driving fatigue detection
    Luo, Yun
    Liu, Wei
    Li, Hanqi
    Lu, Yong
    Lu, Bao-Liang
    JOURNAL OF NEURAL ENGINEERING, 2024, 21 (04)
  • [46] A deep subdomain associate adaptation network for cross-session and cross-subject EEG emotion recognition
    Meng, Ming
    Hu, Jiahao
    Gao, Yunyuan
    Kong, Wanzeng
    Luo, Zhizeng
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2022, 78
  • [47] Cross-Subject Multimodal Emotion Recognition Based on Hybrid Fusion
    Cimtay, Yucel
    Ekmekcioglu, Erhan
    Caglar-Ozhan, Seyma
    IEEE ACCESS, 2020, 8 : 168865 - 168878
  • [48] Multi-Modal Domain Adaptation for Fine-Grained Action Recognition
    Munro, Jonathan
    Damen, Dima
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 119 - 129
  • [49] Gusa: Graph-Based Unsupervised Subdomain Adaptation for Cross-Subject EEG Emotion Recognition
    Li, Xiaojun
    Chen, C. L. Philip
    Chen, Bianna
    Zhang, Tong
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2024, 15 (03) : 1451 - 1462
  • [50] Dynamic Threshold Distribution Domain Adaptation Network: A Cross-Subject Fatigue Recognition Method Based on EEG Signals
    Ma, Chao
    Zhang, Meng
    Sun, Xinlin
    Wang, He
    Gao, Zhongke
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2024, 16 (01) : 190 - 201