CFDA-CSF: A Multi-Modal Domain Adaptation Method for Cross-Subject Emotion Recognition

Cited by: 5
Authors
Jimenez-Guarneros, Magdiel [1 ]
Fuentes-Pineda, Gibran [1 ]
Affiliations
[1] Univ Nacl Autonoma Mexico, Dept Comp Sci, Inst Invest Matemat Aplicadas & Sistemas IIMAS, Coyoacan 04510, Mexico
Keywords
Electroencephalography; Emotion recognition; Correlation; Task analysis; Brain modeling; Proposals; Training; Deep learning; electroencephalogram; eye tracking; multi-modal domain adaptation
DOI
10.1109/TAFFC.2024.3357656
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Multi-modal classifiers for emotion recognition have become prominent, as the emotional states of subjects can be inferred more comprehensively from electroencephalogram (EEG) signals and eye movements. However, existing classifiers suffer a drop in performance when applied to new users due to the distribution shift between subjects. Unsupervised domain adaptation (UDA) addresses this shift by learning a shared latent feature space. Nevertheless, most UDA approaches focus on a single modality, while existing multi-modal approaches neither align fine-grained structures explicitly nor ensure that the learned feature space is discriminative. In this paper, we propose Coarse and Fine-grained Distribution Alignment with Correlated and Separable Features (CFDA-CSF), which performs a coarse alignment over the global feature space and a fine-grained alignment between modalities within each domain distribution. At the same time, the model learns intra-domain correlated features, while a separable feature space is encouraged on new subjects. We conduct an extensive experimental study across the available sessions of three public datasets for multi-modal emotion recognition: SEED, SEED-IV, and SEED-V. Our proposal improves recognition performance in every session, achieving average accuracies of 93.05%, 85.87%, and 91.20% on SEED; 85.72%, 89.60%, and 86.88% on SEED-IV; and 88.49%, 91.37%, and 91.57% on SEED-V.
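The coarse vs. fine-grained alignment idea in the abstract can be sketched as a loss over source- and target-subject feature batches. This is only an illustration, not the authors' actual objective: the function names, the linear-kernel MMD as the discrepancy measure, and the equal weighting of the two terms are all assumptions.

```python
import numpy as np

def mmd_linear(x, y):
    """Linear-kernel Maximum Mean Discrepancy: squared distance between batch means."""
    return float(np.sum((x.mean(axis=0) - y.mean(axis=0)) ** 2))

def alignment_loss(src_eeg, src_eye, tgt_eeg, tgt_eye):
    """Combine a coarse (global) and a fine-grained (per-modality) alignment term."""
    # Coarse alignment: match the fused (concatenated EEG + eye) feature distributions.
    coarse = mmd_linear(np.concatenate([src_eeg, src_eye], axis=1),
                        np.concatenate([tgt_eeg, tgt_eye], axis=1))
    # Fine-grained alignment: match each modality's distribution separately.
    fine = mmd_linear(src_eeg, tgt_eeg) + mmd_linear(src_eye, tgt_eye)
    return coarse + fine
```

In a full system this loss would be minimized jointly with a classification loss on the source subjects, so that the shared latent space is both aligned across subjects and discriminative.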
Pages: 1502-1513
Page count: 12