CFDA-CSF: A Multi-Modal Domain Adaptation Method for Cross-Subject Emotion Recognition

Cited by: 5
Authors
Jimenez-Guarneros, Magdiel [1 ]
Fuentes-Pineda, Gibran [1 ]
Affiliations
[1] Univ Nacl Autonoma Mexico, Dept Comp Sci, Inst Invest Matemat Aplicadas & Sistemas IIMAS, Coyoacan 04510, Mexico
Keywords
Electroencephalography; Emotion recognition; Correlation; Task analysis; Brain modeling; Proposals; Training; Deep learning; electroencephalogram; eye tracking; multi-modal domain adaptation
DOI
10.1109/TAFFC.2024.3357656
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Multi-modal classifiers for emotion recognition have become prominent, as the emotional states of subjects can be inferred more comprehensively from Electroencephalogram (EEG) signals and eye movements. However, existing classifiers suffer a performance drop when applied to new users because of the distribution shift between subjects. Unsupervised domain adaptation (UDA) addresses this shift by learning a shared latent feature space. Nevertheless, most UDA approaches focus on a single modality, while existing multi-modal approaches neither align fine-grained structures explicitly nor ensure that the learned feature space is discriminative. In this paper, we propose Coarse and Fine-grained Distribution Alignment with Correlated and Separable Features (CFDA-CSF), which performs a coarse alignment over the global feature space and a fine-grained alignment between the modalities of each domain distribution. At the same time, the model learns intra-domain correlated features, while a separable feature space is encouraged on new subjects. We conduct an extensive experimental study across the available sessions of three public datasets for multi-modal emotion recognition: SEED, SEED-IV, and SEED-V. Our proposal effectively improves recognition performance in every session, achieving average accuracies of 93.05%, 85.87%, and 91.20% on SEED; 85.72%, 89.60%, and 86.88% on SEED-IV; and 88.49%, 91.37%, and 91.57% on SEED-V.
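The abstract describes four cooperating training objectives: a coarse alignment of the fused source/target feature space, a fine-grained alignment at the modality level, intra-domain correlation between EEG and eye-movement features, and a separable (discriminative) feature space on the unlabeled target subject. The sketch below shows how such a composite loss could be wired together in PyTorch. It is illustrative only: the RBF-MMD alignment terms, the cosine-based correlation term, the entropy-based separability term, and every module name, dimension, and loss weight are assumptions, and reading the fine-grained alignment as per-modality cross-domain matching is one interpretation of the abstract's wording, not the authors' exact method.

# Minimal sketch of the composite objective suggested by the abstract.
# All loss choices, names, dimensions, and weights are assumptions.
import torch
import torch.nn.functional as F

def rbf_mmd(x, y, sigma=1.0):
    # Biased squared MMD between two feature batches under an RBF kernel.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def correlation_loss(f_eeg, f_eye):
    # Pull paired EEG/eye features together within one domain
    # (negative mean cosine similarity of paired samples).
    return -F.cosine_similarity(f_eeg, f_eye, dim=1).mean()

def entropy_loss(logits):
    # Low prediction entropy encourages class-separable target features.
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()

def cfda_csf_step(enc_eeg, enc_eye, clf, src, tgt, w):
    # One training step: source supervision + coarse (fused) alignment
    # + fine-grained (per-modality) alignment + intra-domain correlation
    # + separability on the unlabeled target subject.
    (eeg_s, eye_s, y_s), (eeg_t, eye_t) = src, tgt
    fs_eeg, fs_eye = enc_eeg(eeg_s), enc_eye(eye_s)
    ft_eeg, ft_eye = enc_eeg(eeg_t), enc_eye(eye_t)
    fs = torch.cat([fs_eeg, fs_eye], dim=1)  # fused multi-modal features
    ft = torch.cat([ft_eeg, ft_eye], dim=1)
    l_cls = F.cross_entropy(clf(fs), y_s)
    l_coarse = rbf_mmd(fs, ft)
    l_fine = rbf_mmd(fs_eeg, ft_eeg) + rbf_mmd(fs_eye, ft_eye)
    l_corr = correlation_loss(fs_eeg, fs_eye) + correlation_loss(ft_eeg, ft_eye)
    l_sep = entropy_loss(clf(ft))
    return l_cls + w[0] * l_coarse + w[1] * l_fine + w[2] * l_corr + w[3] * l_sep

# Smoke test with random tensors: 310-d differential-entropy EEG features
# (62 channels x 5 bands, as in SEED), a hypothetical 33-d eye-movement
# feature vector, and 3 emotion classes as in SEED.
enc_eeg, enc_eye = torch.nn.Linear(310, 64), torch.nn.Linear(33, 64)
clf = torch.nn.Linear(128, 3)
src = (torch.randn(16, 310), torch.randn(16, 33), torch.randint(0, 3, (16,)))
tgt = (torch.randn(16, 310), torch.randn(16, 33))
loss = cfda_csf_step(enc_eeg, enc_eye, clf, src, tgt, w=(1.0, 1.0, 0.1, 0.1))
loss.backward()

A real implementation would replace the random tensors with per-subject SEED batches and tune the loss weights per dataset; the paper's own alignment and correlation criteria may differ from the stand-ins used here.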
Pages: 1502-1513
Page count: 12
Related papers
50 in total
  • [1] Domain Adaptation for Cross-Subject Emotion Recognition by Subject Clustering
    Liu, Jin
    Shen, Xinke
    Song, Sen
    Zhang, Dan
    2021 10TH INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING (NER), 2021, : 904 - 908
  • [2] Easy Domain Adaptation for cross-subject multi-view emotion recognition
    Chen, Chuangquan
    Vong, Chi-Man
    Wang, Shitong
    Wang, Hongtao
    Pang, Miaoqi
    KNOWLEDGE-BASED SYSTEMS, 2022, 239
  • [3] Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition
    Dong, Yihang
    Chen, Xuhang
    Shen, Yanyan
    Ng, Michael Kwok-Po
    Qian, Tao
    Wang, Shuqiang
    NEURAL COMPUTING FOR ADVANCED APPLICATIONS, NCAA 2024, PT III, 2025, 2183 : 178 - 192
  • [4] Multi-modal supervised domain adaptation with a multi-level alignment strategy and consistent decision boundaries for cross-subject emotion recognition from EEG and eye movement signals
    Jimenez-Guarneros, Magdiel
    Fuentes-Pineda, Gibran
    KNOWLEDGE-BASED SYSTEMS, 2025, 315
  • [5] Multi-source Selective Graph Domain Adaptation Network for cross-subject EEG emotion recognition
    Wang, Jing
    Ning, Xiaojun
    Xu, Wei
    Li, Yunze
    Jia, Ziyu
    Lin, Youfang
    NEURAL NETWORKS, 2024, 180
  • [6] Generator-based Domain Adaptation Method with Knowledge Free for Cross-subject EEG Emotion Recognition
    Huang, Dongmin
    Zhou, Sijin
    Jiang, Dazhi
    COGNITIVE COMPUTATION, 2022, 14 (04) : 1316 - 1327
  • [7] Multisource Associate Domain Adaptation for Cross-Subject and Cross-Session EEG Emotion Recognition
    She, Qingshan
    Zhang, Chenqi
    Fang, Feng
    Ma, Yuliang
    Zhang, Yingchun
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [8] Multi-source joint domain adaptation for cross-subject and cross-session emotion recognition from electroencephalography
    Liang, Shengjin
    Su, Lei
    Wu, Liping
    Fu, Yunfa
    FRONTIERS IN HUMAN NEUROSCIENCE, 2022, 16
  • [9] Online Cross-subject Emotion Recognition from ECG via Unsupervised Domain Adaptation
    He, Wenwen
    Ye, Yalan
    Li, Yunxia
    Pan, Tongjie
    Lu, Li
    2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC), 2021, : 1001 - 1005