CFDA-CSF: A Multi-Modal Domain Adaptation Method for Cross-Subject Emotion Recognition

Cited by: 5
|
Authors
Jimenez-Guarneros, Magdiel [1 ]
Fuentes-Pineda, Gibran [1 ]
Affiliations
[1] Univ Nacl Autonoma Mexico, Dept Comp Sci, Inst Invest Matemat Aplicadas & Sistemas IIMAS, Coyoacan 04510, Mexico
Keywords
Electroencephalography; Emotion recognition; Correlation; Task analysis; Brain modeling; Proposals; Training; Deep learning; electroencephalogram; emotion recognition; eye tracking; multi-modal domain adaptation;
DOI
10.1109/TAFFC.2024.3357656
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-modal classifiers for emotion recognition have become prominent, as the emotional states of subjects can be more comprehensively inferred from Electroencephalogram (EEG) signals and eye movements. However, existing classifiers experience a decrease in performance due to the distribution shift when applied to new users. Unsupervised domain adaptation (UDA) emerges as a solution to address the distribution shift between subjects by learning a shared latent feature space. Nevertheless, most UDA approaches focus on a single modality, while existing multi-modal approaches do not consider that fine-grained structures should also be explicitly aligned and the learned feature space must be discriminative. In this paper, we propose Coarse and Fine-grained Distribution Alignment with Correlated and Separable Features (CFDA-CSF), which performs a coarse alignment over the global feature space, and a fine-grained alignment between modalities from each domain distribution. At the same time, the model learns intra-domain correlated features, while a separable feature space is encouraged on new subjects. We conduct an extensive experimental study across the available sessions on three public datasets for multi-modal emotion recognition: SEED, SEED-IV, and SEED-V. Our proposal effectively improves the recognition performance in every session, achieving an average accuracy of 93.05%, 85.87% and 91.20% for SEED; 85.72%, 89.60%, and 86.88% for SEED-IV; and 88.49%, 91.37% and 91.57% for SEED-V.
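The abstract describes two alignment terms: a coarse term over the global (concatenated) feature space and a fine-grained term per modality. As an illustration only (not the authors' implementation, whose loss and network details are in the paper), the sketch below shows the general shape of such an objective using a standard RBF-kernel Maximum Mean Discrepancy as the distribution distance; the modality names, dimensions, and the choice of MMD are all assumptions.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared Maximum Mean Discrepancy with an RBF kernel -- a common
    distribution distance used for domain alignment (an assumption here,
    not necessarily the distance used by CFDA-CSF)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def alignment_loss(src, tgt):
    """Coarse + fine-grained alignment in the spirit of the abstract.
    `src` / `tgt` map modality names to feature matrices (samples x dims).
    Coarse term: distance between the concatenated (global) feature spaces.
    Fine term:   distance between source and target for each modality."""
    coarse = rbf_mmd2(np.hstack([src['eeg'], src['eye']]),
                      np.hstack([tgt['eeg'], tgt['eye']]))
    fine = sum(rbf_mmd2(src[m], tgt[m]) for m in ('eeg', 'eye'))
    return coarse + fine

# Toy check: a shifted target domain incurs a larger loss than no shift.
rng = np.random.default_rng(0)
src = {'eeg': rng.normal(0, 1, (64, 8)), 'eye': rng.normal(0, 1, (64, 4))}
tgt = {'eeg': rng.normal(1, 1, (64, 8)), 'eye': rng.normal(1, 1, (64, 4))}
shifted = alignment_loss(src, tgt)
aligned = alignment_loss(src, src)  # identical domains give zero loss
print(shifted > aligned)  # True
```

In the paper this kind of objective would be minimized jointly with the classification loss and the correlation/separability terms; the sketch only isolates the two-level alignment idea.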
Pages: 1502-1513
Page count: 12
Related Papers
50 items in total
  • [31] Hang, Wenlong; Feng, Wei; Du, Ruoyu; Liang, Shuang; Chen, Yan; Wang, Qiong; Liu, Xuejun. Cross-Subject EEG Signal Recognition Using Deep Domain Adaptation Network. IEEE ACCESS, 2019, 7: 128273-128282.
  • [32] Li, Yun-Kai; Meng, Qing-Hao; Wang, Ya-Xin; Yang, Tian-Hao; Hou, Hui-Rang. MASS: A Multisource Domain Adaptation Network for Cross-Subject Touch Gesture Recognition. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19(03): 3099-3108.
  • [33] Li, Xiang; Song, Dawei; Zhang, Peng; Zhang, Yazhou; Hou, Yuexian; Hu, Bin. Exploring EEG Features in Cross-Subject Emotion Recognition. FRONTIERS IN NEUROSCIENCE, 2018, 12.
  • [34] Pan, Tongjie; Ye, Yalan; Zhang, Yangwuyong; Xiao, Kunshu; Cai, Hecheng. Online multi-hypergraph fusion learning for cross-subject emotion recognition. INFORMATION FUSION, 2024, 108.
  • [35] Yang, Dingkang; Huang, Shuai; Liu, Yang; Zhang, Lihua. Contextual and Cross-Modal Interaction for Multi-Modal Speech Emotion Recognition. IEEE SIGNAL PROCESSING LETTERS, 2022, 29: 2093-2097.
  • [36] Hu, Hao-Yi; Zhao, Li-Ming; Liu, Yu-Zhong; Li, Hua-Liang; Lu, Bao-Liang. A Novel Experiment Setting for Cross-subject Emotion Recognition. 2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC), 2021: 6416-6419.
  • [37] Jimenez-Guarneros, Magdiel; Gomez-Gil, Pilar. Custom Domain Adaptation: A New Method for Cross-Subject, EEG-Based Cognitive Load Recognition. IEEE SIGNAL PROCESSING LETTERS, 2020, 27: 750-754.
  • [38] He, Zhipeng; Zhong, Yongshi; Pan, Jiahui. Joint Temporal Convolutional Networks and Adversarial Discriminative Domain Adaptation for EEG-Based Cross-Subject Emotion Recognition. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 3214-3218.
  • [39] Guo, Yi; Tang, Chao; Wu, Hao; Chen, Badong. GNN-based multi-source domain prototype representation for cross-subject EEG emotion recognition. NEUROCOMPUTING, 2024, 609.
  • [40] Tan, Weilong; Zhang, Hongyi; Wang, Yingbei; Wen, Weimin; Chen, Liang; Li, Han; Gao, Xingen; Zeng, Nianyin. SEDA-EEG: A semi-supervised emotion recognition network with domain adaptation for cross-subject EEG analysis. NEUROCOMPUTING, 2025, 622.