Cross-Modal Guiding Neural Network for Multimodal Emotion Recognition From EEG and Eye Movement Signals

Citations: 0
Authors
Fu, Baole [1 ,2 ]
Chu, Wenhao [1 ,2 ]
Gu, Chunrui [1 ,2 ]
Liu, Yinhua [1 ,2 ,3 ]
Affiliations
[1] Qingdao Univ, Inst Future, Qingdao 266071, Peoples R China
[2] Qingdao Univ, Sch Automat, Qingdao 266071, Peoples R China
[3] Qingdao Univ, Shandong Prov Key Lab Ind Control Technol, Qingdao 266071, Peoples R China
Keywords
Feature extraction; Electroencephalography; Emotion recognition; Brain modeling; Videos; Convolution; Accuracy; Multimodal emotion recognition; electroencephalogram (EEG); convolutional neural network (CNN); cross-modal guidance; feature selection
DOI: 10.1109/JBHI.2024.3419043
Chinese Library Classification: TP [Automation technology; computer technology]
Discipline code: 0812
Abstract
Multimodal emotion recognition research is gaining attention because of the emerging trend of integrating information from different sensory modalities to improve performance. Electroencephalogram (EEG) signals are considered objective indicators of emotion and provide precise insights, despite the complexity of collecting them. In contrast, eye movement signals are more susceptible to environmental and individual differences but are convenient to collect. Conventional emotion recognition methods typically use separate models for different modalities, potentially overlooking their inherent connections. This study introduces a cross-modal guiding neural network designed to fully leverage the strengths of both modalities. The network includes a dual-branch feature extraction module that simultaneously extracts features from EEG and eye movement signals, and a feature guidance module that uses EEG features to direct eye movement feature extraction, reducing the impact of subjective factors. A feature reweighting module further explores emotion-related features within the eye movement signals, improving emotion classification accuracy. Experiments on both the SEED-IV dataset and our collected dataset confirm the model's strong performance.
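The abstract does not give implementation details, but the described pipeline (dual-branch features, EEG-guided eye-movement features, then reweighting) can be sketched with plain NumPy. Everything here is an assumption for illustration: the feature dimensions, the use of a sigmoid gate for "guidance", the softmax-based "reweighting", and the random weight matrices are all hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Dual-branch features (shapes are illustrative, not from the paper)
eeg_feat = rng.standard_normal((4, 16))   # batch of 4 samples, 16-dim EEG features
eye_feat = rng.standard_normal((4, 16))   # matching eye-movement features

# Feature guidance: EEG features produce a sigmoid gate that modulates the
# eye-movement branch -- one plausible reading of "EEG features direct
# eye movement feature extraction"
W_g = rng.standard_normal((16, 16)) * 0.1
gate = 1.0 / (1.0 + np.exp(-(eeg_feat @ W_g)))   # values in (0, 1)
eye_guided = eye_feat * gate

# Feature reweighting: softmax scores emphasize some feature dimensions
scores = softmax(eye_guided, axis=-1)
eye_reweighted = eye_guided * scores

# Fuse both branches and classify into 4 emotion classes (SEED-IV uses 4)
fused = np.concatenate([eeg_feat, eye_reweighted], axis=-1)   # shape (4, 32)
W_c = rng.standard_normal((32, 4)) * 0.1
pred = (fused @ W_c).argmax(axis=-1)   # predicted class per sample
```

In the actual model these modules would be learned CNN layers trained end to end; the sketch only shows how guided gating and reweighting compose.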
Pages: 5865-5876 (12 pages)
Related papers
50 records in total
  • [1] A novel feature fusion network for multimodal emotion recognition from EEG and eye movement signals
    Fu, Baole
    Gu, Chunrui
    Fu, Ming
    Xia, Yuxiao
    Liu, Yinhua
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [2] Cross-modal credibility modelling for EEG-based multimodal emotion recognition
    Zhang, Yuzhe
    Liu, Huan
    Wang, Di
    Zhang, Dalin
    Lou, Tianyu
    Zheng, Qinghua
    Quek, Chai
    JOURNAL OF NEURAL ENGINEERING, 2024, 21 (02)
  • [3] Multimodal Emotion Recognition from Eye Image, Eye Movement and EEG Using Deep Neural Networks
    Guo, Jiang-Jian
    Zhou, Rong
    Zhao, Li-Ming
    Lu, Bao-Liang
    2019 41ST ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2019, : 3071 - 3074
  • [4] A multimodal shared network with a cross-modal distribution constraint for continuous emotion recognition
    Li, Chiqin
    Xie, Lun
    Shao, Xingmao
    Pan, Hang
    Wang, Zhiliang
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [5] MMDA: A Multimodal and Multisource Domain Adaptation Method for Cross-Subject Emotion Recognition From EEG and Eye Movement Signals
    Jimenez-Guarneros, Magdiel
    Fuentes-Pineda, Gibran
    Grande-Barreto, Jonas
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024,
  • [6] A novel feature fusion network for multimodal emotion recognition from EEG and eye movement signals (vol 17, 1287377, 2023)
    Fu, Baole
    Gu, Chunrui
    Fu, Ming
    Xia, Yuxiao
    Liu, Yinhua
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [7] Cross-Modal Dynamic Transfer Learning for Multimodal Emotion Recognition
    Hong, Soyeon
    Kang, Hyeoungguk
    Cho, Hyunsouk
    IEEE ACCESS, 2024, 12 : 14324 - 14333
  • [8] A cross-modal fusion network based on graph feature learning for multimodal emotion recognition
    Cao Xiaopeng
    Zhang Linying
    Chen Qiuxian
    Ning Hailong
    Dong Yizhuo
    The Journal of China Universities of Posts and Telecommunications, 2024, 31 (06) : 16 - 25
  • [9] Emotion recognition using cross-modal attention from EEG and facial expression
    Cui, Rongxuan
    Chen, Wanzhong
    Li, Mingyang
    KNOWLEDGE-BASED SYSTEMS, 2024, 304
  • [10] FUNCTIONAL EMOTION TRANSFORMER FOR EEG-ASSISTED CROSS-MODAL EMOTION RECOGNITION
    Jiang, Wei-Bang
    Li, Ziyi
    Zheng, Wei-Long
    Lu, Bao-Liang
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 1841 - 1845