Cross-Modal Guiding Neural Network for Multimodal Emotion Recognition From EEG and Eye Movement Signals

Times Cited: 0
Authors
Fu, Baole [1 ,2 ]
Chu, Wenhao [1 ,2 ]
Gu, Chunrui [1 ,2 ]
Liu, Yinhua [1 ,2 ,3 ]
Affiliations
[1] Qingdao Univ, Inst Future, Qingdao 266071, Peoples R China
[2] Qingdao Univ, Sch Automat, Qingdao 266071, Peoples R China
[3] Qingdao Univ, Shandong Prov Key Lab Ind Control Technol, Qingdao 266071, Peoples R China
Keywords
Feature extraction; Electroencephalography; Emotion recognition; Brain modeling; Videos; Convolution; Accuracy; Multimodal emotion recognition; electroencephalogram (EEG); convolutional neural network (CNN); cross-modal guidance; feature selection
DOI
10.1109/JBHI.2024.3419043
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Multimodal emotion recognition is attracting growing attention because integrating information from different sensory modalities can improve recognition performance. Electroencephalogram (EEG) signals are considered objective indicators of emotion and provide precise insights, although their acquisition is complex. Eye movement signals, by contrast, are convenient to collect but more susceptible to environmental and individual differences. Conventional emotion recognition methods typically use separate models for each modality, potentially overlooking their inherent connections. This study introduces a cross-modal guiding neural network designed to fully leverage the strengths of both modalities. The network comprises a dual-branch feature extraction module that extracts features from EEG and eye movement signals simultaneously, a feature guidance module that uses EEG features to direct eye movement feature extraction and thereby reduce the impact of subjective factors, and a feature reweighting module that emphasizes emotion-related components of the eye movement features to improve classification accuracy. Experiments on both the SEED-IV dataset and our self-collected dataset confirm the strong performance of the model.
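Below is a minimal PyTorch sketch intended only to make the three modules named in the abstract concrete. The module boundaries follow the abstract (dual-branch extraction, EEG-guided weighting of the eye movement branch, and a reweighting stage), but every class name, layer size, gating mechanism, and feature dimension here is an illustrative assumption, not the authors' implementation; the 310-dimensional EEG and 31-dimensional eye movement inputs merely mirror common SEED-IV feature sizes.

import torch
import torch.nn as nn

class DualBranchExtractor(nn.Module):
    # Parallel branches extract features from each modality independently.
    def __init__(self, eeg_dim=310, eye_dim=31, hidden=128):
        super().__init__()
        self.eeg_branch = nn.Sequential(nn.Linear(eeg_dim, hidden), nn.ReLU())
        self.eye_branch = nn.Sequential(nn.Linear(eye_dim, hidden), nn.ReLU())

    def forward(self, eeg, eye):
        return self.eeg_branch(eeg), self.eye_branch(eye)

class FeatureGuidance(nn.Module):
    # Assumed mechanism: EEG features produce a sigmoid gate that scales
    # (guides) the eye movement features, damping subject-specific noise.
    def __init__(self, hidden=128):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(hidden, hidden), nn.Sigmoid())

    def forward(self, eeg_feat, eye_feat):
        return eye_feat * self.gate(eeg_feat)

class FeatureReweighting(nn.Module):
    # Squeeze-and-excitation-style reweighting (an assumption) to emphasize
    # emotion-related components of the guided eye movement features.
    def __init__(self, hidden=128, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(hidden, hidden // reduction), nn.ReLU(),
            nn.Linear(hidden // reduction, hidden), nn.Sigmoid())

    def forward(self, feat):
        return feat * self.fc(feat)

class CrossModalGuidingNet(nn.Module):
    def __init__(self, eeg_dim=310, eye_dim=31, hidden=128, n_classes=4):
        super().__init__()
        self.extract = DualBranchExtractor(eeg_dim, eye_dim, hidden)
        self.guide = FeatureGuidance(hidden)
        self.reweight = FeatureReweighting(hidden)
        self.classifier = nn.Linear(2 * hidden, n_classes)  # fuse both modalities

    def forward(self, eeg, eye):
        eeg_feat, eye_feat = self.extract(eeg, eye)
        eye_feat = self.reweight(self.guide(eeg_feat, eye_feat))
        return self.classifier(torch.cat([eeg_feat, eye_feat], dim=-1))

# Toy forward pass: a batch of 8 samples with assumed SEED-IV-like dimensions
# (310-d EEG differential-entropy features, 31-d eye features, 4 classes).
model = CrossModalGuidingNet()
logits = model(torch.randn(8, 310), torch.randn(8, 31))
print(logits.shape)  # torch.Size([8, 4])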
Pages: 5865-5876
Number of Pages: 12
Related Papers
(50 in total)
  • [21] EmotionKD: A Cross-Modal Knowledge Distillation Framework for Emotion Recognition Based on Physiological Signals
    Liu, Yucheng
    Jia, Ziyu
    Wang, Haichao
    Proceedings of the 31st ACM International Conference on Multimedia (MM 2023), 2023: 6122-6131
  • [22] Hypercomplex Multimodal Emotion Recognition from EEG and Peripheral Physiological Signals
    Lopez, Eleonora
    Chiarantano, Eleonora
    Grassucci, Eleonora
    Comminiello, Danilo
    2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), 2023
  • [23] Multimodal Paradigm for Emotion Recognition Based on EEG Signals
    Masood, Naveen
    Farooq, Humera
    Human-Computer Interaction: Theories, Methods, and Human Issues (HCI International 2018), Pt I, 2018, 10901: 419-428
  • [24] Multimodal emotion recognition for the fusion of speech and EEG signals
    Ma J.
    Sun Y.
    Zhang X.
    Journal of Xidian University (Xi'an Dianzi Keji Daxue Xuebao), 2019, 46(01): 143-150
  • [25] Cross-modal contrastive learning for multimodal sentiment recognition
    Yang, Shanliang
    Cui, Lichao
    Wang, Lei
    Wang, Tao
    Applied Intelligence, 2024, 54(05): 4260-4276
  • [28] MemoCMT: multimodal emotion recognition using cross-modal transformer-based feature fusion
    Khan, Mustaqeem
    Tran, Phuong-Nam
    Pham, Nhat Truong
    El Saddik, Abdulmotaleb
    Othmani, Alice
    Scientific Reports, 2025, 15(01)
  • [29] Effects of eye and hand movement on cross-modal memory
    Fujiki, Akiko
    Memory & Cognition, 2023, 51(06): 1444-1460