Deep Feature Extraction and Attention Fusion for Multimodal Emotion Recognition

Cited by: 5
Authors
Yang, Zhiyi [1 ]
Li, Dahua [1 ]
Hou, Fazheng [1 ]
Song, Yu [1 ]
Gao, Qiang [2 ]
Affiliations
[1] Tianjin University of Technology, School of Electrical Engineering and Automation, Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, Tianjin 300384, People's Republic of China
[2] Tianjin University of Technology, Maritime College, Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, Tianjin 300384, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
EEG; eye movement; interactive attention; self-attention; emotion recognition
DOI
10.1109/TCSII.2023.3318814
CLC Classification
TM [Electrical Technology]; TN [Electronic and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Recently, electroencephalogram (EEG)-based multimodal emotion recognition has become a research hotspot in affective computing. However, existing methods tend to ignore the interaction between EEG features and the features of other modalities. In this brief, we propose a novel model termed EEANet (EEG and eye movement Attention Network) to capture cross-modal correlations at the feature level. Differential entropy (DE) features and 31 eye movement features are extracted from the pre-processed EEG and eye movement signals, and two feedforward encoders then learn deep representations of each modality. An interactive attention layer learns multimodal complementary information and semantic-level context, and a multi-head self-attention mechanism allows the model to focus on the features most discriminative for emotion classification. The model is validated on the SEED-IV dataset; EEANet significantly improves emotion recognition accuracy, achieving an average accuracy of 92.26% on the four-class task.
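The sketch below illustrates how the pipeline described in the abstract could be realized. It is a minimal reconstruction from the abstract alone, not the authors' released code: the DE helper assumes band-filtered, approximately Gaussian EEG segments, for which differential entropy reduces to 0.5 * ln(2 * pi * e * variance); the feature dimensions (310 EEG features, i.e., 62 channels x 5 frequency bands as in SEED-IV, plus 31 eye movement features), the model width, and the head count are illustrative assumptions.

# EEANet-style pipeline sketch (PyTorch); dimensions and hyperparameters are assumed.
import numpy as np
import torch
import torch.nn as nn

def differential_entropy(band_signal):
    # DE of an approximately Gaussian band-filtered segment: 0.5 * ln(2*pi*e*var).
    return 0.5 * np.log(2 * np.pi * np.e * np.var(band_signal))

class EEANetSketch(nn.Module):
    def __init__(self, eeg_dim=310, eye_dim=31, d_model=64, n_heads=4, n_classes=4):
        super().__init__()
        # Two feedforward encoders project each modality into a shared space.
        self.eeg_encoder = nn.Sequential(nn.Linear(eeg_dim, d_model), nn.ReLU())
        self.eye_encoder = nn.Sequential(nn.Linear(eye_dim, d_model), nn.ReLU())
        # Interactive attention: each modality queries the other for
        # complementary, semantic-level context information.
        self.eeg_queries_eye = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.eye_queries_eeg = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Multi-head self-attention re-weights the fused tokens toward the
        # most discriminative features, followed by a linear classifier.
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, eeg, eye):
        e = self.eeg_encoder(eeg).unsqueeze(1)   # (batch, 1, d_model)
        o = self.eye_encoder(eye).unsqueeze(1)   # (batch, 1, d_model)
        e2, _ = self.eeg_queries_eye(e, o, o)    # EEG attends to eye movement
        o2, _ = self.eye_queries_eeg(o, e, e)    # eye movement attends to EEG
        fused = torch.cat([e2, o2], dim=1)       # (batch, 2, d_model)
        f, _ = self.self_attn(fused, fused, fused)
        return self.classifier(f.mean(dim=1))    # (batch, n_classes) logits

In a SEED-IV-style setup, eeg would hold one DE value per channel-band pair and eye the 31 eye movement statistics, trained with a standard cross-entropy loss over the four emotion classes; the actual EEANet layer sizes and fusion details may differ from this sketch.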
Pages: 1526-1530
Page count: 5
Related Papers
50 records in total
  • [1] Multimodal Emotion Recognition Based on Feature Fusion
    Xu, Yurui
    Wu, Xiao
    Su, Hang
    Liu, Xiaorui
    2022 INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2022), 2022, : 7 - 11
  • [2] Feature Fusion for Multimodal Emotion Recognition Based on Deep Canonical Correlation Analysis
    Zhang, Ke
    Li, Yuanqing
    Wang, Jingyu
    Wang, Zhen
    Li, Xuelong
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1898 - 1902
  • [3] ADFF: Attention Based Deep Feature Fusion Approach for Music Emotion Recognition
    Huang, Zi
    Ji, Shulei
    Hu, Zhilan
    Cai, Chuangjian
    Luo, Jing
    Yang, Xinyu
    INTERSPEECH 2022, 2022, : 4152 - 4156
  • [4] MSER: Multimodal speech emotion recognition using cross-attention with deep fusion
    Khan, Mustaqeem
    Gueaieb, Wail
    El Saddik, Abdulmotaleb
    Kwon, Soonil
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 245
  • [5] Speech emotion recognition based on multimodal and multiscale feature fusion
    Hu, Huangshui
    Wei, Jie
    Sun, Hongyu
    Wang, Chuhang
    Tao, Shuo
    SIGNAL IMAGE AND VIDEO PROCESSING, 2025, 19 (01)
  • [6] Deep spatio-temporal feature fusion with compact bilinear pooling for multimodal emotion recognition
    Nguyen, Dung
    Nguyen, Kien
    Sridharan, Sridha
    Dean, David
    Fookes, Clinton
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2018, 174 : 33 - 42
  • [7] Audio-Video Fusion with Double Attention for Multimodal Emotion Recognition
    Mocanu, Bogdan
    Tapu, Ruxandra
    2022 IEEE 14TH IMAGE, VIDEO, AND MULTIDIMENSIONAL SIGNAL PROCESSING WORKSHOP (IVMSP), 2022
  • [8] Feature-Level Fusion of Multimodal Physiological Signals for Emotion Recognition
    Chen, Jing
    Ru, Bin
    Xu, Lixin
    Moore, Philip
    Su, Yun
    PROCEEDINGS 2015 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, 2015, : 395 - 399
  • [9] A multimodal fusion-based deep learning framework combined with keyframe extraction and spatial and channel attention for group emotion recognition from videos
    Qi, Shubao
    Liu, Baolin
    PATTERN ANALYSIS AND APPLICATIONS, 2023, 26 (03) : 1493 - 1503