Dimensional emotion recognition based on two stream CNN fusion attention mechanism

Times Cited: 1
Authors
Qi, Mei [1 ]
Zhang, Hairong [1 ]
Affiliations
[1] Anhui Open Univ, Sch Informat & Construct Engn, 3 JiuHuashan Rd, Hefei 230022, Anhui, Peoples R China
Keywords
two-stream CNN; shared and global attention mechanism; dimensional emotion
DOI
10.1117/12.2678902
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
Discrete emotion recognition cannot depict continuous changes in emotion. To capture high-level dimensional emotion information, this paper integrates attention mechanisms into a two-stream CNN and proposes a Two-Stream Convolutional Neural Network with Shared and Global Attention (TSCNN-SGA). TSCNN-SGA uses two CNNs of identical structure to extract features from the static stream (expression images) and the dynamic stream (expression sequences). First, within the two-stream feature extraction network, the output feature maps of the previous convolutional layer group are cascaded to compute shared attention weights for the next layer group. Second, the shared-attention feature maps of the two streams are cascaded, and position-wise attention weights are mapped onto the cascaded feature map to weight it. Finally, the shared weight matrices at the convolutional end of TSCNN-SGA and the global attention mechanism applied after the two-stream cascade jointly produce deep spatio-temporal features, which are fed into a bidirectional long short-term memory (BiLSTM) network to obtain the final dimensional emotion predictions. Compared with several baseline methods, the proposed method achieves an average concordance correlation coefficient (CCC) of 0.576 in the arousal-valence space, indicating that it can effectively recognize dimensional emotions.
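The record itself contains no code, so the following is a minimal PyTorch sketch of the pipeline the abstract describes: one convolutional group per stream, a shared spatial attention map computed from the cascaded outputs of the previous group and applied identically to both streams, global position-wise attention over the cascaded two-stream features, and a BiLSTM regression head. All names (SharedAttention, GlobalAttention, TSCNNSGA), the channel widths, the number of conv groups, and the 1x1-convolution form of both attention blocks are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SharedAttention(nn.Module):
    """Computes one spatial attention map from the cascaded outputs of the
    previous conv group of both streams and applies the same (shared)
    weights to each stream. Hypothetical 1x1-conv formulation."""
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=1),  # cascade -> 1-channel map
            nn.Sigmoid(),
        )

    def forward(self, static_feat, dynamic_feat):
        attn = self.weight(torch.cat([static_feat, dynamic_feat], dim=1))
        return static_feat * attn, dynamic_feat * attn


class GlobalAttention(nn.Module):
    """Position-wise attention over the cascaded two-stream feature map;
    returns an attention-pooled feature vector."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, _, _ = x.shape
        weights = torch.softmax(self.score(x).view(b, 1, -1), dim=-1)
        return (x.view(b, c, -1) * weights).sum(-1)       # (B, C)


class TSCNNSGA(nn.Module):
    """Sketch of TSCNN-SGA: two identically structured streams with shared
    attention between conv groups, global attention after the cascade,
    and a BiLSTM that regresses arousal and valence per frame."""
    def __init__(self, in_ch=3, ch=64, hidden=128):
        super().__init__()
        def group(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True), nn.MaxPool2d(2),
            )
        self.static1, self.static2 = group(in_ch, ch), group(ch, ch)
        self.dynamic1, self.dynamic2 = group(in_ch, ch), group(ch, ch)
        self.shared_attn = SharedAttention(ch)
        self.global_attn = GlobalAttention(2 * ch)
        self.bilstm = nn.LSTM(2 * ch, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)              # arousal, valence

    def forward(self, static_seq, dynamic_seq):
        # both inputs: (B, T, C, H, W); encode each frame, then model time
        b, t = static_seq.shape[:2]
        feats = []
        for i in range(t):
            s = self.static1(static_seq[:, i])
            d = self.dynamic1(dynamic_seq[:, i])
            s, d = self.shared_attn(s, d)                 # shared weights for both streams
            s, d = self.static2(s), self.dynamic2(d)
            fused = torch.cat([s, d], dim=1)              # cascade the two streams
            feats.append(self.global_attn(fused))
        seq = torch.stack(feats, dim=1)                   # (B, T, 2*ch)
        out, _ = self.bilstm(seq)
        return self.head(out)                             # (B, T, 2)
```

For a batch of T-frame clips, calling the model with two tensors of shape (B, T, 3, 64, 64) yields per-frame arousal-valence predictions of shape (B, T, 2).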
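The reported metric, the concordance correlation coefficient, has the standard definition CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2). A minimal helper for scoring one emotion dimension (the function name is mine, not from the paper):

```python
import torch

def concordance_cc(pred, gold):
    """Concordance correlation coefficient between two 1-D signals."""
    pred, gold = pred.flatten().float(), gold.flatten().float()
    mx, my = pred.mean(), gold.mean()
    vx, vy = pred.var(unbiased=False), gold.var(unbiased=False)
    cov = ((pred - mx) * (gold - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

The average reported in the abstract would then be the mean of this score over the arousal and valence dimensions.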
Pages: 8