A Multimodal Sentiment Analysis Method Based on Fuzzy Attention Fusion

Cited by: 0
Authors
Zhi, Yuxing [1 ]
Li, Junhuai [1 ]
Wang, Huaijun [1 ]
Chen, Jing [1 ]
Wei, Wei [1 ]
Affiliations
[1] Xian Univ Technol, Sch Comp Sci & Engn, Xian 710048, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sentiment analysis; Contrastive learning; Task analysis; Fuzzy systems; Data models; Uncertainty; Semantics; Attention mechanism; fuzzy c-means (FCM); multimodal sentiment analysis (MSA); representation learning;
DOI
10.1109/TFUZZ.2024.3434614
CLC classification
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Affective analysis is a technology that aims to understand human sentiment states, and it is widely applied in human-computer interaction and social sentiment analysis. Compared with unimodal analysis, multimodal sentiment analysis (MSA) focuses on the complementary information and differences across modalities, which better represent the actual sentiment expressed by humans. Existing MSA methods usually ignore the ambiguity of multimodal data and the uncertain influence of redundant features on sentiment discriminability. To address these issues, we propose a fuzzy attention fusion-based MSA method, called FFMSA. FFMSA alleviates the heterogeneity of multimodal data through shared and private subspaces, and resolves ambiguity using a fuzzy attention mechanism based on continuous-valued decision making, in order to obtain accurate sentiment features for downstream tasks. The private subspace refines the latent features within each modality through constraints on their uniqueness, while the shared subspace learns common features using a nonparametric independence criterion algorithm. Constructing sample pairs for unsupervised contrastive learning, we use fuzzy c-means to model uncertainty and constrain the similarity between similar samples, enhancing the expression of shared features. Furthermore, we adopt a multiangle modeling approach to capture the consistency and complementarity of the modalities, dynamically adjusting the interaction between different modalities through a fuzzy attention mechanism to achieve comprehensive sentiment fusion. Experimental results on two datasets demonstrate that FFMSA outperforms state-of-the-art approaches in MSA and emotion recognition, achieving binary sentiment classification accuracies of 85.8% and 86.4% on CMU-MOSI and CMU-MOSEI, respectively.
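Since the abstract leans on fuzzy c-means (FCM) to model uncertainty over contrastive sample pairs, a minimal NumPy sketch of the textbook FCM update may help. The clustering itself is the standard algorithm; how FFMSA plugs the memberships into its similarity constraint is only described at a high level in the abstract, so the function name and interface below are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means.

    Returns soft memberships U of shape (n_samples, n_clusters), whose rows
    sum to 1, and cluster centers C of shape (n_clusters, n_features).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # random fuzzy partition
    for _ in range(n_iter):
        Um = U ** m                                # fuzzified memberships
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted centers
        # Distance of every sample to every center (small eps avoids 0-division).
        dist = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-10
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        U_new = dist ** (-2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, C
```

On features from any modality, `U.argmax(axis=1)` yields hard assignments, while the soft rows of `U` could serve as the per-sample uncertainty degrees the abstract refers to.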
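The abstract also credits a fuzzy attention mechanism with "continuous-valued decision making" for cross-modal fusion, but gives no formula. The sketch below is one plausible reading under stated assumptions, not the paper's definitive mechanism: standard scaled dot-product cross-attention whose weights are gated by per-token fuzzy memberships (e.g., taken from the FCM output above) and then renormalized. All names and the gating form are assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuzzy_cross_attention(query, key_value, membership):
    """Hypothetical fuzzy-gated cross-modal attention.

    query:      (n, d) features of the target modality (e.g., text)
    key_value:  (n, d) features of the source modality (e.g., audio)
    membership: (n,)   fuzzy membership / confidence in [0, 1] per source token
    """
    d_k = query.shape[-1]
    scores = query @ key_value.T / np.sqrt(d_k)     # scaled dot-product scores
    weights = softmax(scores, axis=-1)              # attention over source tokens
    weights = weights * membership[None, :]         # damp uncertain tokens (fuzzy gate)
    weights /= weights.sum(axis=-1, keepdims=True)  # renormalize to a distribution
    return weights @ key_value                      # fused representation, (n, d)
```

The gate keeps the interaction continuous: instead of a hard keep/drop decision per source token, each token's contribution is scaled by its membership degree, which matches the abstract's framing of attention as continuous-valued decision making.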
Pages: 5886-5898
Number of pages: 13
Related Papers
50 records in total
  • [1] Attention fusion network for multimodal sentiment analysis
    Luo, Yuanyi
    Wu, Rui
    Liu, Jiafeng
    Tang, Xianglong
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (03) : 8207 - 8217
  • [2] Multimodal Sentiment Analysis Based on Attention Mechanism and Tensor Fusion Network
    Zhang, Kang
    Geng, Yushui
    Zhao, Jing
    Li, Wenxiao
    Liu, Jianxin
    2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2021, : 1473 - 1477
  • [3] Multimodal Sentiment Analysis Method Based on Cross-Modal Attention and Gated Unit Fusion Network
    Chen, Yansong
    Zhang, Le
    Zhang, Leihan
    Lü, Xueqiang
    DATA ANALYSIS AND KNOWLEDGE DISCOVERY, 2024, 8 (07) : 67 - 76
  • [4] Sentiment analysis of social media comments based on multimodal attention fusion network
    Liu, Ziyu
    Yang, Tao
    Chen, Wen
    Chen, Jiangchuan
    Li, Qinru
    Zhang, Jun
    APPLIED SOFT COMPUTING, 2024, 164
  • [5] SKEAFN: Sentiment Knowledge Enhanced Attention Fusion Network for multimodal sentiment analysis
    Zhu, Chuanbo
    Chen, Min
    Zhang, Sheng
    Sun, Chao
    Liang, Han
    Liu, Yifan
    Chen, Jincai
    INFORMATION FUSION, 2023, 100
  • [6] Graph Reconstruction Attention Fusion Network for Multimodal Sentiment Analysis
    Hu, Ronglong
    Yi, Jizheng
    Chen, Lijiang
    Jin, Ze
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2025, 21 (01) : 297 - 306
  • [7] Multimodal sentiment analysis based on multiple attention
    Wang, Hongbin
    Ren, Chun
    Yu, Zhengtao
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 140
  • [8] Dynamic Dominant Fusion Multimodal Sentiment Analysis Method Based on Autoencoder
    Yang, Xi
    Guo, Junjun
    Yan, Haining
    Tan, Kaiwen
    Xiang, Yan
    Yu, Zhengtao
    COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (06) : 180 - 187
  • [9] BAFN: Bi-Direction Attention Based Fusion Network for Multimodal Sentiment Analysis
    Tang, Jiajia
    Liu, Dongjun
    Jin, Xuanyu
    Peng, Yong
    Zhao, Qibin
    Ding, Yu
    Kong, Wanzeng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (04) : 1966 - 1978