A Multimodal Sentiment Analysis Method Based on Fuzzy Attention Fusion

Cited: 0
Authors
Zhi, Yuxing [1]
Li, Junhuai [1]
Wang, Huaijun [1]
Chen, Jing [1]
Wei, Wei [1]
Affiliations
[1] Xian Univ Technol, Sch Comp Sci & Engn, Xian 710048, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sentiment analysis; Contrastive learning; Task analysis; Fuzzy systems; Data models; Uncertainty; Semantics; Attention mechanism; fuzzy c-means (FCM); multimodal sentiment analysis (MSA); representation learning;
DOI
10.1109/TFUZZ.2024.3434614
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Affective analysis is a technology that aims to understand human sentiment states, and it is widely applied in human-computer interaction and social sentiment analysis. Compared to unimodal approaches, multimodal sentiment analysis (MSA) focuses on the complementary information and the differences among multiple modalities, which better represent the actual sentiment expressed by humans. Existing MSA methods usually ignore the ambiguity of multimodal data and the uncertain influence of redundant features on sentiment discriminability. To address these issues, we propose a fuzzy attention fusion-based MSA method, called FFMSA. FFMSA alleviates the heterogeneity of multimodal data through shared and private subspaces, and resolves ambiguity using a fuzzy attention mechanism based on continuous-value decision making, in order to obtain accurate sentiment features for downstream tasks. The private subspace refines the latent features within each single modality through constraints on their uniqueness, while the shared subspace learns common features using a nonparametric independence criterion algorithm. By constructing sample pairs for unsupervised contrastive learning, we use fuzzy c-means (FCM) to model uncertainty, constraining the similarity between similar samples to strengthen the expression of shared features. Furthermore, we adopt a multi-angle modeling approach to capture the consistency and complementarity across modalities, dynamically adjusting the interaction between different modalities through a fuzzy attention mechanism to achieve comprehensive sentiment fusion. Experimental results on two datasets demonstrate that FFMSA outperforms state-of-the-art approaches in MSA and emotion recognition, achieving binary sentiment classification accuracy of 85.8% on CMU-MOSI and 86.4% on CMU-MOSEI.
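The abstract pairs two fuzzy components: fuzzy c-means memberships that model sample-level uncertainty for the contrastive constraint, and a fuzzy attention mechanism that makes continuous rather than hard cross-modal weighting decisions. The following is a minimal PyTorch sketch of both ideas, an illustration under stated assumptions rather than the paper's implementation; the names fcm_memberships and FuzzyAttentionFusion are hypothetical, and the sigmoid gate is only one plausible reading of "continuous-value decision making".

```python
# Hypothetical sketch, not the FFMSA reference code: standard FCM soft
# memberships plus a sigmoid-gated cross-modal attention step.
import torch
import torch.nn as nn

def fcm_memberships(x, centers, m=2.0, eps=1e-8):
    """Fuzzy c-means memberships u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).

    x:       (N, D) sample features
    centers: (C, D) cluster centers
    returns: (N, C) soft memberships; each row sums to 1
    """
    d = torch.cdist(x, centers).clamp_min(eps)                       # (N, C)
    ratio = (d.unsqueeze(2) / d.unsqueeze(1)) ** (2.0 / (m - 1.0))   # (N, C, C)
    return 1.0 / ratio.sum(dim=2)                                    # FCM update

class FuzzyAttentionFusion(nn.Module):
    """Cross-modal attention whose weights are independent continuous
    sigmoid gates in [0, 1] (normalized afterward) instead of a hard
    softmax choice over keys."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, query_mod, key_mod):
        # query_mod: (B, Tq, D), e.g. text; key_mod: (B, Tk, D), e.g. audio
        logits = self.q(query_mod) @ self.k(key_mod).transpose(1, 2) * self.scale
        gate = torch.sigmoid(logits)                     # fuzzy pairwise weights
        gate = gate / gate.sum(dim=-1, keepdim=True).clamp_min(1e-8)
        return gate @ self.v(key_mod)                    # fused query-side features
```

In this reading, the FCM memberships could weight how strongly a contrastive pair is pulled together (uncertain samples contribute less), while the gated attention lets every cross-modal pair carry partial weight instead of competing through a single softmax; the paper may realize both components differently.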
Pages: 5886 - 5898
Page count: 13
Related Papers
50 records in total
  • [21] Multimodal Sentiment Analysis Based on Composite Hierarchical Fusion
    Lei, Yu
    Qu, Keshuai
    Zhao, Yifan
    Han, Qing
    Wang, Xuguang
    COMPUTER JOURNAL, 2024, 67 (06) : 2230 - 2245
  • [22] Multimodal sentiment analysis based on fusion methods: A survey
    Zhu, Linan
    Zhu, Zhechao
    Zhang, Chenwei
    Xu, Yifei
    Kong, Xiangjie
    INFORMATION FUSION, 2023, 95 : 306 - 325
  • [23] Survey of Sentiment Analysis Algorithms Based on Multimodal Fusion
    Guo, Xu
    Mairidan, Wushouer
    Gulanbaier, Tuerhong
    COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (02) : 1 - 18
  • [24] Multi-attention Fusion for Multimodal Sentiment Classification
    Li, Guangmin
    Zeng, Xin
    Chen, Chi
    Zhou, Long
    PROCEEDINGS OF 2024 ACM ICMR WORKSHOP ON MULTIMODAL VIDEO RETRIEVAL, ICMR-MVR 2024, 2024, : 1 - 7
  • [25] Gated attention fusion network for multimodal sentiment classification
    Du, Yongping
    Liu, Yang
    Peng, Zhi
    Jin, Xingnan
    KNOWLEDGE-BASED SYSTEMS, 2022, 240
  • [26] A multimodal fusion network with attention mechanisms for visual-textual sentiment analysis
    Gan, Chenquan
    Fu, Xiang
    Feng, Qingdong
    Zhu, Qingyi
    Cao, Yang
    Zhu, Ye
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 242
  • [27] Bimodal Fusion Network with Multi-Head Attention for Multimodal Sentiment Analysis
    Zhang, Rui
    Xue, Chengrong
    Qi, Qingfu
    Lin, Liyuan
    Zhang, Jing
    Zhang, Lun
    APPLIED SCIENCES-BASEL, 2023, 13 (03)
  • [28] Multimodal Sentiment Analysis Based on Bidirectional Mask Attention Mechanism
    Zhang Y.
    Zhang H.
    Liu Y.
    Liang K.
    Wang Y.
    DATA ANALYSIS AND KNOWLEDGE DISCOVERY, 2023, 7 (04) : 46 - 55
  • [29] Multimodal Sentiment Analysis Based on Cross-Modal Attention and Gated Cyclic Hierarchical Fusion Networks
    Quan, Zhibang
    Sun, Tao
    Su, Mengli
    Wei, Jishu
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2022, 2022
  • [30] AB-GRU: An attention-based bidirectional GRU model for multimodal sentiment fusion and analysis
    Wu, Jun
    Zheng, Xinli
    Wang, Jiangpeng
    Wu, Junwei
    Wang, Ji
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2023, 20 (10) : 18523 - 18544