A Multimodal Sentiment Analysis Method Based on Fuzzy Attention Fusion

Cited by: 0
Authors
Zhi, Yuxing [1 ]
Li, Junhuai [1 ]
Wang, Huaijun [1 ]
Chen, Jing [1 ]
Wei, Wei [1 ]
Affiliations
[1] Xian Univ Technol, Sch Comp Sci & Engn, Xian 710048, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sentiment analysis; Contrastive learning; Task analysis; Fuzzy systems; Data models; Uncertainty; Semantics; Attention mechanism; fuzzy c-means (FCM); multimodal sentiment analysis (MSA); representation learning;
DOI
10.1109/TFUZZ.2024.3434614
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Affective analysis is a technology that aims to understand human sentiment states, and it is widely applied in human-computer interaction and social sentiment analysis. Compared to unimodal analysis, multimodal sentiment analysis (MSA) focuses on the complementary information and differences across modalities, which can better represent the sentiment humans actually express. Existing MSA methods usually ignore the ambiguity of multimodal data and the uncertain influence of redundant features on sentiment discriminability. To address these issues, we propose a fuzzy attention fusion-based MSA method, called FFMSA. FFMSA alleviates the heterogeneity of multimodal data through shared and private subspaces, and resolves ambiguity using a fuzzy attention mechanism based on continuous-valued decision making, in order to obtain accurate sentiment features for downstream tasks. The private subspace refines the latent features within each modality through constraints on their uniqueness, while the shared subspace learns common features using a nonparametric independence criterion algorithm. By constructing sample pairs for unsupervised contrastive learning, we use fuzzy c-means to model uncertainty, constraining the similarity between similar samples to strengthen the expression of shared features. Furthermore, we adopt a multiangle modeling approach to capture the consistency and complementarity of the modalities, dynamically adjusting the interaction between different modalities through a fuzzy attention mechanism to achieve comprehensive sentiment fusion. Experimental results on two datasets demonstrate that FFMSA outperforms state-of-the-art approaches in MSA and emotion recognition, achieving binary sentiment classification accuracies of 85.8% and 86.4% on CMU-MOSI and CMU-MOSEI, respectively.
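The abstract names fuzzy c-means (FCM) as the tool for modeling uncertainty over contrastive sample pairs. As a hedged illustration of the underlying idea only (not the paper's implementation — the function name `fcm_memberships` and the fuzziness exponent `m` are illustrative assumptions), the sketch below computes standard FCM soft memberships, which assign each sample a graded degree of belonging to every cluster center rather than a hard label:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-12):
    """Soft cluster memberships, as in standard fuzzy c-means.

    u[i, k] = 1 / sum_j (d(x_i, c_k) / d(x_i, c_j))^(2/(m-1)),
    so each row of u lies in [0, 1] and sums to 1 across centers.
    m > 1 controls fuzziness; m -> 1 approaches hard assignment.
    """
    # Pairwise Euclidean distances: shape (n_samples, n_centers)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    power = 2.0 / (m - 1.0)
    # ratio[i, k, j] = d[i, k] / d[i, j]; summing over j gives the denominator
    ratio = (d[:, :, None] / d[:, None, :]) ** power
    return 1.0 / ratio.sum(axis=2)

X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
C = np.array([[0.0, 0.0], [5.0, 5.0]])
U = fcm_memberships(X, C)
# Each row of U sums to 1; a sample sitting on a center gets
# membership ~1 for that center, while the midpoint sample [1, 1]
# receives graded memberships reflecting its ambiguity.
```

Such graded memberships are one plausible way to express the "continuous-valued decision making" the abstract attributes to the fuzzy attention mechanism: an ambiguous sample contributes to several clusters at once instead of being forced into a single hard assignment.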
Pages: 5886-5898
Page count: 13