A Multimodal Sentiment Analysis Method Based on Fuzzy Attention Fusion

Cited by: 0
Authors
Zhi, Yuxing [1 ]
Li, Junhuai [1 ]
Wang, Huaijun [1 ]
Chen, Jing [1 ]
Wei, Wei [1 ]
Affiliations
[1] Xian Univ Technol, Sch Comp Sci & Engn, Xian 710048, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sentiment analysis; Contrastive learning; Task analysis; Fuzzy systems; Data models; Uncertainty; Semantics; Attention mechanism; fuzzy c-means (FCM); multimodal sentiment analysis (MSA); representation learning;
DOI
10.1109/TFUZZ.2024.3434614
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Affective analysis is a technology that aims to understand human sentiment states, and it is widely applied in human-computer interaction and social sentiment analysis. Compared to unimodal analysis, multimodal sentiment analysis (MSA) focuses more on the complementary information and differences among multiple modalities, which can better represent the actual sentiment expressed by humans. Existing MSA methods usually ignore the ambiguity of multimodal data and the uncertain influence of redundant features on sentiment discriminability. To address these issues, we propose a fuzzy attention fusion-based MSA method, called FFMSA. FFMSA alleviates the heterogeneity of multimodal data through shared and private subspaces, and resolves ambiguity using a fuzzy attention mechanism based on continuous-valued decision making, in order to obtain accurate sentiment features for downstream tasks. The private subspace refines the latent features within each modality through constraints on their uniqueness, while the shared subspace learns common features using a nonparametric independence criterion algorithm. By constructing sample pairs for unsupervised contrastive learning, we use fuzzy c-means to model uncertainty and constrain the similarity between similar samples, enhancing the expression of shared features. Furthermore, we adopt a multiangle modeling approach to capture the consistency and complementarity of multiple modalities, dynamically adjusting the interaction between different modalities through a fuzzy attention mechanism to achieve comprehensive sentiment fusion. Experimental results on two datasets demonstrate that FFMSA outperforms state-of-the-art approaches in MSA and emotion recognition, achieving binary sentiment classification accuracies of 85.8% and 86.4% on CMU-MOSI and CMU-MOSEI, respectively.
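To make the two components named in the abstract more concrete, the sketch below illustrates (a) a standard fuzzy c-means membership computation of the kind that could be used to soft-weight sample pairs in contrastive learning, and (b) a simple attention-style fusion across modalities whose weights are continuous degrees rather than hard selections. The module names, feature dimensions, and softmax-based weighting are illustrative assumptions, not the authors' exact FFMSA formulation.

```python
# Illustrative sketch only: the exact FFMSA architecture is not given in this record.
import torch


def fcm_memberships(x, centers, m=2.0, eps=1e-8):
    """Fuzzy c-means membership degrees u[i, k] of samples x to given centers.

    x:       (N, D) sample features
    centers: (C, D) cluster centers (hypothetical sentiment clusters here)
    m:       fuzzifier (> 1); larger m gives softer memberships
    """
    # Pairwise distances between samples and centers, shape (N, C).
    d = torch.cdist(x, centers).clamp_min(eps)
    # Standard FCM membership: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
    ratio = (d.unsqueeze(2) / d.unsqueeze(1)) ** (2.0 / (m - 1.0))  # (N, C, C)
    return 1.0 / ratio.sum(dim=2)                                   # (N, C)


class FuzzyAttentionFusion(torch.nn.Module):
    """Toy cross-modal fusion: modalities attend over one another with
    continuous (soft) attention weights; names and dims are assumptions."""

    def __init__(self, dim=128):
        super().__init__()
        self.q = torch.nn.Linear(dim, dim)
        self.k = torch.nn.Linear(dim, dim)
        self.v = torch.nn.Linear(dim, dim)

    def forward(self, text, audio, video):
        # Stack modality features: (B, M=3, D).
        x = torch.stack([text, audio, video], dim=1)
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Continuous-valued attention over modalities (no hard selection).
        attn = torch.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
        fused = attn @ v          # (B, 3, D)
        return fused.mean(dim=1)  # (B, D) joint sentiment representation


if __name__ == "__main__":
    B, D = 8, 128
    fusion = FuzzyAttentionFusion(dim=D)
    t, a, v = (torch.randn(B, D) for _ in range(3))
    joint = fusion(t, a, v)
    # Soft memberships of fused features to 2 hypothetical sentiment clusters.
    u = fcm_memberships(joint, centers=torch.randn(2, D))
    print(joint.shape, u.shape, u.sum(dim=1))  # memberships sum to 1 per sample
```

In such a setup, the memberships could serve as soft weights when pulling together "similar" sample pairs during contrastive training, so that ambiguous samples contribute less sharply than confident ones; again, this is a reading of the abstract rather than the published method.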
Pages: 5886-5898
Number of pages: 13
Related Papers
50 items in total
  • [31] SCANET: Improving multimodal representation and fusion with sparse- and cross-attention for multimodal sentiment analysis
    Wang, Hao
    Yang, Mingchuan
    Li, Zheng
    Liu, Zhenhua
    Hu, Jie
    Fu, Ziwang
    Liu, Feng
    COMPUTER ANIMATION AND VIRTUAL WORLDS, 2022, 33 (3-4)
  • [32] Prompt Link Multimodal Fusion in Multimodal Sentiment Analysis
    Zhu, Kang
    Fan, Cunhang
    Tao, Jianhua
    Lv, Zhao
    INTERSPEECH 2024, 2024, : 4668 - 4672
  • [33] Multimodal Sentiment Analysis Representations Learning via Contrastive Learning with Condense Attention Fusion
    Wang, Huiru
    Li, Xiuhong
    Ren, Zenyu
    Wang, Min
    Ma, Chunming
    SENSORS, 2023, 23 (05)
  • [34] TeFNA: Text-centered fusion network with crossmodal attention for multimodal sentiment analysis
    Huang, Changqin
    Zhang, Junling
    Wu, Xuemei
    Wang, Yi
    Li, Ming
    Huang, Xiaodi
    KNOWLEDGE-BASED SYSTEMS, 2023, 269
  • [35] AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model
    Ji Mingyu
    Zhou Jiawei
    Wei Ning
    PLOS ONE, 2022, 17 (09):
  • [36] Multimodal Sentiment Analysis of Government Information Comments Based on Contrastive Learning and Cross-Attention Fusion Networks
    Mu, Guangyu
    Chen, Chuanzhi
    Li, Xiurong
    Li, Jiaxue
    Ju, Xiaoqing
    Dai, Jiaxiu
    IEEE ACCESS, 2024, 12 : 165525 - 165538
  • [37] Trustworthy Multimodal Fusion for Sentiment Analysis in Ordinal Sentiment Space
    Xie, Zhuyang
    Yang, Yan
    Wang, Jie
    Liu, Xiaorong
    Li, Xiaofan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (08) : 7657 - 7670
  • [38] Emoji multimodal microblog sentiment analysis based on mutual attention mechanism
    Lou, Yinxia
    Zhou, Junxiang
    Zhou, Jun
    Ji, Donghong
    Zhang, Qing
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [39] BiMSA: Multimodal Sentiment Analysis Based on BiGRU and Bidirectional Interactive Attention
    Wang, Qi
    Yu, Haizheng
    Wang, Yao
    Bian, Hong
    EUROPEAN JOURNAL ON ARTIFICIAL INTELLIGENCE, 2025,
  • [40] Multimodal sentiment analysis based on multi-head attention mechanism
    Xi, Chen
    Lu, Guanming
    Yan, Jingjie
    ICMLSC 2020: PROCEEDINGS OF THE 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND SOFT COMPUTING, 2020, : 34 - 39