DGFN Multimodal Emotion Analysis Model Based on Dynamic Graph Fusion Network

Cited by: 0
Authors
Li, Jingwei [1 ,2 ]
Bai, Xinyi [1 ,2 ]
Han, Zhaoming [1 ,2 ]
Affiliations
[1] Henan Inst Technol, Coll Comp Sci & Technol, Xinxiang 453003, Henan, Peoples R China
[2] Henan IoT Big Data Engn Technol Res Ctr Mfg Ind, Xinxiang, Peoples R China
Keywords
Multimodal; Graphic Fusion; Sentiment Analysis;
DOI
10.4018/IJDSST.352417
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In recent years, integrating text and image data for sentiment analysis in social networks has become a key approach. However, techniques for capturing complex cross-modal information and effectively fusing multimodal features still have shortcomings. We design a multimodal sentiment analysis model called the Dynamic Graph-Text Fusion Network (DGFN) to address these challenges. Text features are captured by leveraging the neighborhood information aggregation properties of Graph Convolutional Networks, treating words as nodes and integrating their features through their adjacency relationships. Additionally, a multi-head attention mechanism is used to extract rich semantic information from different subspaces simultaneously. For image feature extraction, a convolutional attention module is employed. Subsequently, an attention-based fusion module integrates the text and image features. Experimental results on two datasets show significant improvements in sentiment classification accuracy and F1 scores, validating the effectiveness of the proposed DGFN model.
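The abstract describes a pipeline of graph convolution over a word-adjacency graph followed by attention-based fusion of text and image features. The paper's code is not part of this record; the sketch below is a minimal illustrative approximation of those two steps in numpy, not the authors' implementation. The adjacency matrix, feature sizes, and the mean-activation attention score are all hypothetical simplifications.

```python
import numpy as np

def gcn_layer(A, X, W):
    # Graph convolution: aggregate each word node's neighborhood via the
    # symmetrically normalized adjacency matrix (with self-loops), then ReLU.
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def attention_fusion(text_feat, image_feat):
    # Attention-based fusion (simplified): weight each modality by a softmax
    # over a scalar saliency score (mean activation here), then mix.
    feats = np.stack([text_feat, image_feat])        # (2, d)
    scores = feats.mean(axis=1)                      # (2,)
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over modalities
    return weights @ feats                           # fused vector, (d,)

# Toy example: a 4-word sentence graph with 8-dim node features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # chain-shaped word adjacency
X = rng.normal(size=(4, 8))                 # word embeddings (hypothetical)
W = rng.normal(size=(8, 8))                 # GCN weight matrix
text_feat = gcn_layer(A, X, W).mean(axis=0) # mean-pooled sentence vector
image_feat = rng.normal(size=8)             # stand-in for a CNN image feature
fused = attention_fusion(text_feat, image_feat)
print(fused.shape)  # (8,)
```

In the actual model, the image branch uses a convolutional attention module and the text branch adds multi-head attention; this sketch only shows the graph aggregation and modality-weighting structure.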
Pages: 18
Related Papers
50 records
  • [21] Multimodal Emotion Recognition Based on Feature Fusion
    Xu, Yurui
    Wu, Xiao
    Su, Hang
    Liu, Xiaorui
    2022 INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2022), 2022, : 7 - 11
  • [22] A depression detection model based on multimodal graph neural network
    Xia, Yujing
    Liu, Lin
    Dong, Tao
    Chen, Juan
    Cheng, Yu
    Tang, Lin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (23) : 63379 - 63395
  • [23] Graph Fusion Network-Based Multimodal Learning for Freezing of Gait Detection
    Hu, Kun
    Wang, Zhiyong
    Martens, Kaylena A. Ehgoetz
    Hagenbuchner, Markus
    Bennamoun, Mohammed
    Tsoi, Ah Chung
    Lewis, Simon J. G.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (03) : 1588 - 1600
  • [24] A Dual-Branch Dynamic Graph Convolution Based Adaptive TransFormer Feature Fusion Network for EEG Emotion Recognition
    Sun, Mingyi
    Cui, Weigang
    Yu, Shuyue
    Han, Hongbin
    Hu, Bin
    Li, Yang
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2022, 13 (04) : 2218 - 2228
  • [25] ROI-Based Multimodal Neuroimaging Feature Fusion Method and Its Graph Neural Network Diagnostic Model
    Wang, Xuan
    Yang, Xiaopeng
    Zhang, Xiaotong
    Chen, Yang
    IEEE ACCESS, 2025, 13 : 26915 - 26926
  • [27] Topics Guided Multimodal Fusion Network for Conversational Emotion Recognition
    Yuan, Peicong
    Cai, Guoyong
    Chen, Ming
    Tang, Xiaolv
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT III, ICIC 2024, 2024, 14877 : 250 - 262
  • [28] Emotion Recognition Based on Feedback Weighted Fusion of Multimodal Emotion Data
    Wei, Wei
    Jia, Qingxuan
    Feng, Yongli
    2017 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (IEEE ROBIO 2017), 2017, : 1682 - 1687
  • [29] Feature Fusion for Multimodal Emotion Recognition Based on Deep Canonical Correlation Analysis
    Zhang, Ke
    Li, Yuanqing
    Wang, Jingyu
    Wang, Zhen
    Li, Xuelong
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1898 - 1902
  • [30] Joyful: Joint Modality Fusion and Graph Contrastive Learning for Multimodal Emotion Recognition
    Li, Dongyuan
    Wang, Yusong
    Funakoshi, Kotaro
    Okumura, Manabu
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 16051 - 16069