DGFN Multimodal Emotion Analysis Model Based on Dynamic Graph Fusion Network

Cited: 0
Authors
Li, Jingwei [1 ,2 ]
Bai, Xinyi [1 ,2 ]
Han, Zhaoming [1 ,2 ]
Affiliations
[1] Henan Inst Technol, Coll Comp Sci & Technol, Xinxiang 453003, Henan, Peoples R China
[2] Henan IoT Big Data Engn Technol Res Ctr Mfg Ind, Xinxiang, Peoples R China
Keywords
Multimodal; Graphic Fusion; Sentiment Analysis
DOI
10.4018/IJDSST.352417
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
In recent years, integrating text and image data for sentiment analysis in social networks has become a key approach. However, existing techniques still fall short in capturing complex cross-modal information and effectively fusing multimodal features. To address these challenges, we design a multimodal sentiment analysis model called the Dynamic Graph-Text Fusion Network (DGFN). Text features are captured by leveraging the neighborhood-aggregation property of Graph Convolutional Networks: words are treated as nodes, and their features are integrated through their adjacency relationships. In addition, a multi-head attention mechanism extracts rich semantic information from different subspaces simultaneously. For image feature extraction, a convolutional attention module is employed. An attention-based fusion module then integrates the text and image features. Experimental results on two datasets show significant improvements in sentiment classification accuracy and F1 score, validating the effectiveness of the proposed DGFN model.
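The text-graph encoding and modality fusion described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the single normalized GCN layer, mean pooling, and scalar softmax fusion weights are simplifying assumptions, and the paper's multi-head attention and convolutional attention module are omitted for brevity.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer over the word graph: aggregate each word node's
    neighborhood via the normalized adjacency, then project.
    H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

def attention_fuse(text_feat, image_feat):
    """Attention-style fusion of pooled text and image vectors:
    a softmax over per-modality scores yields the fusion weights."""
    feats = np.stack([text_feat, image_feat])   # (2, d)
    scores = feats.mean(axis=1)                 # one scalar score per modality
    w = np.exp(scores) / np.exp(scores).sum()   # softmax fusion weights
    return w @ feats                            # weighted sum, shape (d,)

rng = np.random.default_rng(0)
n_words, d = 5, 8
A = (rng.random((n_words, n_words)) > 0.6).astype(float)
A = np.maximum(A, A.T)                          # symmetric word adjacency
H = rng.standard_normal((n_words, d))           # word embeddings (nodes)
W = rng.standard_normal((d, d))

text_feat = gcn_layer(A, H, W).mean(axis=0)     # mean-pool word nodes
image_feat = rng.standard_normal(d)             # stand-in for a CNN image feature
fused = attention_fuse(text_feat, image_feat)
print(fused.shape)  # (8,)
```

In a full model, `fused` would feed a classification head producing the sentiment label, and the fusion weights would be learned rather than derived from raw feature scores.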
Pages: 18
Related Papers
50 items
  • [41] MF-Net: a multimodal fusion network for emotion recognition based on multiple physiological signals
    Zhu, Lei
    Ding, Yu
    Huang, Aiai
    Tan, Xufei
    Zhang, Jianhai
    SIGNAL IMAGE AND VIDEO PROCESSING, 2025, 19 (01)
  • [42] Interpretable Emotion Analysis Based on Knowledge Graph and OCC Model
    Wang, Shuo
    Zhang, Yifei
    Lin, Bochen
    Li, Boxun
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 2038 - 2045
  • [43] TDFNet: Transformer-Based Deep-Scale Fusion Network for Multimodal Emotion Recognition
    Zhao, Zhengdao
    Wang, Yuhua
    Shen, Guang
    Xu, Yuezhu
    Zhang, Jiayuan
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 : 3771 - 3782
  • [44] HIERARCHICAL NETWORK BASED ON THE FUSION OF STATIC AND DYNAMIC FEATURES FOR SPEECH EMOTION RECOGNITION
    Cao, Qi
    Hou, Mixiao
    Chen, Bingzhi
    Zhang, Zheng
    Lu, Guangming
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 6334 - 6338
  • [45] Graph Neural Network-Based Speech Emotion Recognition: A Fusion of Skip Graph Convolutional Networks and Graph Attention Networks
    Wang, Han
    Kim, Deok-Hwan
    ELECTRONICS, 2024, 13 (21)
  • [46] Multimodal temporal context network for tracking dynamic changes in emotion
    Zhang, Xiufeng
    Zhou, Jinwei
    Qi, Guobin
    JOURNAL OF SUPERCOMPUTING, 2025, 81 (01):
  • [47] A multi-stage dynamical fusion network for multimodal emotion recognition
    Chen, Sihan
    Tang, Jiajia
    Zhu, Li
    Kong, Wanzeng
    COGNITIVE NEURODYNAMICS, 2023, 17 (03) : 671 - 680
  • [48] Multimodal Emotion Recognition Using a Hierarchical Fusion Convolutional Neural Network
    Zhang, Yong
    Cheng, Cheng
    Zhang, Yidie
    IEEE ACCESS, 2021, 9 : 7943 - 7951
  • [49] Combining Multimodal Features within a Fusion Network for Emotion Recognition in the Wild
    Sun, Bo
    Li, Liandong
    Zhou, Guoyan
    Wu, Xuewen
    He, Jun
    Yu, Lejun
    Li, Dongxue
    Wei, Qinglan
    ICMI'15: PROCEEDINGS OF THE 2015 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2015, : 497 - 502
  • [50] A multi-stage dynamical fusion network for multimodal emotion recognition
    Sihan Chen
    Jiajia Tang
    Li Zhu
    Wanzeng Kong
    Cognitive Neurodynamics, 2023, 17 : 671 - 680