Fusion of Multimodal Information for Video Comment Text Sentiment Analysis Methods

Cited by: 0
Authors
Han, Jing [1 ]
Lv, Jinghua [2 ]
Affiliations
[1] Huzhou Vocat & Tech Coll, Sch Creat Art & Fash Design, Huzhou 313099, Zhejiang, Peoples R China
[2] Kyungsung Univ, Dept Chinese Studies, Pusan 48434, South Korea
Keywords
Video comment text sentiment analysis; multimodal information fusion; M-S multimodal sentiment model; convolutional neural network
DOI
None available
CLC Number
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Sentiment analysis of video comment text has important application value in modern social media and public-opinion management. By analysing the sentiment of video comments, we can better understand users' emotional tendencies, optimise content recommendation, and manage public opinion effectively, which is of great practical significance for the delivery of video content. To address the problems of current video comment text sentiment analysis methods, such as ambiguous interpretation, complex construction, and low accuracy, this paper proposes a sentiment analysis method based on the M-S multimodal sentiment model. First, it briefly reviews existing methods of video comment text sentiment analysis and their advantages and disadvantages; it then studies the key steps of multimodal sentiment analysis and proposes a model built on the M-S multimodal sentiment framework; finally, the method is verified through simulation experiments on video comment text from the Communist Youth League. The results show that the proposed model improves the accuracy and real-time performance of prediction, and addresses two shortcomings of existing multimodal sentiment analysis methods for video comment text: time complexity too high for practical application, and failure to consider the interrelationships and mutual influences among the modalities.
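The record does not give the equations of the M-S model itself, but the abstract's central idea, combining per-modality sentiment signals while weighting their relative influence, can be loosely illustrated with a score-level (late) fusion sketch. Everything below is hypothetical: the modality names, logits, and weights are invented for illustration and are not taken from the paper.

```python
import math

def softmax(scores):
    """Normalize raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_modalities(modality_logits, weights):
    """Weighted sum of per-modality sentiment logits
    (classes: negative, neutral, positive), then softmax."""
    n_classes = len(next(iter(modality_logits.values())))
    fused = [0.0] * n_classes
    for name, logits in modality_logits.items():
        w = weights[name]
        for i, v in enumerate(logits):
            fused[i] += w * v
    return softmax(fused)

# Hypothetical per-modality logits for one video comment.
logits = {
    "text":  [0.2, 0.1, 1.5],   # text encoder leans positive
    "audio": [0.4, 0.9, 0.3],   # audio encoder leans neutral
    "image": [0.1, 0.2, 1.1],   # frame encoder leans positive
}
weights = {"text": 0.5, "audio": 0.2, "image": 0.3}

probs = fuse_modalities(logits, weights)
label = ["negative", "neutral", "positive"][probs.index(max(probs))]
```

A fixed weighted sum like this cannot capture the interactions between modalities that the abstract emphasises; models such as the paper's M-S approach typically learn the fusion (e.g. with convolutional or attention layers) rather than using hand-set weights.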
Pages: 266-274
Page count: 9