A Sentiment Analysis Method for Big Social Online Multimodal Comments Based on Pre-trained Models

Cited by: 0
Authors
Wan, Jun [1 ,2 ]
Wozniak, Marcin [3 ]
Affiliations
[1] Mahasarakham Univ, Maha Sarakham, Thailand
[2] ChongQing City Vocat Coll, Chongqing, Peoples R China
[3] Silesian Tech Univ, Fac Appl Math, Gliwice, Poland
Keywords
Pre-trained model; Social multimodality; Online comments; Big data; Emotional analysis; COVID-19; CLASSIFICATION; FUSION; NETWORK;
DOI
10.1007/s11036-024-02303-1
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Subject classification code
0812
Abstract
Comment data on social media platforms contains not only large amounts of text but also many emoticons, and this multimodal nature makes sentiment analysis more difficult. A big data sentiment analysis method for social online multimodal (SOM) comments is proposed. The method uses web scraping to collect large-scale SOM comment data, including text and emoji data, from the internet, and then extracts, segments, and part-of-speech tags the text. An attention-mechanism-based feature extraction method for the SOM comment text and a correlation-based feature extraction method for the emoji (sticker) data are used to obtain the emotional features of each modality. Taking these two sets of emotional features as input and building on the ELMo pre-trained model, a GE-BiLSTM model for SOM comment sentiment analysis is established. The model combines the ELMo pre-trained model with the GloVe model to obtain the emotional factors of the social multimodal big data; after recombining them, the output layer of the GE-BiLSTM model produces the sentiment analysis results for the SOM comment data. Experiments show that the method has strong extraction and segmentation capabilities for SOM comment text, effectively extracts the emotional features contained in both the text and emoji data, and yields accurate sentiment analysis results for big SOM comment data.
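The abstract names the building blocks (GloVe and ELMo embeddings, a bidirectional LSTM, attention-based text features, and emoji features) without giving implementation detail. The sketch below is a minimal, assumption-laden illustration of such a pipeline in PyTorch, not the authors' model: the class name GEBiLSTMSentiment, all dimensions, the placeholder embedding tables standing in for pre-trained ELMo/GloVe vectors, and the concatenation-based fusion of the emoji feature vector are assumptions made only for illustration.

# Minimal sketch (assumptions throughout, not the paper's code): GloVe-style
# static embeddings and ELMo-style contextual embeddings are concatenated per
# token, passed through a BiLSTM with additive attention, fused with an emoji
# feature vector, and classified into sentiment classes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GEBiLSTMSentiment(nn.Module):
    def __init__(self, vocab_size, glove_dim=100, elmo_dim=256,
                 emoji_dim=32, hidden_dim=128, num_classes=3):
        super().__init__()
        # Stand-in for pre-trained GloVe vectors (would normally be loaded and frozen).
        self.glove = nn.Embedding(vocab_size, glove_dim)
        # Stand-in for ELMo output; in the paper this is a pre-trained contextual encoder.
        self.elmo = nn.Embedding(vocab_size, elmo_dim)
        self.bilstm = nn.LSTM(glove_dim + elmo_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Additive attention scores over BiLSTM hidden states.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        # Classifier over the attended text vector concatenated with emoji features.
        self.classifier = nn.Linear(2 * hidden_dim + emoji_dim, num_classes)

    def forward(self, token_ids, emoji_feats):
        # token_ids: (batch, seq_len); emoji_feats: (batch, emoji_dim)
        x = torch.cat([self.glove(token_ids), self.elmo(token_ids)], dim=-1)
        h, _ = self.bilstm(x)                     # (batch, seq_len, 2*hidden_dim)
        weights = F.softmax(self.attn(h), dim=1)  # attention weights over tokens
        text_vec = (weights * h).sum(dim=1)       # attention-weighted pooling
        fused = torch.cat([text_vec, emoji_feats], dim=-1)
        return self.classifier(fused)             # sentiment logits

if __name__ == "__main__":
    model = GEBiLSTMSentiment(vocab_size=5000)
    tokens = torch.randint(0, 5000, (4, 20))  # 4 comments, 20 tokens each
    emojis = torch.randn(4, 32)               # placeholder emoji feature vectors
    print(model(tokens, emojis).shape)        # torch.Size([4, 3])

In this sketch the two modalities are fused by simple concatenation before the output layer; the paper's recombination of ELMo- and GloVe-derived "emotional factors" may differ, and the placeholder embeddings would be replaced by the actual pre-trained models.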
Pages: 14