A Sentiment Analysis Method for Big Social Online Multimodal Comments Based on Pre-trained Models

Citations: 0
Authors
Wan, Jun [1 ,2 ]
Wozniak, Marcin [3 ]
Affiliations
[1] Mahasarakham Univ, Maha Sarakham, Thailand
[2] ChongQing City Vocat Coll, Chongqing, Peoples R China
[3] Silesian Tech Univ, Fac Appl Math, Gliwice, Poland
Keywords
Pre-trained model; Social multimodality; Online comments; Big data; Emotional analysis; COVID-19; CLASSIFICATION; FUSION; NETWORK
DOI
10.1007/s11036-024-02303-1
Chinese Library Classification (CLC): TP3 [computing technology; computer technology]
Discipline code: 0812
Abstract
In addition to large amounts of text, comment data on social media platforms contains many emoticons, and this multimodal nature of online comments increases the difficulty of sentiment analysis. This paper proposes a big data sentiment analysis technology for social online multimodal (SOM) comments. The technology uses web scraping to collect big SOM comment data, including text data and emoji data, from the internet, then extracts and segments the text data and preprocesses it with part-of-speech tagging. The emotional features of SOM comment text and of emoji data are obtained, respectively, with an attention-based feature extraction method for big SOM comment text and a correlation-based expression feature extraction method. Taking the two extracted emotional features as inputs and the ELMo pre-trained model as the basis, a GE-BiLSTM model for SOM comment sentiment analysis is established. The model combines the ELMo pre-trained model with the GloVe model to obtain the emotional factors of social multimodal big data; after recombining them, the output layer of the GE-BiLSTM model produces the sentiment analysis results for big SOM comment data. Experiments show that the technology has strong extraction and segmentation capabilities for SOM comment text, effectively extracts the emotional features contained in text and emoji data, and yields accurate sentiment analysis results for big SOM comment data.
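As a rough illustration of the embedding-fusion step described in the abstract (not the authors' implementation), the minimal PyTorch sketch below concatenates static GloVe vectors with contextual ELMo-style embeddings and feeds them to an attention-equipped bidirectional LSTM classifier. All module names, dimensions, the additive emoji-feature fusion, and the three-class output are illustrative assumptions.

```python
# Minimal sketch of a GE-BiLSTM-style model: static GloVe vectors and contextual
# ELMo-style embeddings are concatenated, passed through a bidirectional LSTM with
# a simple attention pooling layer, optionally fused with an emoji feature vector,
# and mapped to sentiment logits. Layer sizes are assumptions, not the paper's values.
import torch
import torch.nn as nn

class GEBiLSTM(nn.Module):
    def __init__(self, glove_dim=300, elmo_dim=1024, hidden=256, num_classes=3):
        super().__init__()
        self.bilstm = nn.LSTM(glove_dim + elmo_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)       # attention score per time step
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, glove_emb, elmo_emb, emoji_feat=None):
        # glove_emb: (B, T, glove_dim); elmo_emb: (B, T, elmo_dim)
        x = torch.cat([glove_emb, elmo_emb], dim=-1)   # recombine the two embeddings
        h, _ = self.bilstm(x)                          # (B, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)         # attention weights over time
        sent = (w * h).sum(dim=1)                      # weighted sentence vector
        if emoji_feat is not None:                     # hypothetical emoji fusion
            sent = sent + emoji_feat                   # placeholder additive fusion
        return self.out(sent)                          # sentiment logits

# Usage with random tensors standing in for real GloVe/ELMo/emoji features:
model = GEBiLSTM()
logits = model(torch.randn(2, 20, 300), torch.randn(2, 20, 1024),
               emoji_feat=torch.randn(2, 512))
```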
Pages: 14