How to detect propaganda from social media? Exploitation of semantic and fine-tuned language models

Cited by: 0
Authors
Malik M.S.I. [1,2]
Imran T. [2]
Mamdouh J.M. [3]
Affiliations
[1] Department of Computer Science, School of Data Analysis and Artificial Intelligence, Higher School of Economics, Moscow
[2] Department of Computer Science, Capital University of Science and Technology, Islamabad
[3] Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh
Keywords
BERT; Binary model; Linguistic; LSA; News articles; Propaganda; Semantic; word2vec
DOI
10.7717/PEERJ-CS.1248
Abstract
Online propaganda is a mechanism for influencing the opinions of social media users and a growing menace to public health, democratic institutions, and civil society. The present study proposes a propaganda detection framework as a binary classification model built on a news repository. Several feature models are explored to develop a robust model: part-of-speech, LIWC, word uni-gram, Embeddings from Language Models (ELMo), FastText, word2vec, latent semantic analysis (LSA), and char tri-gram features. Fine-tuning of BERT is also performed. Three oversampling methods are investigated to handle the class imbalance of the QProp dataset, of which SMOTE combined with Edited Nearest Neighbours (SMOTE-ENN) yields the best results. The fine-tuning experiments show that BERT with a 320-token sequence length (BERT-320) is the best model. As a standalone feature set, char tri-grams outperform all other features. Robust performance is observed for the char tri-gram + BERT and char tri-gram + word2vec combinations, both of which outperform two state-of-the-art baselines. In contrast to prior approaches, adding feature selection further improves performance, achieving more than 97.60% recall, F1-score, and AUC on the dev and test partitions of the dataset. The findings of the present study can be used to organize news articles for public news websites. © Copyright 2023 Malik et al.
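
As an illustration of the resampling-plus-features setup summarized in the abstract, the Python sketch below combines char tri-gram TF-IDF features with SMOTE-ENN oversampling in an imbalanced-learn pipeline. It is a minimal sketch under stated assumptions, not the authors' exact pipeline: the logistic-regression classifier, all hyperparameters, and the load_qprop_split helper are illustrative inventions.

    # Minimal sketch: char tri-gram features + SMOTE-ENN resampling.
    # load_qprop_split is a hypothetical helper returning (texts, labels).
    from imblearn.combine import SMOTEENN
    from imblearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report

    train_texts, train_labels = load_qprop_split("train")  # hypothetical loader
    test_texts, test_labels = load_qprop_split("test")

    pipeline = Pipeline(steps=[
        # Character tri-grams: analyzer="char" with ngram_range=(3, 3).
        ("tfidf", TfidfVectorizer(analyzer="char", ngram_range=(3, 3),
                                  max_features=50000)),
        # SMOTE oversampling followed by Edited Nearest Neighbours cleaning;
        # the imblearn Pipeline applies it at fit time only, never to test data.
        ("resample", SMOTEENN(random_state=42)),
        ("clf", LogisticRegression(max_iter=1000)),  # assumed classifier choice
    ])

    pipeline.fit(train_texts, train_labels)
    print(classification_report(test_labels, pipeline.predict(test_texts)))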
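The BERT-320 configuration can likewise be sketched with the Hugging Face transformers library: the snippet below fine-tunes a BERT checkpoint for binary classification at the 320-token sequence length the abstract reports as best. The bert-base-uncased checkpoint, learning rate, and single-batch training step are assumptions for illustration, not details taken from the paper.

    # Hedged sketch of BERT fine-tuning with max sequence length 320.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)  # binary: propaganda vs. not

    texts, labels = load_qprop_split("train")  # hypothetical loader, as above
    # Truncate/pad every article to exactly 320 tokens (the "BERT-320" setting).
    batch = tokenizer(texts, truncation=True, padding="max_length",
                      max_length=320, return_tensors="pt")

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    out = model(**batch, labels=torch.tensor(labels))  # loss computed internally
    out.loss.backward()
    optimizer.step()
    # A real run would iterate over mini-batches for several epochs.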
Related Papers
50 records in total
  • [1] How to detect propaganda from social media? Exploitation of semantic and fine-tuned language models
    Malik, Muhammad Shahid Iqbal
    Imran, Tahir
    Mamdouh, Jamjoom Mona
PEERJ COMPUTER SCIENCE, 2023, 9: e1248
  • [2] Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned Language Models
    Li, Na
    Kteich, Hanane
    Bouraoui, Zied
    Schockaert, Steven
    PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 216 - 226
  • [3] Exploring Memorization in Fine-tuned Language Models
    Zeng, Shenglai
    Li, Yaxin
    Ren, Jie
    Liu, Yiding
    Xu, Han
    He, Pengfei
    Xing, Yue
    Wang, Shuaiqiang
    Tang, Jiliang
    Yin, Dawei
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 3917 - 3948
  • [4] Fingerprinting Fine-tuned Language Models in the Wild
    Diwan, Nirav
Chakraborty, Tanmoy
    Shafiq, Zubair
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 4652 - 4664
  • [5] Advancing emotion recognition in social media: A novel integration of heterogeneous neural networks with fine-tuned language models
    Maazallahi, Abbas
    Asadpour, Masoud
    Bazmi, Parisa
    INFORMATION PROCESSING & MANAGEMENT, 2024, 62 (02)
  • [6] Racial Skew in Fine-Tuned Legal AI Language Models
    Malic, Vincent Quirante
    Kumari, Anamika
    Liu, Xiaozhong
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 245 - 252
  • [7] On the Generalization Abilities of Fine-Tuned Commonsense Language Representation Models
    Shen, Ke
    Kejriwal, Mayank
    ARTIFICIAL INTELLIGENCE XXXVIII, 2021, 13101 : 3 - 16
  • [8] How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?
    Dong, Xinshuai
    Luu Anh Tuan
    Lin, Min
    Yan, Shuicheng
    Zhang, Hanwang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [9] Deciphering language disturbances in schizophrenia: A study using fine-tuned language models
    Li, Renyu
    Cao, Minne
    Fu, Dawei
    Wei, Wei
    Wang, Dequan
    Yuan, Zhaoxia
    Hu, Ruofei
    Deng, Wei
    SCHIZOPHRENIA RESEARCH, 2024, 271 : 120 - 128
  • [10] Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning
    Juneja, Gurusha
    Dutta, Subhabrata
    Chakrabarti, Soumen
    Manchhanda, Sunny
    Chakraborty, Tanmoy
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 3675 - 3691