How to detect propaganda from social media? Exploitation of semantic and fine-tuned language models

Cited by: 0
Authors
Malik M.S.I. [1 ,2 ]
Imran T. [2 ]
Mamdouh J.M. [3 ]
Affiliations
[1] Department of Computer Science, School of Data Analysis and Artificial Intelligence, Higher School of Economics, Moscow
[2] Department of Computer Science, Capital University of Science and Technology, Islamabad
[3] Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh
Keywords
BERT; Binary model; Linguistic; LSA; News articles; Propaganda; Semantic; word2vec
DOI
10.7717/PEERJ-CS.1248
Abstract
Online propaganda is a mechanism for influencing the opinions of social media users and a growing menace to public health, democratic institutions, and civil society. The present study proposes a propaganda detection framework as a binary classification model built on a news repository. Several feature models are explored to develop a robust classifier: part-of-speech tags, LIWC, word uni-grams, Embeddings from Language Models (ELMo), FastText, word2vec, latent semantic analysis (LSA), and char tri-grams. In addition, BERT is fine-tuned. Three oversampling methods are investigated to handle the class imbalance of the Qprop dataset, of which SMOTE with Edited Nearest Neighbors (SMOTE-ENN) gives the best results. The fine-tuning experiments show that BERT with a sequence length of 320 (BERT-320) is the best BERT variant. As a standalone feature set, char tri-grams outperform all other features. Robust performance is observed for the combinations char tri-gram + BERT and char tri-gram + word2vec, both of which outperform two state-of-the-art baselines. In contrast to prior approaches, adding feature selection further improves performance, achieving more than 97.60% recall, F1-score, and AUC on the dev and test splits of the dataset. The findings of this study can be used to organize news articles for public news websites. © Copyright 2023 Malik et al.
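
To make the classical side of the pipeline concrete, below is a minimal sketch of the char tri-gram + SMOTE-ENN setup described in the abstract, using scikit-learn and imbalanced-learn. The toy texts, the logistic-regression classifier, and all hyper-parameters are illustrative assumptions; only the char tri-gram features, the SMOTE-ENN resampling, and the binary propaganda label follow the paper.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from imblearn.combine import SMOTEENN

    # Hypothetical stand-in data; the paper uses the (imbalanced) Qprop news corpus.
    train_texts = [f"propaganda-style article {i}" for i in range(20)] \
                + [f"regular news article {i}" for i in range(80)]
    train_labels = [1] * 20 + [0] * 80          # 1 = propaganda (minority class)
    dev_texts = ["held-out article one", "held-out article two"]
    dev_labels = [1, 0]

    # Char tri-gram features: the strongest standalone feature model in the paper.
    vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 3))
    X_train = vectorizer.fit_transform(train_texts)
    X_dev = vectorizer.transform(dev_texts)

    # SMOTE-ENN: oversample the minority class with SMOTE, then clean noisy
    # samples with Edited Nearest Neighbours (best of the three methods tested).
    X_res, y_res = SMOTEENN(random_state=42).fit_resample(X_train, train_labels)

    # The classifier here is an assumption; the paper evaluates several learners.
    clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
    print(classification_report(dev_labels, clf.predict(X_dev)))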
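
The BERT side of the framework can be sketched with Hugging Face transformers; the abstract's "BERT-320" refers to a maximum sequence length of 320 tokens. The checkpoint, learning rate, and single-step loop below are assumptions for illustration, not the paper's exact training recipe.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)  # binary: propaganda vs. non-propaganda

    texts = ["...news article body...", "...another article..."]
    labels = torch.tensor([1, 0])

    # Truncate/pad every article to 320 sub-word tokens (the "BERT-320" setting).
    batch = tokenizer(texts, truncation=True, padding="max_length",
                      max_length=320, return_tensors="pt")

    # One illustrative fine-tuning step; a real run iterates over the full data.
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()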
Related Papers
50 items in total
  • [41] An experimental study measuring the generalization of fine-tuned language representation models across commonsense reasoning benchmarks
    Shen, Ke
    Kejriwal, Mayank
    EXPERT SYSTEMS, 2023, 40 (05)
  • [42] A deep dive into automated sexism detection using fine-tuned deep learning and large language models
    Vetagiri, Advaitha
    Pakray, Partha
    Das, Amitava
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 145
  • [43] An assessment framework of higher-order thinking skills based on fine-tuned large language models
    Xiao, Xiong
    Li, Yue
    He, Xiuling
    Fang, Jing
    Yan, Zhonghua
    Xie, Chong
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 272
  • [44] Genealogical Relationship Extraction from Unstructured Text Using Fine-Tuned Transformer Models
    Parrolivelli, Carloangello
    Stanchev, Lubomir
    2023 IEEE 17TH INTERNATIONAL CONFERENCE ON SEMANTIC COMPUTING, ICSC, 2023, : 167 - 174
  • [45] Small Pre-trained Language Models Can be Fine-tuned as Large Models via Over-Parameterization
    Gao, Ze-Feng
    Zhou, Kun
    Liu, Peiyu
    Zhao, Wayne Xin
    Wen, Ji-Rong
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 3819 - 3834
  • [46] Offensive Language Detection in Arabic Social Networks Using Evolutionary-Based Classifiers Learned From Fine-Tuned Embeddings
    Shannaq, Fatima
    Hammo, Bassam
    Faris, Hossam
    Castillo-Valdivieso, Pedro A.
    IEEE ACCESS, 2022, 10 : 75018 - 75039
  • [47] Separate the Wheat from the Chaff: A Post-Hoc Approach to Safety Re-Alignment for Fine-Tuned Language Models
    Wu, Di
    Lu, Xin
    Zhao, Yanyan
    Qin, Bing
    arXiv
  • [48] FASTNav: Fine-Tuned Adaptive Small-Language-Models Trained for Multi-Point Robot Navigation
    Chen, Yuxuan
    Han, Yixin
    Li, Xiao
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (01): : 390 - 397
  • [49] Towards Understanding Large-Scale Discourse Structures in Pre-Trained and Fine-Tuned Language Models
    Huber, Patrick
    Carenini, Giuseppe
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 2376 - 2394
  • [50] Assisting Drafting of Chinese Legal Documents Using Fine-Tuned Pre-trained Large Language Models
    Lin, Chun-Hsien
    Cheng, Pu-Jen
    REVIEW OF SOCIONETWORK STRATEGIES, 2025, : 83 - 110