How to detect propaganda from social media? Exploitation of semantic and fine-tuned language models

Cited by: 0
Authors
Malik M.S.I. [1 ,2 ]
Imran T. [2 ]
Mamdouh J.M. [3 ]
Affiliations
[1] Department of Computer Science, School of Data Analysis and Artificial Intelligence, Higher School of Economics, Moscow
[2] Department of Computer Science, Capital University of Science and Technology, Islamabad
[3] Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh
Keywords
BERT; Binary model; Linguistic; LSA; News articles; Propaganda; Semantic; word2vec
DOI
10.7717/peerj-cs.1248
Abstract
Online propaganda is a mechanism for influencing the opinions of social media users and a growing menace to public health, democratic institutions, and civil society. The present study proposes a propaganda detection framework as a binary classification model built on a news repository. Several feature models are explored to develop a robust model: part-of-speech, LIWC, word uni-gram, Embeddings from Language Models (ELMo), FastText, word2vec, latent semantic analysis (LSA), and char tri-gram features. In addition, BERT is fine-tuned. Three oversampling methods are investigated to handle the class imbalance of the QProp dataset, of which SMOTE with Edited Nearest Neighbours (SMOTE-ENN) yields the best results. The fine-tuning experiments show that BERT with a sequence length of 320 is the best BERT variant. As a standalone model, char tri-grams outperform all other feature models. Robust performance is observed for the combinations char tri-gram + BERT and char tri-gram + word2vec, both of which outperform two state-of-the-art baselines. In contrast to prior approaches, adding feature selection further improves performance, achieving more than 97.60% recall, F1-score, and AUC on both the dev and test parts of the dataset. The findings of the present study can be used to organize news articles for public news websites. © 2023 Malik et al.
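The pipeline the abstract outlines can be illustrated with a short sketch: char tri-gram features, SMOTE-ENN resampling of the imbalanced training split, and a binary classifier on top. The following is a minimal, illustrative example using scikit-learn and imbalanced-learn; the `load_qprop` helper is hypothetical, and the TF-IDF weighting and logistic-regression classifier are placeholder choices, not the exact setup reported in the paper.

```python
# Minimal sketch of the kind of pipeline described in the abstract:
# char tri-gram features -> SMOTE-ENN resampling -> binary classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, roc_auc_score
from imblearn.combine import SMOTEENN

# Hypothetical loader returning article texts and binary labels
# (1 = propaganda, 0 = non-propaganda); not part of the paper's code.
texts_train, y_train = load_qprop("train")
texts_test, y_test = load_qprop("test")

# Char tri-gram feature model (one of the standalone models studied);
# TF-IDF weighting is an illustrative choice.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 3))
X_train = vectorizer.fit_transform(texts_train)
X_test = vectorizer.transform(texts_test)

# SMOTE-ENN: oversample the minority class with SMOTE, then clean
# noisy samples with Edited Nearest Neighbours.
X_res, y_res = SMOTEENN(random_state=42).fit_resample(X_train, y_train)

# Any binary classifier can sit on top; logistic regression is used
# here purely for illustration.
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
print(classification_report(y_test, clf.predict(X_test)))
print("AUC:", roc_auc_score(y_test, clf.decision_function(X_test)))
```

For the fine-tuned BERT side of the study, the reported best sequence length of 320 corresponds to truncating and padding inputs at tokenization time, e.g. with Hugging Face Transformers (the `bert-base-uncased` checkpoint here is an assumption, not necessarily the variant the authors used):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# BERT-320: cap every article at 320 subword tokens.
enc = tok(texts_train, truncation=True, padding="max_length",
          max_length=320, return_tensors="pt")
```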