Predicting social media users' indirect aggression through pre-trained models

Cited by: 0
Authors
Zhou, Zhenkun [1]
Yu, Mengli [2,3,4]
Peng, Xingyu [5]
He, Yuxin [1]
Affiliations
[1] Capital Univ Econ & Business, Sch Stat, Dept Data Sci, Beijing, Peoples R China
[2] Nankai Univ, Sch Journalism & Commun, Tianjin, Peoples R China
[3] Nankai Univ, Convergence Media Res Ctr, Tianjin, Peoples R China
[4] Nankai Univ, Publishing Res Inst, Tianjin, Peoples R China
[5] Beihang Univ, State Key Lab Software Dev Environm, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Indirect aggression; Social media; Psychological traits; Pre-trained model; BERT; ERNIE
DOI
10.7717/peerj-cs.2292
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Indirect aggression has become a prevalent phenomenon that erodes the social media environment. Because self-report questionnaires are costly to administer and indirect aggression is difficult to define objectively, the traditional questionnaire approach is hard to employ in the current cyber area. In this study, we present a model for predicting indirect aggression online based on pre-trained models. Building on Weibo users' social media activity, we constructed basic, dynamic, and content features and classified indirect aggression into three subtypes: social exclusion, malicious humour, and guilt induction. We then combined these features with large-scale pre-trained models to build the prediction model. The empirical evidence shows that the ERNIE-based prediction model outperforms the other pre-trained models and predicts indirect aggression online much better than models without extra pre-trained information. This study offers a practical model for predicting users' indirect aggression. Furthermore, this work contributes to a better understanding of indirect aggression behavior and can support social media platforms' organization and management.
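The pipeline the abstract describes (encode each post, then classify it into the three indirect-aggression subtypes) can be sketched as follows. This is a minimal illustration, not the authors' code: the hash-style `encode` function is a stand-in assumption for a real BERT/ERNIE sentence embedding, and the classifier weights are untrained placeholders.

```python
import math

# The three indirect-aggression subtypes defined in the paper.
SUBTYPES = ["social_exclusion", "malicious_humour", "guilt_induction"]

def encode(text, dim=8):
    """Stand-in for a pre-trained sentence embedding (assumption:
    a real system would use BERT/ERNIE here)."""
    vec = [0.0] * dim
    for i, ch in enumerate(text):
        vec[i % dim] += ord(ch) / 1000.0
    return vec

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def predict(text, weights, bias):
    """Linear classification head over the embedding: one logit per
    subtype, normalized to a probability distribution."""
    emb = encode(text)
    logits = [sum(w * e for w, e in zip(row, emb)) + b
              for row, b in zip(weights, bias)]
    return dict(zip(SUBTYPES, softmax(logits)))

# Toy, untrained parameters purely for illustration.
W = [[0.1 * (i + j) for j in range(8)] for i in range(3)]
b = [0.0, 0.1, -0.1]
scores = predict("You are not invited to our group chat.", W, b)
```

In a real implementation the embedding and head would be fine-tuned jointly on labeled posts; the sketch only shows the data flow from text to subtype probabilities.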
Pages: 21
Source
PeerJ Computer Science, 2024, 10: 1-21