Evaluating the Robustness of Fake News Detectors to Adversarial Attacks with Real User Comments (Extended Abstract)

Cited by: 0
Authors
Koren, Annat [1 ]
Underwood, Chandler [2 ]
Serra, Edoardo [2 ]
Spezzano, Francesca [2 ]
Affiliations
[1] City Coll San Francisco, San Francisco, CA 94112 USA
[2] Boise State Univ, Boise, ID USA
Keywords
misinformation; adversarial machine learning; machine learning robustness;
DOI
10.1109/DSAA61799.2024.10722837
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
The widespread use of social media has led to an increase in false and misleading information presented as legitimate news, also known as fake news. This poses a threat to societal stability and has driven the development of fake news detectors that use machine learning to flag suspicious content. However, existing fake news detection models are vulnerable to attacks by malicious actors who manipulate input data to change predictions. Research on attacks that target news comments is limited, and current attack models are easily detectable. We propose two new attack strategies that instead use real, pre-existing comments drawn from the same dataset as the news article to fool fake news detectors. Our experimental results show that fake news detectors are less robust to our proposed attack strategies, which use pre-existing human-written comments, than to existing methods, including a malicious synthetic comment generator.
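The abstract describes the attack only at a high level, and the paper's two concrete strategies are not reproduced in this record. As a rough illustration of the general idea (not the authors' algorithm), the sketch below greedily selects real comments from a pool to minimize a black-box detector's fake-news score; the names greedy_comment_attack and predict_fake_prob, the budget parameter, and the toy detector are all hypothetical assumptions for illustration.

def greedy_comment_attack(article, comment_pool, predict_fake_prob, budget=5):
    """Hypothetical greedy black-box attack (illustrative only, not the
    paper's method): append real, pre-existing comments that most reduce
    the detector's fake-news score.

    predict_fake_prob(article, comments) -> float in [0, 1]
    comment_pool: real comments drawn from the same dataset.
    """
    chosen = []
    score = predict_fake_prob(article, chosen)
    for _ in range(budget):
        best_comment, best_score = None, score
        for comment in comment_pool:
            if comment in chosen:
                continue
            candidate = predict_fake_prob(article, chosen + [comment])
            if candidate < best_score:
                best_comment, best_score = comment, candidate
        if best_comment is None:  # no remaining comment lowers the score
            break
        chosen.append(best_comment)
        score = best_score
    return chosen, score

if __name__ == "__main__":
    # Toy stand-in for a trained detector: the score drops when
    # "credible-sounding" words appear in the appended comments.
    def toy_predict(article, comments):
        text = " ".join(comments)
        return max(0.0, 0.9 - 0.2 * text.count("verified"))

    pool = ["verified by several outlets", "lol fake", "I verified this myself"]
    comments, score = greedy_comment_attack("some article", pool, toy_predict, budget=2)
    print(comments, score)

Because the selected comments are genuine human-written text from the same dataset, an attack of this shape is harder to filter out than synthetically generated comments, which is the robustness gap the abstract highlights.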
Pages: 437 - 438
Number of pages: 2