Evaluating the Robustness of Fake News Detectors to Adversarial Attacks with Real User Comments (Extended Abstract)

Cited by: 0
Authors
Koren, Annat [1 ]
Underwood, Chandler [2 ]
Serra, Edoardo [2 ]
Spezzano, Francesca [2 ]
Affiliations
[1] City Coll San Francisco, San Francisco, CA 94112 USA
[2] Boise State Univ, Boise, ID USA
Keywords
misinformation; adversarial machine learning; machine learning robustness
DOI
10.1109/DSAA61799.2024.10722837
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
The widespread use of social media has led to an increase in false and misleading information presented as legitimate news, also known as fake news. This poses a threat to societal stability and has driven the development of fake news detectors that use machine learning to flag suspicious content. However, existing fake news detection models are vulnerable to attacks by malicious actors who manipulate data to change predictions. Research on attacks that target news comments is limited, and current attack models are easily detectable. We propose two new attack strategies that instead use real, pre-existing comments drawn from the same dataset as the news article to fool fake news detectors. Our experimental results show that fake news detectors are less robust to our proposed attack strategies, which use pre-existing human-written comments, than to existing attack methods, including a malicious synthetic comment generator.
Pages: 437-438
Page count: 2