Evaluating the Robustness of Fake News Detectors to Adversarial Attacks with Real User Comments (Extended Abstract)

Cited by: 0
Authors
Koren, Annat [1 ]
Underwood, Chandler [2 ]
Serra, Edoardo [2 ]
Spezzano, Francesca [2 ]
Affiliations
[1] City Coll San Francisco, San Francisco, CA 94112 USA
[2] Boise State Univ, Boise, ID USA
Keywords
misinformation; adversarial machine learning; machine learning robustness
DOI
10.1109/DSAA61799.2024.10722837
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The widespread use of social media has led to an increase in false and misleading information presented as legitimate news, also known as fake news. This poses a threat to societal stability and has driven the development of fake news detectors that use machine learning to flag suspicious information. However, existing fake news detection models are vulnerable to attacks by malicious actors who can manipulate data to change predictions. Research on attacks targeting news comments is limited, and current attack models are easily detectable. We propose two new attack strategies that instead use real, pre-existing comments drawn from the same dataset as the news article to fool fake news detectors. Our experimental results show that fake news detectors are less robust to our proposed strategies, which use pre-existing human-written comments, than to existing attack methods, including a malicious synthetic comment generator.
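The abstract's core idea — appending real, pre-existing comments rather than synthetically generated ones to flip a comment-aware detector's prediction — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the greedy selection loop, the `detector(article, comments)` interface returning P(fake), and the toy detector are all assumptions introduced here.

```python
# Hedged sketch: an attacker greedily appends real comments from a pool
# (the same dataset as the article) that most reduce a comment-aware
# detector's confidence that the article is fake. All names hypothetical.

def greedy_real_comment_attack(detector, article, comment_pool, budget=5):
    """Greedily pick real comments that lower detector(article, comments),
    which is assumed to return P(fake). Stops when the prediction flips
    below 0.5, the budget is spent, or no comment helps further."""
    chosen = []
    for _ in range(budget):
        best_comment, best_score = None, detector(article, chosen)
        for comment in comment_pool:
            score = detector(article, chosen + [comment])
            if score < best_score:  # pushes the prediction toward "real"
                best_comment, best_score = comment, score
        if best_comment is None:  # no remaining comment reduces the score
            break
        chosen.append(best_comment)
        if best_score < 0.5:  # prediction flipped to "real"
            break
    return chosen

def toy_detector(article, comments):
    """Stand-in detector: attached comments voicing agreement ("true")
    lower the predicted probability that the article is fake."""
    if not comments:
        return 0.9  # confidently "fake" with no comments attached
    support = sum("true" in c for c in comments) / len(comments)
    return 0.9 - 0.6 * support

# Usage: the attack selects a supportive real comment from the pool
# and stops as soon as the toy detector's prediction flips.
pool = ["this is so true", "fake!", "true story", "lol"]
attack = greedy_real_comment_attack(toy_detector, "some article text", pool)
```

Because the appended comments are genuine human-written text drawn from the dataset itself, a defense that screens for machine-generated or out-of-distribution comments would not flag them — which is the intuition behind the paper's robustness findings.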
Pages: 437 - 438
Page count: 2