Saliency Map-Based Local White-Box Adversarial Attack Against Deep Neural Networks

Cited by: 1
Authors
Liu, Haohan [1 ,2 ]
Zuo, Xingquan [1 ,2 ]
Huang, Hai [1 ]
Wan, Xing [1 ,2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Comp Sci, Beijing, Peoples R China
[2] Minist Educ, Key Lab Trustworthy Distributed Comp & Serv, Beijing, Peoples R China
Keywords
Deep learning; Saliency map; Local white-box attack; Adversarial attack;
DOI
10.1007/978-3-031-20500-2_1
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Current deep neural networks (DNN) are easily fooled by adversarial examples, which are generated by adding small, well-designed, human-imperceptible perturbations to clean examples. Adversarial examples mislead deep learning (DL) models into making wrong predictions. Most existing white-box attack methods in the image domain are based on the model's global gradient: the global gradient is first calculated, and then the perturbation is added along the gradient direction. Such methods usually achieve a high attack success rate, but they also have shortcomings, such as excessive perturbation that is easily detected by the human eye. Therefore, in this paper we propose a Saliency Map-based Local white-box Adversarial Attack method (SMLAA). SMLAA introduces the saliency map, a tool used in the interpretability of artificial intelligence. First, Gradient-weighted Class Activation Mapping (Grad-CAM) is utilized to provide a visual interpretation of model decisions and locate the important areas in an image. Then, the perturbation is added only to these important local areas, reducing the magnitude of the perturbation. Experimental results show that, compared with global attack methods, SMLAA reduces the average robustness measure by 9%-24% while maintaining the attack success rate. This means that SMLAA achieves a high attack success rate with fewer pixels changed.
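The two-stage idea summarized in the abstract (locate salient regions with Grad-CAM, then perturb only those pixels) can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: the ResNet-50 backbone, the target layer (layer4[-1]), the saliency threshold of 0.5, the single FGSM-style step, and epsilon = 8/255 are all illustrative assumptions; SMLAA's exact attack procedure and parameters are given in the paper.

# Sketch of a saliency-guided local perturbation, assuming PyTorch/torchvision.
# Assumes x is a single (1, 3, H, W) image tensor in [0, 1]; normalization omitted.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()

def grad_cam_mask(x, target_layer, threshold=0.5):
    """Binary mask of 'important' pixels via Grad-CAM for the model's top prediction."""
    acts = []
    handle = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    logits = model(x)
    handle.remove()
    pred = logits.argmax(dim=1)
    score = logits.gather(1, pred.unsqueeze(1)).sum()
    # Gradient of the predicted-class score w.r.t. the target-layer activations.
    grads = torch.autograd.grad(score, acts[0])[0]
    # Grad-CAM: weight each channel by its average gradient, ReLU, upsample, normalize.
    weights = grads.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts[0]).sum(dim=1, keepdim=True)).detach()
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return (cam >= threshold).float(), pred

def local_fgsm(x, epsilon=8 / 255, threshold=0.5):
    """One FGSM-style step restricted to the Grad-CAM salient region (illustrative)."""
    mask, pred = grad_cam_mask(x, model.layer4[-1], threshold)
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), pred)
    loss.backward()
    # Pixels outside the salient mask are left untouched.
    return (x + epsilon * mask * x_adv.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)              # stand-in for a preprocessed input image
x_adv = local_fgsm(x)
print((x_adv != x).float().mean().item())   # fraction of pixels actually perturbed

An iterative variant (PGD-style updates masked the same way) follows the same pattern; the key point conveyed by the abstract is that pixels outside the salient region are never modified, which is what keeps the overall perturbation small relative to a global-gradient attack.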
Pages: 3-14
Number of pages: 12
Related Papers
50 records in total
  • [31] Efficient Untargeted White-Box Adversarial Attacks Based on Simple Initialization
    Zhou, Yunyi
    Gao, Haichang
    He, Jianping
    Zhang, Shudong
    Wu, Zihui
    CHINESE JOURNAL OF ELECTRONICS, 2024, 33 (04) : 979 - 988
  • [32] Robustness of Workload Forecasting Models in Cloud Data Centers: A White-Box Adversarial Attack Perspective
    Mahbub, Nosin Ibna
    Hossain, Md. Delowar
    Akhter, Sharmen
    Hossain, Md. Imtiaz
    Jeong, Kimoon
    Huh, Eui-Nam
    IEEE ACCESS, 2024, 12 : 55248 - 55263
  • [33] Untargeted white-box adversarial attack with heuristic defence methods in real-time deep learning based network intrusion detection system
    Roshan, Khushnaseeb
    Zafar, Aasim
    Ul Haque, Shiekh Burhan
    COMPUTER COMMUNICATIONS, 2024, 218 : 97 - 113
  • [34] Untargeted White-box Adversarial Attack with Heuristic Defence Methods in Real-time Deep Learning based Network Intrusion Detection System
    Roshan, Khushnaseeb
    Zafar, Aasim
    Haque, Sheikh Burhan Ul
    arXiv, 2023,
  • [35] Efficient Untargeted White-Box Adversarial Attacks Based on Simple Initialization
    Zhou, Yunyi
    Gao, Haichang
    He, Jianping
    Zhang, Shudong
    Wu, Zihui
    CHINESE JOURNAL OF ELECTRONICS, 2024, 33 (04) : 979 - 988
  • [36] Optimizing Deep Learning Based Intrusion Detection Systems Defense Against White-Box and Backdoor Adversarial Attacks Through a Genetic Algorithm
    Alrawashdeh, Khaled
    Goldsmith, Stephen
    2020 IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP (AIPR): TRUSTED COMPUTING, PRIVACY, AND SECURING MULTIMEDIA, 2020,
  • [37] GAN-Based Siamese Neuron Network for Modulation Classification Against White-Box Adversarial Attacks
    Zhou, Xiaoyu
    Qi, Peihan
    Zhang, Weilin
    Zheng, Shilian
    Zhang, Ning
    Li, Zan
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2024, 10 (01) : 122 - 137
  • [38] Cocktail Universal Adversarial Attack on Deep Neural Networks
    Li, Shaoxin
    Li, Xiaofeng
    Che, Xin
    Li, Xintong
    Zhang, Yong
    Chu, Lingyang
    COMPUTER VISION - ECCV 2024, PT LXV, 2025, 15123 : 396 - 412
  • [39] Blind Data Adversarial Bit-flip Attack against Deep Neural Networks
    Ghavami, Behnam
    Sadati, Mani
    Shahidzadeh, Mohammad
    Fang, Zhenman
    Shannon, Lesley
    2022 25TH EUROMICRO CONFERENCE ON DIGITAL SYSTEM DESIGN (DSD), 2022, : 899 - 904
  • [40] A concealed poisoning attack to reduce deep neural networks' robustness against adversarial samples
    Zheng, Junhao
    Chan, Patrick P. K.
    Chi, Huiyang
    He, Zhimin
    INFORMATION SCIENCES, 2022, 615 : 758 - 773