Fast Gradient Scaled Method for Generating Adversarial Examples

Times Cited: 0
Authors
Xu, Zhefeng [1 ]
Luo, Zhijian [1 ]
Mu, Jinlong [1 ]
Affiliations
[1] Hunan Inst Traff Engn, Hengyang, Hunan, Peoples R China
Source
6TH INTERNATIONAL CONFERENCE ON INNOVATION IN ARTIFICIAL INTELLIGENCE, ICIAI2022 | 2022
Keywords
adversarial examples; FGSM; FGScaledM; adversarial perturbations;
DOI
10.1145/3529466.3529497
CLC Number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Though deep neural networks have achieved great success on many challenging tasks, they have been shown to be vulnerable to adversarial examples, which fool a network by adding human-imperceptible perturbations to clean examples. As the first-generation attack for generating adversarial examples, FGSM has inspired many follow-up attacks. However, the perturbations FGSM generates are often perceptible to humans, because it modifies every pixel by the same amplitude, using only the sign of the gradient of the loss. To address this, we propose the fast gradient scaled method (FGScaledM), which scales the gradients of the loss into the valid range, making the adversarial perturbations more imperceptible. Extensive experiments on the MNIST and CIFAR-10 datasets show that, while maintaining similar attack success rates, FGScaledM generates finer-grained and less perceptible adversarial perturbations than FGSM.
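The abstract contrasts FGSM's uniform, sign-based perturbation with FGScaledM's gradient scaling. The sketch below illustrates that contrast in PyTorch. The exact normalization used by FGScaledM is not specified in this record, so dividing the gradient by its maximum absolute value (mapping it into [-1, 1] before applying the epsilon budget) is an assumption, and the model, loss_fn, and eps names are illustrative.

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    """Classic FGSM: every pixel is perturbed by the same amplitude eps,
    in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid image range

def fgscaledm(model, loss_fn, x, y, eps):
    """Sketch of FGScaledM as described in the abstract: instead of taking
    only the sign, scale the raw gradient into the valid range so that
    per-pixel perturbation amplitudes vary with gradient magnitude.
    ASSUMPTION: the scaling here normalizes by the maximum absolute
    gradient value; the paper's exact scaling may differ."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    g = x.grad
    scaled = g / g.abs().max().clamp_min(1e-12)  # now in [-1, 1]
    x_adv = x + eps * scaled
    return x_adv.clamp(0, 1).detach()
```

Because the scaled gradient magnitudes vary per pixel, most pixels receive a perturbation smaller than eps, which is consistent with the abstract's claim of finer-grained, less perceptible perturbations at a comparable attack success rate.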
Pages: 189-193
Number of Pages: 5