Trust Region Based Adversarial Attack on Neural Networks

Cited by: 27
Authors
Yao, Zhewei [1 ]
Gholami, Amir [1 ]
Xu, Peng [2 ]
Keutzer, Kurt [1 ]
Mahoney, Michael W. [1 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Stanford Univ, Stanford, CA 94305 USA
Keywords
DOI
10.1109/CVPR.2019.01161
CLC number
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Neural Networks are quite vulnerable to adversarial perturbations. Current state-of-the-art adversarial attack methods typically require very time-consuming hyper-parameter tuning, or many iterations to solve an optimization-based adversarial attack. To address this problem, we present a new family of trust region based adversarial attacks, with the goal of computing adversarial perturbations efficiently. We propose several attacks based on variants of the trust region optimization method. We test the proposed methods on Cifar-10 and ImageNet datasets using several different models, including AlexNet, ResNet-50, VGG-16, and DenseNet-121. Our methods achieve results comparable to the Carlini-Wagner (CW) attack, but with a significant speed-up of up to 37x for the VGG-16 model on a Titan Xp GPU. For the case of ResNet-50 on ImageNet, we can bring its classification accuracy down to less than 0.1% with at most 1.5% relative L-infinity (or L-2) perturbation, requiring only 1.02 seconds as compared to 27.04 seconds for the CW attack. We have open sourced our method, which can be accessed at [1].
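The abstract describes computing perturbations with a trust region optimization method, i.e. taking a step inside an adaptive radius and growing or shrinking that radius depending on how well a local model predicted the change in loss. The sketch below is a minimal illustration of that idea, not the authors' released implementation; the function name trust_region_attack, the acceptance thresholds, and the radius-update rule are assumptions chosen for clarity.

```python
# Minimal sketch of a trust-region-style adversarial attack in PyTorch.
# Hypothetical names and hyper-parameters; not the paper's released code.
import torch
import torch.nn.functional as F

def trust_region_attack(model, x, y, steps=20, eps0=1e-3,
                        eta_low=0.25, eta_high=0.75, eps_max=0.05):
    """Perturb x within an adaptive L-infinity trust region of radius eps."""
    x_adv = x.clone().detach()
    eps = eps0
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Candidate step: follow the gradient sign, bounded by the current
        # trust-region radius eps (an L-infinity ball around x_adv).
        step = eps * grad.sign()
        x_cand = (x_adv + step).detach()
        with torch.no_grad():
            actual = F.cross_entropy(model(x_cand), y) - loss
        # First-order (linear) prediction of the loss increase for this step.
        predicted = (grad * step).sum()
        rho = (actual / (predicted + 1e-12)).item()
        # Accept the step only if the linear model was reasonably accurate,
        # then expand or shrink the trust-region radius accordingly.
        x_adv = x_cand if rho > eta_low else x_adv.detach()
        if rho > eta_high:
            eps = min(2 * eps, eps_max)
        elif rho < eta_low:
            eps = eps / 2
        with torch.no_grad():
            if (model(x_adv).argmax(dim=1) != y).all():
                break  # every sample in the batch is already misclassified
    return x_adv
```

The point of the adaptive radius is the trade-off the abstract highlights: larger steps when the local model is trustworthy keep the iteration count low, while shrinking the radius avoids the per-image hyper-parameter tuning that slower optimization-based attacks such as CW typically need.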
Pages: 11342 - 11351
Page count: 10
Related papers
50 records in total
  • [41] Learning to Attack: Adversarial Transformation Networks
    Baluja, Shumeet
    Fischer, Ian
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 2687 - 2695
  • [42] Sparse adversarial attack based on lq-norm for fooling the face anti-spoofing neural networks
    Yang, Linxi
    Yang, Jiezhi
    Peng, Mingjie
    Pi, Jiatian
    Wu, Zhiyou
    Zhou, Xunyi
    Li, Jueyou
    JOURNAL OF ELECTRONIC IMAGING, 2021, 30 (02)
  • [43] NeRFail: Neural Radiance Fields-Based Multiview Adversarial Attack
    Jiang, Wenxiang
    Zhang, Hanwei
    Wang, Xi
    Guo, Zhongwen
    Wang, Hao
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 19, 2024, : 21197 - 21205
  • [44] Sparse Adversarial Attack on Modulation Recognition with Adversarial Generative Networks
    Liang, Kui
    Liu, Zhidong
    Zhao, Xin
    Zeng, Cheng
    Cai, Jun
    2024 4TH INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING, ICICSE 2024, 2024, : 104 - 108
  • [45] Analyze textual data: deep neural network for adversarial inversion attack in wireless networks
    Al Ghamdi, Mohammed A.
    SN APPLIED SCIENCES, 2023, 5
  • [46] Blind Data Adversarial Bit-flip Attack against Deep Neural Networks
    Ghavami, Behnam
    Sadati, Mani
    Shahidzadeh, Mohammad
    Fang, Zhenman
    Shannon, Lesley
    2022 25TH EUROMICRO CONFERENCE ON DIGITAL SYSTEM DESIGN (DSD), 2022, : 899 - 904
  • [47] Black-box Adversarial Attack against Visual Interpreters for Deep Neural Networks
    Hirose, Yudai
    Ono, Satoshi
    2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023,
  • [48] A concealed poisoning attack to reduce deep neural networks' robustness against adversarial samples
    Zheng, Junhao
    Chan, Patrick P. K.
    Chi, Huiyang
    He, Zhimin
    INFORMATION SCIENCES, 2022, 615 : 758 - 773
  • [49] A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
    Mu, Jiaming
    Wang, Binghui
    Li, Qi
    Sun, Kun
    Xu, Mingwei
    Liu, Zhuotao
    CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 108 - 125
  • [50] ADVERSPARSE: AN ADVERSARIAL ATTACK FRAMEWORK FOR DEEP SPATIAL-TEMPORAL GRAPH NEURAL NETWORKS
    Li, Jiayu
    Zhang, Tianyun
    Jin, Shengmin
    Fardad, Makan
    Zafarani, Reza
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 5857 - 5861