An Adversarial Network-based Multi-model Black-box Attack

Cited by: 0
Authors
Lin, Bin [1]
Chen, Jixin [2]
Zhang, Zhihong [3]
Lai, Yanlin [2]
Wu, Xinlong [2]
Tian, Lulu [4]
Cheng, Wangchi [5]
Affiliations
[1] Sichuan Normal Univ, Chengdu 610066, Peoples R China
[2] Southwest Petr Univ, Sch Comp Sci, Chengdu 610500, Peoples R China
[3] AECC Sichuan Gas Turbine Estab, Mianyang 621700, Sichuan, Peoples R China
[4] Brunel Univ London, Uxbridge UB8 3PH, Middx, England
[5] Inst Logist Sci & Technol, Beijing 100166, Peoples R China
Source
Intelligent Automation and Soft Computing
Keywords
Black-box attack; adversarial examples; GAN; multi-model; deep neural networks;
DOI
10.32604/iasc.2021.016818
Chinese Library Classification (CLC) number
TP [Automation Technology, Computer Technology]
Discipline classification code
0812
Abstract
Research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples. In this paper, we propose a generative model that produces adversarial examples able to deceive multiple deep learning models simultaneously. Unlike most popular adversarial attack algorithms, the proposed method is based on Generative Adversarial Networks (GANs); it can quickly produce adversarial examples and perform black-box attacks against multiple models. To enhance the transferability of the examples it generates, we train it against multiple neural networks. Experimental results on MNIST show that our method efficiently generates adversarial examples and can simultaneously attack several classes of deep neural networks, including fully connected neural networks (FCNN), convolutional neural networks (CNN), and recurrent neural networks (RNN). We also performed a black-box attack on VGG16: with ten test classes (0-9) the attack success rate is 97.68%, and with seven test classes (0-6) it reaches 98.25%.
Pages: 641-649
Number of pages: 9
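
The record contains no code, so the snippet below is only a minimal, hypothetical PyTorch sketch of the general idea described in the abstract: a generator adds a bounded perturbation to an MNIST image and is trained with a GAN-style discriminator term plus an untargeted misclassification term summed over several surrogate classifiers, which is what encourages transferability across models. All names here (PerturbationGenerator, multi_model_attack_loss, the toy discriminator and surrogates) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a GAN-based multi-model attack (not the paper's released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Maps a clean image to an adversarial image with a bounded perturbation."""
    def __init__(self, eps: float = 0.3):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        delta = self.eps * self.net(x)            # scale perturbation to [-eps, eps]
        return torch.clamp(x + delta, 0.0, 1.0)   # keep pixels in the valid range

def multi_model_attack_loss(generator, discriminator, surrogates, x, y):
    """GAN realism term plus an untargeted attack term summed over all surrogates."""
    x_adv = generator(x)
    # GAN term: make the discriminator judge the adversarial image as "clean".
    gan_loss = F.binary_cross_entropy_with_logits(
        discriminator(x_adv), torch.ones(x.size(0), 1, device=x.device))
    # Attack term: maximize cross-entropy of every surrogate on the true label
    # (minimizing the negative cross-entropy), i.e., an untargeted attack.
    attack_loss = sum(-F.cross_entropy(model(x_adv), y) for model in surrogates)
    return gan_loss + attack_loss, x_adv

if __name__ == "__main__":
    # Toy usage with random tensors standing in for MNIST batches.
    G = PerturbationGenerator()
    D = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 1))        # simple discriminator
    surrogates = [
        nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)),      # FCNN stand-in
        nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 28 * 28, 10)),  # CNN stand-in
    ]
    opt = torch.optim.Adam(G.parameters(), lr=1e-3)
    x = torch.rand(16, 1, 28, 28)                                 # fake image batch
    y = torch.randint(0, 10, (16,))                               # fake labels
    loss, x_adv = multi_model_attack_loss(G, D, surrogates, x, y)
    opt.zero_grad(); loss.backward(); opt.step()                  # update generator only
    print("loss:", float(loss), "adversarial batch shape:", tuple(x_adv.shape))
```

In a full implementation the discriminator (and, if desired, the surrogates) would be updated in alternating steps, and the trained generator would then be evaluated in a black-box setting against an unseen model such as VGG16, as the abstract reports.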