Adversarial Attack Against Convolutional Neural Network via Gradient Approximation

Cited by: 0
Authors
Wang, Zehao [1 ]
Li, Xiaoran [2 ]
Affiliations
[1] Tiangong Univ, Sch Software, Tianjin, Peoples R China
[2] Xiamen Univ, Sch Elect Sci & Engn, Xiamen, Peoples R China
Source
ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT VI, ICIC 2024 | 2024 / Vol. 14867
Keywords
Adversarial Attack; Image Classification; Convolutional Neural Network; Gradient Approximation;
DOI
10.1007/978-981-97-5597-4_19
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convolutional neural networks (CNNs) have become an essential method for image recognition tasks owing to their remarkable accuracy and efficiency. However, their susceptibility to adversarial attacks, in which slight, imperceptible alterations to input images lead to misclassification, poses significant security concerns. This work proposes a novel adversarial attack strategy against CNNs based on gradient approximation, addressing a setting previously constrained by the opacity of gradient information inside deep learning models. Specifically, our approach leverages an optimization algorithm to approximate the gradient direction and magnitude, enabling the generation of adversarial samples even when direct access to the model's gradients is unavailable. Extensive experiments show that the proposed method significantly reduces classification accuracy while keeping the adversarial samples perceptually indistinguishable from their original counterparts.
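The black-box setting described in the abstract — attacking a model without access to its gradients — is commonly handled with zeroth-order (finite-difference) gradient estimation. The sketch below illustrates that generic idea only; it is not the authors' algorithm, and the loss function, hyperparameter values, and helper names are all assumptions for illustration.

```python
import numpy as np

def estimate_gradient(loss_fn, x, sigma=1e-3, n_samples=50):
    """Approximate the gradient of loss_fn at x using symmetric
    finite differences along random Gaussian directions."""
    rng = np.random.default_rng()
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        # Two loss queries per direction; no model gradients needed.
        delta = (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) / (2 * sigma)
        grad += delta * u
    return grad / n_samples

def black_box_attack(loss_fn, x, epsilon=0.1, alpha=0.02, steps=5):
    """Iterative sign-ascent on the estimated gradient, projected
    onto an L-infinity ball of radius epsilon around x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = estimate_gradient(loss_fn, x_adv)
        x_adv = x_adv + alpha * np.sign(g)
        # Keep the perturbation small (perceptual indistinguishability).
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
    return x_adv
```

In practice `loss_fn` would be the classifier's loss on the true label (queried as a black box), so ascending the estimated gradient pushes the input toward misclassification while the clip step bounds the perturbation.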
Pages: 221-232
Page count: 12
Related Papers
50 records
  • [1] Adversarial attack defense algorithm based on convolutional neural network
    Zhang, Chengyuan
    Wang, Ping
    NEURAL COMPUTING & APPLICATIONS, 2023, 36 (17) : 9723 - 9735
  • [2] Link Prediction Adversarial Attack Via Iterative Gradient Attack
    Chen, Jinyin
    Lin, Xiang
    Shi, Ziqiang
    Liu, Yi
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2020, 7 (04) : 1081 - 1094
  • [3] A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples
    Liu, Guanxiong
    Khalil, Issa
    Khreishah, Abdallah
    Phan, NhatHai
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 834 - 846
  • [4] Boosting Adversarial Transferability via Gradient Relevance Attack
    Zhu, Hegui
    Ren, Yuchen
    Sui, Xiaoyan
    Yang, Lianping
    Jiang, Wuming
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 4718 - 4727
  • [5] Generating Adversarial Samples with Convolutional Neural Network
    Qiu, Zhongxi
    He, Xiaofeng
    Chen, Lingna
    Liu, Hualing
    Zuo, LianPeng
    PROCEEDINGS OF 2019 INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE (PRAI 2019), 2019, : 41 - 45
  • [6] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
    Kwon, Hyun
    Lee, Jun
    SYMMETRY-BASEL, 2021, 13 (03)
  • [7] Deep Convolutional Generative Adversarial Network and Convolutional Neural Network for Smoke Detection
    Yin, Hang
    Wei, Yurong
    Liu, Hedan
    Liu, Shuangyin
    Liu, Chuanyun
    Gao, Yacui
    COMPLEXITY, 2020
  • [8] PPNNI: Privacy-Preserving Neural Network Inference Against Adversarial Example Attack
    He, Guanghui
    Ren, Yanli
    He, Gang
    Feng, Guorui
    Zhang, Xinpeng
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (06) : 4083 - 4096
  • [9] DDoS Attack Detection via Multi-Scale Convolutional Neural Network
    Cheng, Jieren
    Liu, Yifu
    Tang, Xiangyan
    Sheng, Victor S.
    Li, Mengyang
    Li, Junqi
    CMC-COMPUTERS MATERIALS & CONTINUA, 2020, 62 (03) : 1317 - 1333