Detection Tolerant Black-Box Adversarial Attack Against Automatic Modulation Classification With Deep Learning

Cited by: 21
Authors
Qi, Peihan [1 ]
Jiang, Tao [2 ]
Wang, Lizhan [3 ]
Yuan, Xu [4 ]
Li, Zan [1 ]
Affiliations
[1] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[2] Xidian Univ, Sch Cyber Engn, Xian 710071, Peoples R China
[3] Xidian Univ, Guangzhou Inst Technol, Guangzhou 510555, Peoples R China
[4] Univ Louisiana Lafayette, Sch Comp & Informat, Lafayette, LA 70504 USA
Funding
National Natural Science Foundation of China;
Keywords
Computational modeling; Modulation; Data models; Perturbation methods; Training; Security; Reliability; Adversarial examples; automatic modulation classification (AMC); black-box attack; deep learning (DL);
DOI
10.1109/TR.2022.3161138
CLC Classification
TP3 [computing technology, computer technology];
Discipline Code
0812;
Abstract
Advances in adversarial attack and defense technologies enhance the reliability of deep learning (DL) systems in a spiral of mutual improvement. Most existing adversarial attack methods rest on overly idealized assumptions, which creates the illusion that DL systems can be attacked trivially and has limited further improvement of DL systems. To perform practical adversarial attacks, this article presents a detection-tolerant black-box adversarial-attack (DTBA) method against DL-based automatic modulation classification (AMC). In the DTBA method, a local DL model is first trained as a substitute for the remote target DL model. The training dataset is generated by the attacker, labeled by querying the target model, and augmented by Jacobian transformation. A conventional gradient attack method is then used to generate adversarial examples against the local model. Moreover, before the attack is launched on the target model, the local model estimates the misclassification probability of the perturbed examples in advance and deletes invalid adversarial examples. Compared with related attack methods under different criteria on public datasets, the DTBA method reduces the attack cost while increasing the attack success rate; the adversarial transferability of the proposed method to the target model improves by more than 20%. The DTBA method is thus suitable for launching flexible and effective black-box adversarial attacks against DL-based AMC systems.
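The pipeline the abstract describes (train a local substitute on target-labeled data, run a gradient attack against the substitute, then pre-filter adversarial examples the substitute does not actually fool) can be sketched in miniature. This is not the paper's implementation: the "target model" below is a simulated linear oracle, the substitute is softmax regression, the gradient attack is plain FGSM, and Jacobian-based dataset augmentation is omitted for brevity; all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical stand-in for the remote target model's labeling oracle.
def target_label(x, w_true):
    return np.argmax(x @ w_true, axis=1)

n, d, k = 200, 10, 4
w_true = rng.normal(size=(d, k))     # unknown target-model parameters
X = rng.normal(size=(n, d))          # attacker-generated query inputs
y = target_label(X, w_true)          # labels obtained from the target

# Step 1: train the local substitute (softmax regression, gradient descent).
W = np.zeros((d, k))
onehot = np.eye(k)[y]
for _ in range(300):
    p = softmax(X @ W)
    W -= 0.1 * X.T @ (p - onehot) / n

# Step 2: FGSM on the substitute -- perturb inputs along the sign of the
# cross-entropy loss gradient w.r.t. the input: dL/dx = (p - onehot) W^T.
eps = 0.5
p = softmax(X @ W)
grad_x = (p - onehot) @ W.T
X_adv = X + eps * np.sign(grad_x)

# Step 3: detection-tolerant filtering -- keep only examples the substitute
# itself misclassifies, discarding likely-invalid ones before any query to
# the target model is spent on them.
pred_local = np.argmax(X_adv @ W, axis=1)
keep = pred_local != y
X_send = X_adv[keep]
print(f"kept {int(keep.sum())} of {n} adversarial examples")
```

The pre-filtering step is what lowers the attack cost: every discarded example is a target-model query (and a detection risk) the attacker never has to spend.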
Pages: 674–686
Page count: 13