Adversarial Attacks on Deep-Learning RF Classification in Spectrum Monitoring with Imperfect Bandwidth Estimation

Cited by: 1
Authors
Chew, Daniel [1 ]
Barcklow, Daniel [1 ]
Baumgart, Chris [1 ]
Cooper, A. Brinton [2 ]
Affiliations
[1] Johns Hopkins Univ, Appl Phys Lab, Baltimore, MD 21218 USA
[2] Johns Hopkins Univ, Elect & Comp Engn, Baltimore, MD 21218 USA
Keywords
Spectrum Monitoring; Modulation Classification; Adversarial Attacks; Deep Learning;
DOI
10.1109/WCNC51071.2022.9771571
CLC Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
In a spectrum-monitoring scenario, a monitor attempts to intercept and classify a signal. If the monitor uses a Convolutional Neural Network (CNN) for classification, the communications system can frustrate classification attempts by employing an adversarial waveform: a small additive perturbation applied at the transmitter, generated in the same way as adversarial attacks against image classifiers. We demonstrate that, without foreknowledge of the CNN employed at the monitor, the communications system can develop such an adversarial waveform and deploy it, thus transferring the attack. The adversarial waveform is created by constraining the signal-to-interference ratio at the transmitter, which has the dual benefits of making the attack easy to deploy and mitigating impairment to the communications link. We also demonstrate the vulnerability of a spectrum-monitoring system to this type of attack as a function of symbol-rate uncertainty, where the monitor does not have an exact estimate of the symbol rate employed by the communications link. The spectrum monitor becomes more susceptible to the attack as bandwidth increases.
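The abstract describes an additive adversarial perturbation whose power is constrained by a signal-to-interference ratio (SIR) at the transmitter. The sketch below is a hypothetical illustration of that idea, not the authors' code: an FGSM-style sign perturbation on complex baseband IQ samples is rescaled so the transmitted waveform meets a chosen SIR. The QPSK burst, the 10 dB target, and the random stand-in for a surrogate CNN's input gradient are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_db(signal, perturbation):
    """SIR in dB between a signal and an additive perturbation."""
    p_sig = np.mean(np.abs(signal) ** 2)
    p_int = np.mean(np.abs(perturbation) ** 2)
    return 10.0 * np.log10(p_sig / p_int)

def scale_to_sir(signal, perturbation, target_sir_db):
    """Rescale the perturbation so the resulting SIR equals target_sir_db."""
    p_sig = np.mean(np.abs(signal) ** 2)
    p_int = np.mean(np.abs(perturbation) ** 2)
    target_p_int = p_sig / (10.0 ** (target_sir_db / 10.0))
    return perturbation * np.sqrt(target_p_int / p_int)

# Toy unit-power QPSK burst standing in for the transmitted waveform.
bits = rng.integers(0, 2, (2, 256)) * 2 - 1
x = (bits[0] + 1j * bits[1]) / np.sqrt(2)

# Stand-in for the loss gradient w.r.t. the input IQ samples; in a real
# transfer attack this would come from a surrogate CNN, not the monitor's.
grad = rng.standard_normal(256) + 1j * rng.standard_normal(256)
delta = np.sign(grad.real) + 1j * np.sign(grad.imag)  # FGSM direction

# Constrain the attack to a 10 dB SIR at the transmitter, then add it.
delta = scale_to_sir(x, delta, target_sir_db=10.0)
x_adv = x + delta
```

Constraining the perturbation power (rather than its per-sample amplitude) keeps the attack easy to deploy and bounds the self-interference seen by the intended receiver, matching the trade-off the abstract describes.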
Pages: 1152-1157 (6 pages)