Countermeasures Against Adversarial Examples in Radio Signal Classification

Cited by: 22
Authors
Zhang, Lu [1 ]
Lambotharan, Sangarapillai [1 ]
Zheng, Gan [1 ]
AsSadhan, Basil [2 ]
Roli, Fabio [3 ]
Affiliations
[1] Loughborough Univ, Wolfson Sch Mech Elect & Mfg Engn, Loughborough LE11 3TU, Leics, England
[2] King Saud Univ, Dept Comp Sci, Riyadh 11421, Saudi Arabia
[3] Univ Cagliari, Dept Elect & Elect Engn, I-09123 Cagliari, Italy
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Modulation; Perturbation methods; Receivers; Training; Smoothing methods; Radio transmitters; Noise measurement; Deep learning; adversarial examples; radio modulation classification; neural rejection; label smoothing;
DOI
10.1109/LWC.2021.3083099
CLC classification
TP [automation technology; computer technology];
Discipline code
0812;
Abstract
Deep learning algorithms have proven powerful in many communication network design problems, including automatic modulation classification. However, they are vulnerable to carefully crafted attacks called adversarial examples. Hence, the reliance on deep learning algorithms poses a serious threat to the security and operation of wireless networks. In this letter, we propose for the first time a countermeasure against adversarial examples in modulation classification. Our countermeasure is based on a neural rejection technique, augmented by label smoothing and Gaussian noise injection, which detects and rejects adversarial examples with high accuracy. Our results demonstrate that the proposed countermeasure can protect deep-learning based modulation classification systems against adversarial examples.
Pages: 1830 - 1834
Page count: 5
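The letter itself is not reproduced in this record, but the three ingredients named in the abstract (confidence-based neural rejection, label smoothing, and Gaussian noise injection) can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function names, the rejection threshold of 0.9, and the noise scale are assumptions chosen for the example.

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    # Label smoothing: soften one-hot training targets toward the
    # uniform distribution (eps is an illustrative choice).
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

def classify_with_rejection(probs, threshold=0.9):
    # Neural rejection sketch: reject any input whose maximum softmax
    # confidence falls below the threshold (-1 marks a rejected input).
    conf = probs.max(axis=-1)
    preds = probs.argmax(axis=-1)
    return np.where(conf >= threshold, preds, -1)

def add_gaussian_noise(iq_samples, sigma=0.01, rng=None):
    # Gaussian noise injection on raw I/Q samples; sigma is illustrative.
    if rng is None:
        rng = np.random.default_rng(0)
    return iq_samples + rng.normal(0.0, sigma, iq_samples.shape)
```

In this sketch, an adversarial example that pushes the classifier into a low-confidence region would be mapped to the rejection label -1 rather than misclassified; the letter's actual detection pipeline and thresholds may differ.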