Countermeasures Against Adversarial Examples in Radio Signal Classification

Cited by: 22
Authors
Zhang, Lu [1 ]
Lambotharan, Sangarapillai [1 ]
Zheng, Gan [1 ]
AsSadhan, Basil [2 ]
Roli, Fabio [3 ]
Affiliations
[1] Loughborough Univ, Wolfson Sch Mech Elect & Mfg Engn, Loughborough LE11 3TU, Leics, England
[2] King Saud Univ, Dept Comp Sci, Riyadh 11421, Saudi Arabia
[3] Univ Cagliari, Dept Elect & Elect Engn, I-09123 Cagliari, Italy
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Modulation; Perturbation methods; Receivers; Training; Smoothing methods; Radio transmitters; Noise measurement; Deep learning; adversarial examples; radio modulation classification; neural rejection; label smoothing;
DOI
10.1109/LWC.2021.3083099
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep learning algorithms have been shown to be powerful in many communication network design problems, including automatic modulation classification. However, they are vulnerable to carefully crafted attacks called adversarial examples. Hence, the reliance of wireless networks on deep learning algorithms poses a serious threat to their security and operation. In this letter, we propose for the first time a countermeasure against adversarial examples in modulation classification. Our countermeasure is based on a neural rejection technique, augmented by label smoothing and Gaussian noise injection, which detects and rejects adversarial examples with high accuracy. Our results demonstrate that the proposed countermeasure can protect deep-learning based modulation classification systems against adversarial examples.
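The three ingredients named in the abstract can be illustrated in isolation. Below is a minimal NumPy sketch, not the authors' implementation: the smoothing factor, the SNR used for noise injection, and the 0.9 rejection threshold are illustrative assumptions, and the classifier producing the softmax probabilities is left abstract.

```python
import numpy as np

def smooth_labels(one_hot, epsilon=0.1):
    """Label smoothing: soften one-hot targets toward the uniform
    distribution, which discourages overconfident softmax outputs."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / k

def inject_gaussian_noise(iq_samples, snr_db=20.0, rng=None):
    """Gaussian noise injection: augment I/Q training samples with
    additive white Gaussian noise at a chosen (assumed) SNR."""
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(iq_samples ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=iq_samples.shape)
    return iq_samples + noise

def classify_with_rejection(probs, threshold=0.9):
    """Neural rejection: accept the top class only if its softmax
    confidence clears the threshold; otherwise reject (return -1).
    Adversarial inputs tend to land in low-confidence regions."""
    if probs.max() >= threshold:
        return int(probs.argmax())
    return -1  # rejected as a suspected adversarial example
```

For example, a confident prediction `[0.95, 0.03, 0.02]` is accepted as class 0, while an ambiguous `[0.5, 0.3, 0.2]` is rejected; the combination of label smoothing and noise augmentation during training is what makes clean and adversarial inputs separable by such a confidence threshold.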
Pages: 1830-1834 (5 pages)