A knowledge distillation strategy for enhancing the adversarial robustness of lightweight automatic modulation classification models

Cited: 0
Authors
Xu, Fanghao [1 ]
Wang, Chao [1 ]
Liang, Jiakai [1 ]
Zuo, Chenyang [1 ]
Yue, Keqiang [1 ,2 ]
Li, Wenjun [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Zhejiang Integrated Circuits & Intelligent Hardware, Hangzhou, Peoples R China
[2] Hangzhou Dianzi Univ, Hangzhou 310018, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
cognitive radio; wireless channels; SIGNAL;
DOI
10.1049/cmu2.12793
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Automatic modulation classification models based on deep learning are at risk of being disrupted by adversarial attacks, in which an attacker adds a carefully crafted adversarial perturbation to the transmitted signal so that the classification model misclassifies the received signal. Driven by the requirements of efficient computation and edge deployment, a lightweight automatic modulation classification model is proposed. Because the lightweight model is more susceptible to adversarial attacks, and because adversarial training of the lightweight model alone fails to achieve the desired results, an adversarial attack defense system for the lightweight automatic modulation classification model is further proposed to enhance its robustness under adversarial attacks. The defense method transfers adversarial robustness from a trained large automatic modulation classification model to the lightweight model through adversarial robust distillation. In white-box attack scenarios, the proposed method exhibits better adversarial robustness than current defense techniques for feature-fusion-based automatic modulation classification models.
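The abstract describes adversarial robust distillation only at a high level, so the following is a minimal, hypothetical Python (PyTorch) sketch of how such a teacher-to-student robustness transfer is commonly set up: a white-box PGD attack is crafted against the lightweight student on I/Q signal batches, and the student is trained to match the robust teacher's clean-signal predictions on those adversarial inputs. The function names and hyperparameters (eps, step, iters, T, alpha) are illustrative assumptions, not values taken from the paper.

    # Hypothetical sketch of adversarial robust distillation for AMC models.
    # Hyperparameters (eps, step, iters, T, alpha) are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=0.05, step=0.01, iters=10):
        """Craft a white-box L-inf PGD perturbation of an I/Q signal batch x."""
        delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(iters):
            loss = F.cross_entropy(model(x + delta), y)
            grad = torch.autograd.grad(loss, delta)[0]
            with torch.no_grad():
                delta += step * grad.sign()
                delta.clamp_(-eps, eps)
        return (x + delta).detach()

    def robust_distillation_loss(student, teacher, x, y, T=4.0, alpha=0.9):
        """Distill the teacher's clean-signal predictions onto the student's
        predictions for adversarial inputs, plus a small clean supervised term."""
        x_adv = pgd_attack(student, x, y)          # attack the lightweight student
        with torch.no_grad():
            teacher_logits = teacher(x)            # large teacher sees the clean signal
        kd = F.kl_div(F.log_softmax(student(x_adv) / T, dim=1),
                      F.softmax(teacher_logits / T, dim=1),
                      reduction="batchmean") * (T * T)
        ce = F.cross_entropy(student(x), y)
        return alpha * kd + (1 - alpha) * ce

In a training loop this loss would be backpropagated through the student only; the large teacher, assumed to have been adversarially trained beforehand, stays frozen.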
Pages: 827-845
Page count: 19
Related Papers
50 records in total
  • [31] Automatic Segmentation using Knowledge Distillation with Ensemble Models (ASKDEM)
    Buschiazzo, Anthony
    Russell, Mason
    Osteen, Philip
    Uplinger, James
    UNMANNED SYSTEMS TECHNOLOGY XXVI, 2024, 13055
  • [32] Frequency-Constrained Iterative Adversarial Attacks for Automatic Modulation Classification
    Chen, Yigong
    Qiao, Xiaoqiang
    Zhang, Jiang
    Zhang, Tao
    Du, Yihang
    IEEE COMMUNICATIONS LETTERS, 2024, 28 (12) : 2734 - 2738
  • [33] Adversarial Transfer Learning for Deep Learning Based Automatic Modulation Classification
    Bu, Ke
    He, Yuan
    Jing, Xiaojun
    Han, Jindong
    IEEE SIGNAL PROCESSING LETTERS, 2020, 27 : 880 - 884
  • [34] ClST: A Convolutional Transformer Framework for Automatic Modulation Recognition by Knowledge Distillation
    Hou, Dongbin
    Li, Lixin
    Lin, Wensheng
    Liang, Junli
    Han, Zhu
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (07) : 8013 - 8028
  • [35] Knowledge distillation approach for skin cancer classification on lightweight deep learning model
    Saha, Suman
    Hemal, Md. Moniruzzaman
    Eidmum, Md. Zunead Abedin
    Mridha, Muhammad Firoz
    Healthcare Technology Letters, 2025, 12 (01)
  • [36] A Lightweight Framework With Knowledge Distillation for Zero-Shot Mars Scene Classification
    Tan, Xiaomeng
    Xi, Bobo
    Xu, Haitao
    Li, Jiaojiao
    Li, Yunsong
    Xue, Changbin
    Chanussot, Jocelyn
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [37] A Lightweight Convolution Network with Self-Knowledge Distillation for Hyperspectral Image Classification
    Xu, Hao
    Cao, Guo
    Deng, Lindiao
    Ding, Lanwei
    Xu, Ling
    Pan, Qikun
    Shang, Yanfeng
    FOURTEENTH INTERNATIONAL CONFERENCE ON GRAPHICS AND IMAGE PROCESSING, ICGIP 2022, 2022, 12705
  • [38] Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning
    Mumuni, Fuseini
    Mumuni, Alhassan
    COGNITIVE SYSTEMS RESEARCH, 2024, 84
  • [39] A Novel Lightweight Grouped Gated Recurrent Unit for Automatic Modulation Classification
    Hu, Xin
    Gao, Gujiuxiang
    Li, Boyan
    Wang, Weidong
    Ghannouchi, Fadhel M.
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2024, 13 (08) : 2135 - 2139
  • [40] Lightweight Network and Model Aggregation for Automatic Modulation Classification in Wireless Communications
    Fu, Xue
    Gui, Guan
    Wang, Yu
    Ohtsuki, Tomoaki
    Adebisi, Bamidele
    Gacanin, Haris
    Adachi, Fumiyuki
    2021 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2021