A knowledge distillation strategy for enhancing the adversarial robustness of lightweight automatic modulation classification models

Citations: 0
Authors:
Xu, Fanghao [1 ]
Wang, Chao [1 ]
Liang, Jiakai [1 ]
Zuo, Chenyang [1 ]
Yue, Keqiang [1 ,2 ]
Li, Wenjun [1 ]
Affiliations:
[1] Hangzhou Dianzi Univ, Zhejiang Integrated Circuits & Intelligent Hardwar, Hangzhou, Peoples R China
[2] Hangzhou Dianzi Univ, Hangzhou 310018, Peoples R China
Funding:
National Natural Science Foundation of China
Keywords:
cognitive radio; wireless channels; SIGNAL;
DOI:
10.1049/cmu2.12793
Chinese Library Classification:
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline codes:
0808; 0809
Abstract:
Automatic modulation classification models based on deep learning are vulnerable to adversarial attacks, in which an attacker causes the classification model to misclassify the received signal by adding carefully crafted adversarial perturbations to the transmitted signal. Motivated by the requirements of efficient computation and edge deployment, a lightweight automatic modulation classification model is proposed. Because the lightweight model is more susceptible to adversarial attacks, and because adversarial training of the lightweight model alone fails to achieve the desired robustness, an adversarial attack defense system for the lightweight model is further proposed to enhance its robustness under attack. The defense method transfers adversarial robustness from a trained large automatic modulation classification model to the lightweight model through adversarial robust distillation. In white-box attack scenarios, the proposed method exhibits better adversarial robustness than current defense techniques for feature-fusion-based automatic modulation classification models.
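The abstract describes transferring robustness from a large teacher to a lightweight student via adversarial robust distillation, but does not give the exact loss. A minimal sketch of one common formulation (ARD-style: the student is trained on PGD adversarial examples while matching the teacher's soft labels on clean inputs) is shown below; the function names, the PGD hyperparameters, and the weighting `beta` are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, step=0.007, steps=5):
    """Craft an L-infinity PGD adversarial example against `model`.
    No clamping to an input range is applied, since I/Q signal
    samples (unlike image pixels) are not bounded to [0, 1]."""
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign step, then projection back into the eps-ball.
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = x.detach() + (x_adv - x.detach()).clamp(-eps, eps)
    return x_adv.detach()

def adversarial_distillation_loss(student, teacher, x, y,
                                  temperature=4.0, beta=0.9):
    """ARD-style loss: KL between the student's output on adversarial
    inputs and the teacher's softened output on clean inputs, mixed
    with the ordinary cross-entropy on the adversarial inputs."""
    x_adv = pgd_attack(student, x, y)
    with torch.no_grad():
        t_logits = teacher(x)          # teacher sees clean signals
    s_logits = student(x_adv)          # student sees attacked signals
    kd = F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                  F.softmax(t_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(s_logits, y)
    return beta * kd + (1 - beta) * ce

# Usage with toy stand-in models (a real setup would use a trained
# robust teacher and a lightweight CNN student on I/Q signal tensors):
torch.manual_seed(0)
student = nn.Linear(8, 4)
teacher = nn.Linear(8, 4)
x = torch.randn(16, 8)
y = torch.randint(0, 4, (16,))
loss = adversarial_distillation_loss(student, teacher, x, y)
```

The design choice here is that the teacher is queried on the clean signal, so the student learns the teacher's robust decision behaviour without the teacher needing to see the student's adversarial examples; other variants attack the teacher as well.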
Pages: 827-845 (19 pages)
Related Papers (50 total):
  • [1] A Pruning Method Combined with Resilient Training to Improve the Adversarial Robustness of Automatic Modulation Classification Models
    Han, Chao
    Wang, Linyuan
    Li, Dongyang
    Cui, Weijia
    Yan, Bin
    MOBILE NETWORKS & APPLICATIONS, 2024,
  • [2] Automatic Modulation Classification with Neural Networks via Knowledge Distillation
    Wang, Shuai
    Liu, Chunwu
    ELECTRONICS, 2022, 11 (19)
  • [3] Learn to Defend: Adversarial Multi-Distillation for Automatic Modulation Recognition Models
    Chen, Zhuangzhi
    Wang, Zhangwei
    Xu, Dongwei
    Zhu, Jiawei
    Shen, Weiguo
    Zheng, Shilian
    Xuan, Qi
    Yang, Xiaoniu
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 3690 - 3702
  • [4] Enhancing Adversarial Robustness in Low-Label Regime via Adaptively Weighted Regularization and Knowledge Distillation
    Yang, Dongyoon
    Kong, Insung
    Kim, Yongdai
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 4529 - 4538
  • [5] Diverse Knowledge Distillation (DKD): A Solution for Improving The Robustness of Ensemble Models Against Adversarial Attacks
    Mirzaeian, Ali
    Kosecka, Jana
    Homayoun, Houman
    Mohsenin, Tinoosh
    Sasan, Avesta
    PROCEEDINGS OF THE 2021 TWENTY SECOND INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN (ISQED 2021), 2021, : 319 - 324
  • [6] AdHierNet: Enhancing Adversarial Robustness and Interpretability in Text Classification
    Chen, Kai
    Deng, Yingping
    Chen, Qingcai
    Li, Dongfeng
    2024 6TH INTERNATIONAL CONFERENCE ON NATURAL LANGUAGE PROCESSING, ICNLP 2024, 2024, : 41 - 45
  • [7] Ensemble Learning of Lightweight Deep Learning Models Using Knowledge Distillation for Image Classification
    Kang, Jaeyong
    Gwak, Jeonghwan
    MATHEMATICS, 2020, 8 (10)
  • [8] Enhancing the adversarial robustness in medical image classification: exploring adversarial machine learning with vision transformers-based models
    Gulsoy, Elif Kanca
    Ayas, Selen
    Kablan, Elif Baykal
    Ekinci, Murat
    NEURAL COMPUTING AND APPLICATIONS, 2025, 37 (12) : 7971 - 7989
  • [9] A Lightweight CNN Architecture for Automatic Modulation Classification
    Wang, Zhongyong
    Sun, Dongzhe
    Gong, Kexian
    Wang, Wei
    Sun, Peng
    ELECTRONICS, 2021, 10 (21)
  • [10] ENHANCING MODEL ROBUSTNESS BY INCORPORATING ADVERSARIAL KNOWLEDGE INTO SEMANTIC REPRESENTATION
    Li, Jinfeng
    Du, Tianyu
    Liu, Xiangyu
    Zhang, Rong
    Xue, Hui
    Ji, Shouling
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7708 - 7712