Unsupervised Learning-Based Spectrum Sensing Algorithm with Defending Adversarial Attacks

Times Cited: 0
Authors
Li, Xinyu [1 ]
Dai, Shaogang [1 ]
Zhao, Zhijin [1 ,2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Commun Engn, Hangzhou 310020, Peoples R China
[2] State Key Lab Informat Control Technol Commun Syst, Jiaxing 314000, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, No. 16
Funding
National Natural Science Foundation of China;
Keywords
spectrum sensing; security; adversarial attacks and defense; unsupervised learning; contrast loss; reconstruction loss; COGNITIVE RADIO NETWORKS;
DOI
10.3390/app13169101
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Although spectrum sensing algorithms based on deep learning have achieved remarkable detection performance, their sensing performance is easily degraded by adversarial attacks owing to the fragility of neural networks; even slight adversarial perturbations cause a sharp deterioration in detection performance. To strengthen the model's defense against such attacks, an unsupervised learning-based spectrum sensing algorithm with defending adversarial attacks (USDAA) is proposed, which is divided into two stages: adversarial pre-training and fine-tuning. In the adversarial pre-training stage, encoders extract the features of adversarial samples and clean samples, decoders reconstruct the samples, and a contrastive loss and a reconstruction loss are designed to optimize the network parameters. This reduces the dependence of model training on labeled samples and improves the robustness of the model to attack perturbations. In the fine-tuning stage, a small number of adversarial samples are used to fine-tune the pre-trained encoder and the classification layer, yielding the spectrum sensing defense model. Experimental results show that USDAA outperforms the denoising autoencoder and distillation defense algorithm (DAED) against fast gradient sign method (FGSM) and projected gradient descent (PGD) adversarial attacks, while using only 11% of the labeled samples required by DAED. When the false alarm probability is 0.1 and the SNR is -10 dB, the detection probability of USDAA on FGSM and PGD attack samples with random perturbations is above 88%, whereas the detection probability of DAED on both attack types is below 69%. In addition, USDAA is more robust to attacks with unknown perturbations.
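The two-stage training described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes PyTorch, illustrative layer sizes, a generic 128-dimensional input vector standing in for the sampled signal, equal loss weighting, and a simple stand-in for FGSM/PGD perturbations. It only shows the shape of the idea: unlabeled adversarial pre-training with a contrastive loss between clean and adversarial features plus a reconstruction loss, followed by supervised fine-tuning of the encoder and a classification layer on a small labeled adversarial set.

```python
# Minimal sketch (not the authors' code) of the USDAA training idea:
# Stage 1: unsupervised adversarial pre-training with contrastive + reconstruction losses.
# Stage 2: supervised fine-tuning of the encoder plus a classification layer.
# Layer sizes, the 128-dim input, perturbation strength, and loss weighting are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=128, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, feat_dim=32, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, z):
        return self.net(z)

def pretrain_step(enc, dec, x_clean, x_adv, opt, temperature=0.5):
    """Unlabeled step: pull clean/adversarial features of the same sample together
    (NT-Xent-style contrastive loss) and reconstruct both views (reconstruction loss)."""
    z_c = F.normalize(enc(x_clean), dim=1)
    z_a = F.normalize(enc(x_adv), dim=1)
    logits = z_c @ z_a.t() / temperature          # similarity of every clean/adversarial pair
    targets = torch.arange(x_clean.size(0))       # positive pair: same sample index
    contrast = F.cross_entropy(logits, targets)
    recon = F.mse_loss(dec(enc(x_clean)), x_clean) + F.mse_loss(dec(enc(x_adv)), x_adv)
    loss = contrast + recon                       # equal weighting is an assumption
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def finetune_step(enc, head, x_adv, y, opt):
    """Labeled step: fine-tune the pre-trained encoder and the classification layer
    on a small set of labeled adversarial samples."""
    loss = F.cross_entropy(head(enc(x_adv)), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

if __name__ == "__main__":
    enc, dec, head = Encoder(), Decoder(), nn.Linear(32, 2)  # 2 classes: signal present / absent
    x = torch.randn(16, 128)                                  # placeholder received-signal vectors
    x_adv = x + 0.05 * torch.sign(torch.randn_like(x))        # stand-in for FGSM/PGD perturbations
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    print("pretrain loss:", pretrain_step(enc, dec, x, x_adv, opt))
    y = torch.randint(0, 2, (16,))
    opt_ft = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-4)
    print("finetune loss:", finetune_step(enc, head, x_adv, y, opt_ft))
```

Note that in the paper's setup only the fine-tuning stage consumes labels; the pre-training stage is fully unsupervised, which is what keeps the labeled-sample budget at roughly 11% of DAED's.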
Pages: 15