FedPGT: Prototype-based Federated Global Adversarial Training against Adversarial Attack

Cited by: 0
Authors
Xu, ZiRong [1 ]
Lai, WeiMin [1 ]
Yan, Qiao [1 ]
Affiliation
[1] Shenzhen Univ, Sch Comp & Software, Shenzhen 518060, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated Learning; Adversarial Robustness; Adversarial Training;
DOI
10.1109/CSCWD61410.2024.10580613
CLC Classification
TP39 [Computer Applications];
Subject Classification Codes
081203 ; 0835 ;
Abstract
Federated learning, a distributed machine learning paradigm, is designed to address critical concerns around data silos and user data privacy breaches. However, it faces a significant challenge in the form of adversarial attacks. Recent research has attempted to mitigate this issue through techniques such as local adversarial training and model distillation. Nevertheless, these approaches are susceptible to real-world variations, ultimately leading to compromised adversarial robustness. In this paper, we propose FedPGT, an approach that employs clustering techniques to assess the convergence of the model and leverages a prototype-based method to guide high-quality adversarial training. FedPGT alleviates data heterogeneity in federated learning and enhances the model's adversarial robustness. Experimental results across three distinct datasets (MNIST, FMNIST, and EMNIST-Digits) demonstrate the efficacy of FedPGT.
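The abstract describes client-side class prototypes aggregated by the server, with clustering used to judge model convergence. As a hedged illustration only (the function names, the averaging scheme, and the drift-based convergence proxy are assumptions for exposition, not the paper's actual FedPGT algorithm), a minimal sketch of computing prototypes on clients, averaging them at the server, and tracking round-to-round drift might look like:

```python
import math
from collections import defaultdict

def class_prototypes(features, labels):
    """Per-class mean feature vector (the class 'prototype'), computed
    locally on a client from its own (feature, label) pairs."""
    sums, counts = {}, defaultdict(int)
    for x, y in zip(features, labels):
        if y not in sums:
            sums[y] = list(x)
        else:
            sums[y] = [a + b for a, b in zip(sums[y], x)]
        counts[y] += 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def aggregate_prototypes(client_protos):
    """Server-side averaging of client prototypes into global prototypes
    (FedAvg-style; assumes every client reports every class it holds)."""
    agg = {}
    for c in client_protos[0]:
        vecs = [p[c] for p in client_protos if c in p]
        agg[c] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return agg

def prototype_drift(prev, curr):
    """Mean Euclidean distance between consecutive rounds' global
    prototypes -- a simple proxy for global-model convergence."""
    dists = [math.dist(prev[c], curr[c]) for c in curr]
    return sum(dists) / len(dists)
```

In such a scheme, the server could watch `prototype_drift` shrink across rounds and, once the global model is judged sufficiently converged, use the stable global prototypes to guide adversarial-example generation on clients; the actual convergence criterion and guidance mechanism in FedPGT are given in the paper itself.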
Pages: 864-869
Number of pages: 6
Related Papers
50 records in total
  • [41] Label noise analysis meets adversarial training: A defense against label poisoning in federated learning
    Hallaji, Ehsan
    Razavi-Far, Roozbeh
    Saif, Mehrdad
    Herrera-Viedma, Enrique
    KNOWLEDGE-BASED SYSTEMS, 2023, 266
  • [42] LAS-AT: Adversarial Training with Learnable Attack Strategy
    Jia, Xiaojun
    Zhang, Yong
    Wu, Baoyuan
    Ma, Ke
    Wang, Jue
    Cao, Xiaochun
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 13388 - 13398
  • [43] On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification
    Park, Sanglee
    So, Jungmin
    APPLIED SCIENCES-BASEL, 2020, 10 (22): 1 - 16
  • [44] Backdoor attack and defense in federated generative adversarial network-based medical image synthesis
    Jin, Ruinan
    Li, Xiaoxiao
    MEDICAL IMAGE ANALYSIS, 2023, 90
  • [45] Privacy Leakage of Adversarial Training Models in Federated Learning Systems
    Zhang, Jingyang
    Chen, Yiran
    Li, Hai
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 107 - 113
  • [46] Bidirectional Corrective Model-Contrastive Federated Adversarial Training
    Zhang, Yuyue
    Shi, Yicong
    Zhao, Xiaoli
    ELECTRONICS, 2024, 13 (18)
  • [47] ENSEMBLE ADVERSARIAL TRAINING BASED DEFENSE AGAINST ADVERSARIAL ATTACKS FOR MACHINE LEARNING-BASED INTRUSION DETECTION SYSTEM
    Haroon, M. S.
    Ali, H. M.
    NEURAL NETWORK WORLD, 2023, 33 (05) : 317 - 336
  • [48] A robust adversarial attack against speech recognition with UAP
    Qin, Ziheng
    Zhang, Xianglong
    Li, Shujun
    HIGH-CONFIDENCE COMPUTING, 2023, 3 (01)
  • [49] EnsembleDet: ensembling against adversarial attack on deepfake detection
    Dutta, Himanshu
    Pandey, Aditya
    Bilgaiyan, Saurabh
    JOURNAL OF ELECTRONIC IMAGING, 2021, 30 (06)
  • [50] UNIVERSAL ADVERSARIAL ATTACK AGAINST SPEAKER RECOGNITION MODELS
    Hanina, Shoham
    Zolfi, Alon
    Elovici, Yuval
    Shabtai, Asaf
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 4860 - 4864