SARS: A Personalized Federated Learning Framework Towards Fairness and Robustness against Backdoor Attacks

Cited by: 0
Authors
Zhang, Webin [1]
Li, Youpeng [1]
An, Lingling [2]
Wan, Bo [2]
Wang, Xuyu [3]
Affiliations
[1] Xidian Univ, Guangzhou Inst Technol, Guangzhou, Peoples R China
[2] Xidian Univ, Sch Comp Sci & Technol, Xian, Peoples R China
[3] Florida Int Univ, Knight Fdn, Sch Comp & Informat Sci, Miami, FL 33199 USA
Keywords
Federated Learning; Backdoor Attack; Attention Distillation; Fairness;
DOI
10.1145/3678571
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Federated Learning (FL), an emerging distributed machine learning framework in which clients collaboratively train a global model by sharing local knowledge without disclosing their private data, is vulnerable to backdoor model poisoning attacks. By compromising some clients, the attacker manipulates their local training processes and uploads malicious gradient updates to poison the global model, causing the poisoned model to behave abnormally on sub-tasks specified by the attacker. Prior research has proposed various strategies to mitigate backdoor attacks. However, existing FL backdoor defenses affect the fairness of the FL system, while fair FL methods may not be robust. Motivated by these concerns, in this paper we propose Self-Awareness ReviSion (SARS), a personalized FL framework designed to resist backdoor attacks while ensuring the fairness of the FL system. SARS consists of two key modules: adaptation feature extraction and knowledge mapping. In the adaptation feature extraction module, benign users adaptively extract clean global knowledge through self-awareness and self-revision of the backdoor knowledge transferred from the global model. Building on this module, the knowledge mapping module ensures the correct mapping between clean sample features and labels. Extensive experiments show that SARS defends against backdoor attacks and improves the fairness of the FL system compared with several state-of-the-art FL backdoor defenses and fair FL methods, including FedAvg, Ditto, WeakDP, FoolsGold, and FLAME.
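The backdoor model poisoning threat described above can be illustrated with a toy experiment. The sketch below is not the SARS algorithm; it only shows, under illustrative assumptions (random benign updates, a single attacker scaling a backdoor direction), how a scaled malicious update dominates plain FedAvg averaging and how per-client norm clipping (a WeakDP-style defense, one of the baselines named in the abstract) limits that influence. All names and constants here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients = 10, 5

# Benign clients submit small, similar updates.
benign = [rng.normal(0.0, 0.1, dim) for _ in range(n_clients - 1)]

# The attacker crafts a backdoor direction and scales it up
# ("model replacement") so it survives averaging.
backdoor_dir = np.ones(dim)
malicious = 20.0 * backdoor_dir

def fedavg(updates):
    """Plain FedAvg: unweighted mean of client updates."""
    return np.mean(updates, axis=0)

def clipped_fedavg(updates, clip=1.0):
    """Aggregate after clipping each update to a maximum L2 norm."""
    clipped = [u * min(1.0, clip / np.linalg.norm(u)) for u in updates]
    return np.mean(clipped, axis=0)

updates = benign + [malicious]
plain = fedavg(updates)
defended = clipped_fedavg(updates)

# Component of each aggregate along the (unit) backdoor direction.
unit = backdoor_dir / np.linalg.norm(backdoor_dir)
poison_plain = plain @ unit
poison_def = defended @ unit
print(f"backdoor component, plain FedAvg:   {poison_plain:.3f}")
print(f"backdoor component, clipped FedAvg: {poison_def:.3f}")
```

Clipping bounds every client's contribution equally, so the attacker can no longer amplify its update; this is why the abstract contrasts such robustness-oriented defenses with fairness, since the same clipping also shrinks legitimate updates from clients with atypical data.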
Pages: 24