SARS: A Personalized Federated Learning Framework Towards Fairness and Robustness against Backdoor Attacks

Cited: 0
Authors
Zhang, Webin [1 ]
Li, Youpeng [1 ]
An, Lingling [2 ]
Wan, Bo [2 ]
Wang, Xuyu [3 ]
Affiliations
[1] Xidian Univ, Guangzhou Inst Technol, Guangzhou, Peoples R China
[2] Xidian Univ, Sch Comp Sci & Technol, Xian, Peoples R China
[3] Florida Int Univ, Knight Fdn, Sch Comp & Informat Sci, Miami, FL 33199 USA
Keywords
Federated Learning; Backdoor Attack; Attention Distillation; Fairness;
DOI
10.1145/3678571
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Discipline classification code
0812;
Abstract
Federated Learning (FL), an emerging distributed machine learning framework in which clients collaboratively train a global model by sharing local knowledge without disclosing their private data, is vulnerable to backdoor model poisoning attacks. By compromising some clients, an attacker manipulates their local training process and uploads malicious gradient updates that poison the global model, causing the poisoned model to behave abnormally on the sub-tasks specified by the attacker. Prior research has proposed various strategies to mitigate backdoor attacks. However, existing FL backdoor defenses degrade the fairness of the FL system, while fair FL methods may not be robust. Motivated by these concerns, we propose Self-Awareness ReviSion (SARS), a personalized FL framework designed to resist backdoor attacks while preserving the fairness of the FL system. SARS consists of two key modules: adaptation feature extraction and knowledge mapping. In the adaptation feature extraction module, benign users adaptively extract clean global knowledge by recognizing and revising the backdoor knowledge transferred from the global model. Building on this module, the knowledge mapping module ensures that clean sample features are correctly mapped to their labels. Extensive experiments show that SARS defends against backdoor attacks and improves the fairness of the FL system compared with several state-of-the-art FL backdoor defenses and fair FL methods, including FedAvg, Ditto, WeakDP, FoolsGold, and FLAME.
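For intuition only, the sketch below illustrates what an attention-distillation-based local update in a personalized FL client could look like, in the spirit of the abstract and the "Attention Distillation" keyword. It is not the authors' SARS algorithm: the model interface (a forward pass returning logits plus a feature map), the self-awareness gate, and the names personal_model, global_model, beta, lr are hypothetical assumptions introduced here for illustration.

# Hypothetical sketch: attention-distillation local update for a personalized FL client.
# All names and the gating rule are illustrative, not taken from the SARS paper.
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Spatial attention of a conv feature map (B, C, H, W):
    channel-wise energy, flattened and L2-normalized per sample."""
    att = feat.pow(2).mean(dim=1).flatten(1)      # (B, H*W)
    return F.normalize(att, dim=1)

def local_update(personal_model, global_model, loader, device,
                 beta=0.5, lr=0.01, epochs=1):
    """One personalized local round: cross-entropy on local data plus an
    attention-distillation term pulling the personal model's attention toward
    the frozen global model's attention. A per-sample gate down-weights the
    distillation when the two attentions disagree strongly, a stand-in for the
    'self-awareness / self-revision' of backdoor knowledge described in the abstract."""
    global_model.eval()
    opt = torch.optim.SGD(personal_model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            logits, feat_p = personal_model(x)    # assumed to return (logits, feature map)
            with torch.no_grad():
                _, feat_g = global_model(x)
            a_p, a_g = attention_map(feat_p), attention_map(feat_g)
            gap = (a_p - a_g).pow(2).sum(dim=1)   # (B,) attention disagreement
            gate = torch.exp(-gap).detach()       # trust global knowledge less when it looks inconsistent
            loss = F.cross_entropy(logits, y) + beta * (gate * gap).mean()
            opt.zero_grad(); loss.backward(); opt.step()
    return personal_model.state_dict()

Under these assumptions, a benign client would run local_update each round on its private loader and keep the returned personalized weights locally, while only the usual (non-poisoned) update is shared with the server.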
Pages: 24