SupRTE: Suppressing Backdoor Injection in Federated Learning via Robust Trust Evaluation

Cited by: 1
Authors
Huang, Wenkai [1 ]
Li, Gaolei [1 ]
Yi, Xiaoyu [1 ]
Li, Jianhua [1 ]
Zhao, Chengcheng [1 ]
Yin, Ying [1 ]
Affiliation
[1] Shanghai Jiao Tong University, School of Electronic Information and Electrical Engineering, Shanghai 200240, China
Keywords
Servers; Training; Intelligent systems; Feature extraction; Security; Federated learning; Task analysis
DOI
10.1109/MIS.2024.3392334
Chinese Library Classification (CLC) number
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
This article proposes a novel scheme, SupRTE, to suppress backdoor injection in federated learning via robust trust evaluation, which effectively prevents malicious updates from infiltrating the model aggregation process. The robust trust evaluation process in SupRTE consists of two components: 1) a behavior representation extractor, which builds an individual profile for each client from multidimensional information, and 2) a trust scorer, which quantifies the discrepancies between malicious and benign clients as trust scores using grading and clustering strategies. Based on these trust scores, SupRTE dynamically adjusts the aggregation weight of each participating client to suppress malicious backdoor injection. Notably, SupRTE can be deployed directly on the server without requiring any auxiliary information and is highly adaptable to various non-independent and identically distributed (non-IID) scenarios. Extensive experiments are conducted on three datasets against two kinds of backdoor variants. The results demonstrate that SupRTE reduces the attack success rate to below 2% with minimal impact on main-task accuracy and outperforms state-of-the-art defense methods.
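The abstract only sketches the mechanism, but the core weighting idea can be illustrated concretely: once per-client trust scores are available (for example, from a trust scorer like the one described above), they replace the usual sample-count weights during server-side aggregation, so low-trust updates contribute little or nothing to the global model. The following Python sketch is illustrative only; the names trust_weighted_aggregate, trust_scores, and threshold are assumptions for this example, not the paper's implementation.

# Minimal sketch (not the paper's implementation) of trust-score-weighted
# aggregation: clients whose trust falls below an assumed cutoff are
# excluded, and the remaining updates are combined in proportion to
# their trust scores instead of their sample counts.
from typing import Dict, List

import numpy as np


def trust_weighted_aggregate(
    client_updates: List[Dict[str, np.ndarray]],  # per-client model updates, keyed by layer name
    trust_scores: List[float],                    # per-client trust scores, assumed in [0, 1]
    threshold: float = 0.5,                       # assumed cutoff for suspected malicious clients
) -> Dict[str, np.ndarray]:
    """Aggregate client updates with weights proportional to trust scores."""
    # Zero out clients whose trust score falls below the (assumed) threshold.
    weights = np.array([s if s >= threshold else 0.0 for s in trust_scores])
    if weights.sum() == 0:
        raise ValueError("All clients were rejected; cannot aggregate.")
    weights = weights / weights.sum()  # normalize to a convex combination

    aggregated: Dict[str, np.ndarray] = {}
    for name in client_updates[0]:
        # Weighted sum of each parameter tensor across the accepted clients.
        aggregated[name] = sum(
            w * update[name] for w, update in zip(weights, client_updates)
        )
    return aggregated

In use, the server would compute trust_scores each round (however its trust evaluation is defined) and call trust_weighted_aggregate in place of plain FedAvg averaging; only the weighting step is shown here.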
Pages: 66-77
Number of pages: 12
Related papers
50 records in total
  • [1] Fisher Calibration for Backdoor-Robust Heterogeneous Federated Learning
    Huang, Wenke
    Ye, Mang
    Shi, Zekun
    Du, Bo
    Tao, Dacheng
    COMPUTER VISION - ECCV 2024, PT XV, 2025, 15073 : 247 - 265
  • [2] FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
    Cao, Xiaoyu
    Fang, Minghong
    Liu, Jia
    Gong, Neil Zhenqiang
    28TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2021), 2021,
  • [3] Federated Learning Backdoor Attack Based on Frequency Domain Injection
    Liu, Jiawang
    Peng, Changgen
    Tan, Weijie
    Shi, Chenghui
    ENTROPY, 2024, 26 (02)
  • [4] Byzantine Robust Federated Learning Scheme Based on Backdoor Triggers
    Yang, Zheng
    Gu, Ke
    Zuo, Yiming
CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 79 (02): 2813 - 2831
  • [5] CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
    Xie, Chulin
    Chen, Minghao
    Chen, Pin-Yu
    Li, Bo
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [6] Identifying Backdoor Attacks in Federated Learning via Anomaly Detection
    Mi, Yuxi
    Sun, Yiheng
    Guan, Jihong
    Zhou, Shuigeng
    WEB AND BIG DATA, PT III, APWEB-WAIM 2023, 2024, 14333 : 111 - 126
  • [7] FLGT: label-flipping-robust federated learning via guiding trust
    Li, Hongjiao
    Shi, Zhenya
    Jin, Ming
    Yin, Anyang
    Zhao, Zhen
    KNOWLEDGE AND INFORMATION SYSTEMS, 2025,
  • [8] An adaptive robust defending algorithm against backdoor attacks in federated learning
    Wang, Yongkang
    Zhai, Di-Hua
    He, Yongping
    Xia, Yuanqing
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 143 : 118 - 131
  • [9] BaFFLe: Backdoor Detection via Feedback-based Federated Learning
    Andreina, Sebastien
    Marson, Giorgia Azzurra
    Moellering, Helen
    Karame, Ghassan
    2021 IEEE 41ST INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2021), 2021, : 852 - 863
  • [10] Privacy-Enhancing and Robust Backdoor Defense for Federated Learning on Heterogeneous Data
    Chen, Zekai
    Yu, Shengxing
    Fan, Mingyuan
    Liu, Ximeng
    Deng, Robert H.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 693 - 707