Efficient, Private and Robust Federated Learning

Cited by: 25
|
Authors
Hao, Meng [1 ]
Li, Hongwei [1 ]
Xu, Guowen [2 ]
Chen, Hanxiao [1 ]
Zhang, Tianwei [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Chengdu, Sichuan, Peoples R China
[2] Nanyang Technol Univ, Singapore, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Privacy protection; Byzantine robustness;
DOI
10.1145/3485832.3488014
CLC number
TP39 [Computer Applications];
Discipline code
081203 ; 0835 ;
Abstract
Federated learning (FL) has demonstrated tremendous success in various mission-critical, large-scale scenarios. However, this promising distributed learning paradigm remains vulnerable to privacy inference and Byzantine attacks. The former aim to infer private information about the participants involved in training, while the latter aim to destroy the integrity of the trained model. To mitigate these two issues, a few recent works have explored unified solutions that combine generic secure computation techniques with common Byzantine-robust aggregation rules, but they have two major limitations: 1) they are impractical due to efficiency bottlenecks, and 2) they remain vulnerable to various types of attacks because their defense models are incomplete. To address these problems, we present SecureFL, an efficient, private, and Byzantine-robust FL framework. SecureFL follows the state-of-the-art Byzantine-robust FL method FLTrust (NDSS'21), which performs a comprehensive Byzantine defense by normalizing the magnitude of updates and measuring their directional similarity, and adapts it to the privacy-preserving setting. More importantly, we carefully customize a series of cryptographic components. First, we design a crypto-friendly validity-checking protocol that functionally replaces the normalization operation in FLTrust, and devise tailored cryptographic protocols on top of it. These optimizations halve the communication and computation costs without sacrificing robustness or privacy protection. Second, we develop a novel preprocessing technique for costly matrix multiplication, with which the directional similarity measurement can be evaluated securely with negligible computation overhead and zero communication cost.
Extensive evaluations on three real-world datasets and various neural network architectures demonstrate that SecureFL outperforms prior art by up to two orders of magnitude in efficiency while providing state-of-the-art Byzantine robustness.
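The FLTrust-style defense the abstract refers to can be illustrated in plaintext (SecureFL evaluates the equivalent logic under cryptographic protection). The sketch below is an assumption-laden illustration, not SecureFL's actual protocol: each client update is scored by its cosine similarity to a trusted server update computed on a small clean root dataset, negative scores are clipped to zero (ReLU), and each update is rescaled to the server update's magnitude before a trust-weighted average. Function and variable names are hypothetical.

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """Plaintext sketch of FLTrust-style robust aggregation.

    client_updates: list of 1-D NumPy arrays (flattened model updates).
    server_update:  trusted update computed on a clean root dataset.
    """
    g0 = server_update
    g0_norm = np.linalg.norm(g0)
    scores, rescaled = [], []
    for g in client_updates:
        # Directional similarity to the trusted update, ReLU-clipped:
        # updates pointing away from the server direction get zero weight.
        cos = g0 @ g / (g0_norm * np.linalg.norm(g) + 1e-12)
        scores.append(max(cos, 0.0))
        # Magnitude normalization: rescale every update to the server
        # update's norm so no single client can dominate by scaling up.
        rescaled.append(g * (g0_norm / (np.linalg.norm(g) + 1e-12)))
    total = sum(scores)
    if total == 0.0:
        return np.zeros_like(g0)
    return sum(s * g for s, g in zip(scores, rescaled)) / total
```

For example, a malicious update pointing opposite to the server update receives a trust score of zero and is excluded from the average, while a benign update aligned with it is kept at the server update's magnitude.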
Pages: 45-60
Page count: 16
Related papers
50 records total
  • [21] Secure and Efficient Federated Learning for Robust Intrusion Detection in IoT Networks
    Abou El Houda, Zakaria
    Moudoud, Hajar
    Khoukhi, Lyes
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 2668 - 2673
  • [22] Byzantine-Robust and Communication-Efficient Personalized Federated Learning
    Zhang, Jiaojiao
    He, Xuechao
    Huang, Yue
    Ling, Qing
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2025, 73 : 26 - 39
  • [23] Efficient and Privacy-Preserving Byzantine-robust Federated Learning
    Luan, Shijie
    Lu, Xiang
    Zhang, Zhuangzhuang
    Chang, Guangsheng
    Guo, Yunchuan
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 2202 - 2208
  • [24] SEAR: Secure and Efficient Aggregation for Byzantine-Robust Federated Learning
    Zhao, Lingchen
    Jiang, Jianlin
    Feng, Bo
    Wang, Qian
    Shen, Chao
    Li, Qi
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2022, 19 (05) : 3329 - 3342
  • [25] FedDMC: Efficient and Robust Federated Learning via Detecting Malicious Clients
    Mu, Xutong
    Cheng, Ke
    Shen, Yulong
    Li, Xiaoxiao
    Chang, Zhao
    Zhang, Tao
    Ma, Xindi
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (06) : 5259 - 5274
  • [26] An Efficient and Multi-Private Key Secure Aggregation Scheme for Federated Learning
    Yang, Xue
    Liu, Zifeng
    Tang, Xiaohu
    Lu, Rongxing
    Liu, Bo
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (05) : 1998 - 2011
  • [27] Robust Aggregation for Federated Learning
    Pillutla, Krishna
    Kakade, Sham M.
    Harchaoui, Zaid
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2022, 70 : 1142 - 1154
  • [28] Private Federated Submodel Learning with Sparsification
    Vithana, Sajani
    Ulukus, Sennur
    2022 IEEE INFORMATION THEORY WORKSHOP (ITW), 2022, : 410 - 415
  • [29] FederBoost: Private Federated Learning for GBDT
    Tian, Zhihua
    Zhang, Rui
    Hou, Xiaoyang
    Lyu, Lingjuan
    Zhang, Tianyi
    Liu, Jian
    Ren, Kui
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (03) : 1274 - 1285
  • [30] A Robust and Efficient Federated Learning Algorithm Against Adaptive Model Poisoning Attacks
    Yang, Han
    Gu, Dongbing
    He, Jianhua
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (09) : 16289 - 16302