Efficient, Private and Robust Federated Learning

Cited by: 25
Authors
Hao, Meng [1 ]
Li, Hongwei [1 ]
Xu, Guowen [2 ]
Chen, Hanxiao [1 ]
Zhang, Tianwei [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Chengdu, Sichuan, Peoples R China
[2] Nanyang Technol Univ, Singapore, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Privacy protection; Byzantine robustness;
DOI
10.1145/3485832.3488014
CLC number
TP39 [Computer Applications];
Discipline codes
081203; 0835;
Abstract
Federated learning (FL) has demonstrated tremendous success in various mission-critical, large-scale scenarios. However, this promising distributed learning paradigm remains vulnerable to privacy inference and Byzantine attacks. The former aims to infer private information about the participants involved in training, while the latter aims to destroy the integrity of the trained model. To mitigate these two issues, a few recent works have explored unified solutions that combine generic secure computation techniques with common Byzantine-robust aggregation rules, but they have two major limitations: 1) they are impractical due to efficiency bottlenecks, and 2) they remain vulnerable to various types of attacks because their defense models are not comprehensive. To address these problems, we present SecureFL, an efficient, private, and Byzantine-robust FL framework. SecureFL follows the state-of-the-art Byzantine-robust FL method FLTrust (NDSS'21), which performs a comprehensive Byzantine defense by normalizing the magnitude of client updates and measuring their directional similarity, and adapts it to the privacy-preserving setting. More importantly, we carefully customize a series of cryptographic components. First, we design a crypto-friendly validity-checking protocol that functionally replaces the normalization operation in FLTrust, and we devise tailored cryptographic protocols on top of it. These optimizations cut the communication and computation costs in half without sacrificing robustness or privacy protection. Second, we develop a novel preprocessing technique for costly matrix multiplication, with which the directional similarity measurement can be evaluated securely with negligible computation overhead and zero communication cost.
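The FLTrust-style aggregation rule that the abstract builds on can be sketched in plain Python. This is a plaintext illustration of the publicly described FLTrust rule (trust score = ReLU of cosine similarity with the server's root update, plus magnitude normalization), not SecureFL's cryptographic protocol; the function names are ours.

```python
import math

def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def _norm(u):
    return math.sqrt(_dot(u, u))

def fltrust_aggregate(server_update, client_updates):
    """FLTrust-style robust aggregation (plaintext illustration).

    Each client update gets a trust score ReLU(cosine similarity with the
    server's root update) and is rescaled to the root update's magnitude;
    the global update is the trust-score-weighted average.
    """
    g0 = _norm(server_update)
    scores, scaled = [], []
    for g in client_updates:
        cos = _dot(g, server_update) / (_norm(g) * g0)
        scores.append(max(0.0, cos))  # ReLU: zero weight for opposing directions
        scaled.append([g0 / _norm(g) * x for x in g])  # magnitude normalization
    total = sum(scores)
    if total == 0.0:
        return [0.0] * len(server_update)  # no client deemed trustworthy this round
    return [sum(s * u[j] for s, u in zip(scores, scaled)) / total
            for j in range(len(server_update))]
```

Note how a malicious update pointing opposite to the server's root update receives a trust score of zero and thus no influence on the aggregate, which is the "comprehensive defense" property the abstract refers to.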
Extensive evaluations on three real-world datasets and various neural network architectures demonstrate that SecureFL outperforms the prior art by up to two orders of magnitude in efficiency while providing state-of-the-art Byzantine robustness.
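The offline/online split behind preprocessing for secure multiplication can be illustrated with a classic Beaver triple over additive secret shares. This is a textbook sketch of the general paradigm only; SecureFL's actual preprocessing technique is not detailed in this record, and the modulus and helper names here are illustrative.

```python
import random

P = 2**61 - 1  # illustrative prime modulus for additive secret sharing

def share(x):
    """Split x into two additive shares mod P."""
    r = random.randrange(P)
    return r, (x - r) % P

def beaver_mul(x_shares, y_shares):
    """Multiply two secret-shared values using a Beaver triple.

    Offline phase: a dealer creates (a, b, c) with c = a*b and shares them.
    Online phase: the parties open the masked values d = x - a and e = y - b,
    then compute shares of x*y locally from the opened values.
    """
    # Offline (input-independent) preprocessing
    a, b = random.randrange(P), random.randrange(P)
    c = (a * b) % P
    a0, a1 = share(a)
    b0, b1 = share(b)
    c0, c1 = share(c)
    # Online: open d and e (the only values exchanged in the online phase)
    x0, x1 = x_shares
    y0, y1 = y_shares
    d = (x0 - a0 + x1 - a1) % P
    e = (y0 - b0 + y1 - b1) % P
    # Local share computation: z = c + d*b + e*a + d*e = x*y
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1
```

Because the triple is generated before the inputs are known, the expensive work moves to an offline phase, which is the general idea behind amortizing costly secure matrix multiplication.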
Pages: 45-60
Page count: 16
Related Papers
50 records in total
  • [41] Robust Federated Learning With Noisy Communication
    Ang, Fan
    Chen, Li
    Zhao, Nan
    Chen, Yunfei
    Wang, Weidong
    Yu, F. Richard
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2020, 68 (06) : 3452 - 3464
  • [42] Robust Aggregation Function in Federated Learning
    Taheri, Rahim
    Arabikhan, Farzad
    Gegov, Alexander
    Akbari, Negar
    ADVANCES IN INFORMATION SYSTEMS, ARTIFICIAL INTELLIGENCE AND KNOWLEDGE MANAGEMENT, ICIKS 2023, 2024, 486 : 168 - 175
  • [43] Robust Federated Learning With Noisy Labels
    Yang, Seunghan
    Park, Hyoungseob
    Byun, Junyoung
    Kim, Changick
    IEEE INTELLIGENT SYSTEMS, 2022, 37 (02) : 35 - 43
  • [44] Robust and Verifiable Privacy Federated Learning
    Lu, Z.
    Lu, S.
    Tang, X.
    Wu, J.
    IEEE Transactions on Artificial Intelligence, 2024, 5 (04): : 1895 - 1908
  • [45] Robust Federated Learning with Realistic Corruption
    Zhao, Puning
    Wu, Jiafei
    Liu, Zhe
    WEB AND BIG DATA, APWEB-WAIM 2024, PT IV, 2024, 14964 : 228 - 242
  • [46] Differentially Private Federated Learning on Heterogeneous Data
    Noble, Maxence
    Bellet, Aurelien
    Dieuleveut, Aymeric
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [47] The Skellam Mechanism for Differentially Private Federated Learning
    Agarwal, Naman
    Kairouz, Peter
    Liu, Ziyu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021,
  • [48] Differentially private federated learning with Laplacian smoothing
    Liang, Zhicong
    Wang, Bao
    Gu, Quanquan
    Osher, Stanley
    Yao, Yuan
    APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS, 2024, 72
  • [49] Compression Boosts Differentially Private Federated Learning
    Kerkouche, Raouf
    Acs, Gergely
    Castelluccia, Claude
    Geneves, Pierre
    2021 IEEE EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY (EUROS&P 2021), 2021, : 304 - 318
  • [50] Random Orthogonalization for Private Wireless Federated Learning
    Zuhra, Sadaf ul
    Seif, Mohamed
    Banawan, Karim
    Poor, H. Vincent
    FIFTY-SEVENTH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, IEEECONF, 2023, : 233 - 236