Efficient, Private and Robust Federated Learning

Cited: 25
|
Authors
Hao, Meng [1 ]
Li, Hongwei [1 ]
Xu, Guowen [2 ]
Chen, Hanxiao [1 ]
Zhang, Tianwei [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Chengdu, Sichuan, Peoples R China
[2] Nanyang Technol Univ, Singapore, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Privacy protection; Byzantine robustness;
DOI
10.1145/3485832.3488014
CLC Number
TP39 [Computer Applications];
Discipline Codes
081203 ; 0835 ;
Abstract
Federated learning (FL) has demonstrated tremendous success in various mission-critical, large-scale scenarios. However, this promising distributed learning paradigm remains vulnerable to privacy inference and Byzantine attacks. The former aims to infer private information about the participants involved in training, while the latter seeks to destroy the integrity of the trained model. To mitigate these two issues, a few recent works have explored unified solutions that combine generic secure computation techniques with common Byzantine-robust aggregation rules, but they suffer from two major limitations: 1) efficiency bottlenecks render them impractical, and 2) their defense models are not comprehensive enough, leaving them vulnerable to various types of attacks. To address these problems, this paper presents SecureFL, an efficient, private, and Byzantine-robust FL framework. SecureFL follows the state-of-the-art Byzantine-robust FL method FLTrust (NDSS '21), which performs comprehensive Byzantine defense by normalizing the updates' magnitudes and measuring their directional similarity, and adapts it to the privacy-preserving setting. More importantly, we carefully customize a series of cryptographic components. First, we design a crypto-friendly validity-checking protocol that functionally replaces the normalization operation in FLTrust, and further devise tailored cryptographic protocols on top of it. Benefiting from these optimizations, the communication and computation costs are halved without sacrificing robustness or privacy protection. Second, we develop a novel preprocessing technique for costly matrix multiplication, with which the directional similarity measurement can be evaluated securely with negligible computation overhead and zero communication cost.
Extensive evaluations on three real-world datasets and various neural network architectures demonstrate that SecureFL outperforms prior art by up to two orders of magnitude in efficiency while providing state-of-the-art Byzantine robustness.
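The FLTrust-style aggregation rule that the abstract describes (ReLU-clipped cosine similarity to a trusted server update as a trust score, with each client update rescaled to the server update's magnitude) can be sketched in plaintext. This is an illustrative Python version only, with hypothetical function names; SecureFL's actual contribution is evaluating this logic under customized cryptographic protocols, which this sketch does not attempt.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def fltrust_aggregate(server_update, client_updates):
    """Weighted average of client updates: each weight is the ReLU-clipped
    cosine similarity to the trusted server update, and each update is
    rescaled to the server update's magnitude before averaging."""
    g0n = norm(server_update)
    dim = len(server_update)
    scores, scaled = [], []
    for g in client_updates:
        gn = norm(g)
        if gn == 0.0:
            scores.append(0.0)
            scaled.append([0.0] * dim)
            continue
        # Trust score: cosine similarity, clipped at zero (ReLU).
        ts = max(0.0, dot(g, server_update) / (gn * g0n))
        scores.append(ts)
        # Magnitude normalization: rescale g to the server update's norm.
        scaled.append([x * g0n / gn for x in g])
    total = sum(scores)
    if total == 0.0:
        return [0.0] * dim
    return [sum(s * u[j] for s, u in zip(scores, scaled)) / total
            for j in range(dim)]
```

For example, with a trusted server update `[1, 0]`, a client pushing `[-3, 0]` (opposite direction) receives trust score 0 and is excluded, while an honest but oversized update `[2, 0]` is rescaled down before averaging.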
Pages: 45-60
Number of pages: 16
Related Papers
50 records total
  • [31] EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning
    Vargaftik, Shay
    Ben Basat, Ran
    Portnoy, Amit
    Mendelson, Gal
    Ben-Itzhak, Yaniv
    Mitzenmacher, Michael
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [32] Optimal Contract Design for Efficient Federated Learning With Multi-Dimensional Private Information
    Ding, Ningning
    Fang, Zhixuan
    Huang, Jianwei
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (01) : 186 - 200
  • [33] AddShare+: Efficient Selective Additive Secret Sharing Approach for Private Federated Learning
    Asare, Bernard Atiemo
    Branco, Paula
    Kiringa, Iluju
    Yeap, Tet
    2024 IEEE 11TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS, DSAA 2024, 2024, : 584 - 593
  • [34] FedVAE: Communication-Efficient Federated Learning With Non-IID Private Data
    Yang, Haomiao
    Ge, Mengyu
    Xiang, Kunlan
    Bai, Xuejun
    Li, Hongwei
    IEEE SYSTEMS JOURNAL, 2023, 17 (03): : 4798 - 4808
  • [35] FLOD: Oblivious Defender for Private Byzantine-Robust Federated Learning with Dishonest-Majority
    Dong, Ye
    Chen, Xiaojun
    Li, Kaiyun
    Wang, Dakui
    Zeng, Shuai
    COMPUTER SECURITY - ESORICS 2021, PT I, 2021, 12972 : 497 - 518
  • [36] Private Federated Submodel Learning via Private Set Union
    Wang, Zhusheng
    Ulukus, Sennur
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2024, 70 (04) : 2903 - 2921
  • [37] Robust Networked Federated Learning for Localization
    Mirzaeifard, Reza
    Venkategowda, Naveen K. D.
    Werner, Stefan
    2023 ASIA PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE, APSIPA ASC, 2023, : 1193 - 1198
  • [38] ARFL: Adaptive and Robust Federated Learning
    Uddin, Md Palash
    Xiang, Yong
    Cai, Borui
    Lu, Xuequan
    Yearwood, John
    Gao, Longxiang
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (05) : 5401 - 5417
  • [39] Robust federated learning with voting and scaling
    Liang, Xiang-Yu
    Zhang, Heng-Ru
    Tang, Wei
    Min, Fan
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 153 : 113 - 124
  • [40] Robust quantum federated learning with noise
    Chen, Liangjun
    Yan, Lili
    Zhang, Shibin
    PHYSICA SCRIPTA, 2024, 99 (07)