Fast Secure Aggregation for Privacy-Preserving Federated Learning

Cited by: 4
Authors
Liu, Yanjun [1]
Qian, Xinyuan [1]
Li, Hongwei [1]
Hao, Meng [1]
Guo, Song [2]
Affiliations
[1] University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu, Sichuan, China
[2] Hong Kong Polytechnic University, Department of Computing, Hong Kong, China
Funding
National Natural Science Foundation of China
Keywords
Federated learning; secure aggregation; polynomial multi-point evaluation; privacy protection;
DOI
10.1109/GLOBECOM48099.2022.10001327
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Federated learning (FL) is a distributed learning paradigm in which clients cooperate to train a global model without exposing their local private data. However, existing privacy inference attacks on FL show that adversaries can still recover the training data from the submitted model updates. Recently, secure aggregation has been proposed and integrated into the FL framework; it effectively guarantees privacy through various cryptographic techniques, unfortunately at the cost of substantial communication and computation. In this paper, we propose a highly efficient secure aggregation scheme, Fast-Aggregate, which significantly reduces the communication and computation overhead while ensuring data privacy and robustness against client dropout. First, Fast-Aggregate employs a multi-group regular graph to boost data parallelism in secure aggregation. Second, we leverage polynomial multi-point evaluation and fast Lagrange interpolation to handle client dropout and reduce computational complexity. Finally, we adopt additive masking to guarantee clients' privacy. Riding on these capabilities, Fast-Aggregate achieves a secure aggregation overhead of O(N log² N), as opposed to O(N²) in state-of-the-art works. Moreover, Fast-Aggregate improves training speed without loss of model quality while providing flexibility in dealing with client corruption.
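To make the masking step concrete, below is a minimal, illustrative Python sketch of pairwise additive masking, the general technique the abstract refers to; it is not the authors' Fast-Aggregate implementation. The prime modulus, vector length, seed agreement, and the prg helper are all assumptions for the toy example: each pair of clients derives a shared mask from an agreed seed, one adds it and the other subtracts it, so the masks cancel in the server-side sum and only the aggregate update is revealed.

```python
# Illustrative sketch of pairwise additive masking (not the paper's code).
# Each pair of clients (i, j) shares a seed; client i adds the derived
# mask, client j subtracts it, so all masks cancel in the aggregate.
import random

P = 2**61 - 1          # prime modulus for the masking field (assumption)
DIM = 4                # model-update length (toy value)

def prg(seed, dim):
    """Expand a shared seed into a mask vector (stand-in for a real PRG)."""
    rng = random.Random(seed)
    return [rng.randrange(P) for _ in range(dim)]

def mask_update(i, update, seeds):
    """Client i masks its update with pairwise-cancelling masks."""
    masked = [x % P for x in update]
    for j, seed in seeds[i].items():
        m = prg(seed, DIM)
        sign = 1 if i < j else -1   # i adds, j subtracts the same mask
        masked = [(x + sign * v) % P for x, v in zip(masked, m)]
    return masked

# --- toy run with 3 clients ---
updates = [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]]
n = len(updates)
# pairwise agreed seeds (in practice from a key agreement, e.g. DH)
seeds = {i: {} for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        s = random.getrandbits(64)
        seeds[i][j] = s
        seeds[j][i] = s

masked = [mask_update(i, updates[i], seeds) for i in range(n)]
agg = [sum(col) % P for col in zip(*masked)]  # server sees only this sum
assert agg == [sum(col) % P for col in zip(*updates)]
print(agg)  # [111, 222, 333, 444]
```

Note that this toy version breaks if a client drops out, since its unmatched masks no longer cancel; handling that case is where secret-shared seeds and Lagrange interpolation (the polynomial machinery the abstract mentions) typically come in.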
Pages: 3017-3022
Number of pages: 6