Efficient federated learning privacy protection scheme

Cited by: 0
Authors
Cheng S. [1 ]
Daochen C. [1 ]
Weiping P. [1 ]
Affiliations
[1] School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo
Keywords
federated learning; homomorphic encryption; natural compression; privacy-preserving techniques;
DOI
10.19665/j.issn1001-2400.20230202
Abstract
Federated learning allows clients to jointly train a model by sharing only gradients, rather than uploading their training data to the server. Although this avoids exposing data directly to third parties and offers a degree of protection, research shows that the gradients transmitted in federated learning can still leak private information. Encryption can close this gap, but the computation and communication overhead it adds to training degrades efficiency and makes such schemes hard to deploy in resource-constrained environments. To address the security and efficiency problems of current privacy protection schemes for federated learning, a secure and efficient privacy protection scheme is proposed that combines homomorphic encryption with compression techniques. The homomorphic encryption algorithm is optimized to preserve the security of the scheme while reducing the number of operations and improving their efficiency. In addition, a gradient filtering compression algorithm is designed: local updates unrelated to the convergence trend of the global model are filtered out, and the remaining update parameters are quantized by a compression operator with negligible computational cost, which preserves model accuracy while improving communication efficiency. Security analysis shows that the scheme satisfies indistinguishability, data privacy, and model security. Experimental results show that, compared with existing schemes, the proposed scheme achieves higher model accuracy as well as clear advantages in communication and computation cost. © 2023 Science Press. All rights reserved.
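The two client-side steps sketched in the abstract can be illustrated in code. The following is a minimal sketch, not the paper's actual algorithm: `natural_compress` implements the standard unbiased natural compression operator (stochastic rounding of each magnitude to a neighbouring power of two), while `passes_filter` is a hypothetical relevance filter based on cosine similarity with the recent global update direction; the paper's exact filtering criterion and its optimized homomorphic encryption step are not reproduced here.

```python
import numpy as np

def natural_compress(x, rng=None):
    """Unbiased natural compression: stochastically round each magnitude
    to one of its two neighbouring powers of two, so E[output] = x."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    sign, mag = np.sign(x), np.abs(x)
    out = np.zeros_like(mag)
    nz = mag > 0
    low = 2.0 ** np.floor(np.log2(mag[nz]))  # lower neighbouring power of two
    p_up = mag[nz] / low - 1.0               # P(round up to 2*low); keeps the mean
    out[nz] = np.where(rng.random(low.shape) < p_up, 2.0 * low, low)
    return sign * out

def passes_filter(local_update, global_trend, tau=0.0):
    """Hypothetical filter: keep a local update only if its cosine similarity
    with the recent global update direction exceeds the threshold tau."""
    u, g = np.ravel(local_update), np.ravel(global_trend)
    denom = np.linalg.norm(u) * np.linalg.norm(g)
    return bool(denom > 0 and (u @ g) / denom > tau)
```

In the scheme described, only updates that survive the filter would then be quantized and encrypted before upload; since each compressed value is a signed power of two, it can be transmitted as a sign bit plus a small exponent.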
Pages: 178-187 (9 pages)