A Communication-Efficient, Privacy-Preserving Federated Learning Algorithm Based on Two-Stage Gradient Pruning and Differentiated Differential Privacy

Cited by: 2
Authors
Li, Yong [1 ,2 ,3 ]
Du, Wei [1 ]
Han, Liquan [1 ]
Zhang, Zhenjian [1 ]
Liu, Tongtong [1 ]
Affiliations
[1] Changchun Univ Technol, Sch Comp Sci & Engn, Changchun 130012, Peoples R China
[2] Changchun Univ Technol, AI Res Inst, Changchun 130012, Peoples R China
[3] Jilin Univ, Sch Comp Sci & Technol, Changchun 130012, Peoples R China
Keywords
differentiated differential privacy; federated learning; gradient pruning; privacy preserving;
DOI
10.3390/s23239305
Chinese Library Classification
O65 [Analytical Chemistry];
Subject Classification Code
070302; 081704;
Abstract
Federated learning (FL) still faces several open problems, notably security concerns and high communication costs. Differential privacy (DP) offers effective privacy protection by adding noise to parameters under a rigorous privacy definition, but excessive noise can degrade model accuracy. High communication cost is a second challenge: training large-scale federated models can be slow and expensive in communication resources, and various model pruning algorithms have been proposed to mitigate this. To address both challenges, this paper introduces a communication-efficient, privacy-preserving FL algorithm based on two-stage gradient pruning and differentiated differential privacy, named IsmDP-FL. In the first stage, the trained model undergoes gradient pruning; differential privacy noise is then added to the important parameters selected by pruning, the non-important parameters are pruned at a fixed ratio, and differentiated differential privacy is applied to the remaining parameters in each network layer. In the second stage, gradient pruning is performed again when uploading to the server for aggregation, and the aggregated result is returned to the clients to complete the federated learning round. Extensive experiments demonstrate that the proposed method achieves high communication efficiency, preserves model privacy, and avoids unnecessary consumption of the privacy budget.
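The first-stage idea from the abstract (select important gradients by magnitude, add light DP noise to them, prune most of the rest, and apply heavier noise to the surviving non-important gradients) can be sketched per layer as below. This is an illustrative reconstruction, not the authors' IsmDP-FL implementation; the function name, noise scales, and pruning ratios are all assumptions.

```python
import numpy as np

def prune_and_noise(grad, keep_ratio=0.5, sigma_important=0.1,
                    sigma_other=0.5, prune_ratio=0.8, rng=None):
    """Magnitude-based pruning + differentiated Gaussian noise
    for one layer's gradient tensor (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    flat = grad.ravel().astype(float).copy()

    # Select the k largest-magnitude gradients as "important" parameters.
    k = max(1, int(keep_ratio * flat.size))
    important = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.zeros(flat.size, dtype=bool)
    mask[important] = True

    # Lighter noise on important parameters (smaller sigma).
    flat[mask] += rng.normal(0.0, sigma_important, int(mask.sum()))

    # Prune a fraction of the non-important parameters outright...
    other = np.flatnonzero(~mask)
    n_prune = int(prune_ratio * other.size)
    pruned = rng.choice(other, size=n_prune, replace=False)
    flat[pruned] = 0.0

    # ...and apply heavier noise to the non-important survivors.
    kept_other = np.setdiff1d(other, pruned)
    flat[kept_other] += rng.normal(0.0, sigma_other, kept_other.size)

    return flat.reshape(grad.shape)
```

Differentiating the noise scale this way spends less of the privacy budget on parameters that matter most for accuracy, while pruning shrinks the update that must be communicated.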
Pages: 21
Related Papers
50 records in total
  • [41] Privacy-preserving federated learning based on noise addition
    Wu, Xianlin
    Chen, Yuwen
    Yu, Haiyang
    Yang, Zhen
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 267
  • [42] Towards Efficient and Privacy-Preserving Federated Learning for HMM Training
    Zheng, Yandong
    Zhu, Hui
    Lu, Rongxing
    Zhang, Songnian
    Guan, Yunguo
    Wang, Fengwei
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 38 - 43
  • [43] Learning from electronic health records across multiple sites: A communication-efficient and privacy-preserving distributed algorithm
    Duan, Rui
    Boland, Mary Regina
    Liu, Zixuan
    Liu, Yue
    Chang, Howard H.
    Xu, Hua
    Chu, Haitao
    Schmid, Christopher H.
    Forrest, Christopher B.
    Holmes, John H.
    Schuemie, Martijn J.
    Berlin, Jesse A.
    Moore, Jason H.
    Chen, Yong
    JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION, 2020, 27 (03) : 376 - 385
  • [44] Efficient and Privacy-Preserving Byzantine-robust Federated Learning
    Luan, Shijie
    Lu, Xiang
    Zhang, Zhuangzhuang
    Chang, Guangsheng
    Guo, Yunchuan
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 2202 - 2208
  • [45] ELXGB: An Efficient and Privacy-Preserving XGBoost for Vertical Federated Learning
    Xu, Wei
    Zhu, Hui
    Zheng, Yandong
    Wang, Fengwei
    Zhao, Jiaqi
    Liu, Zhe
    Li, Hui
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (03) : 878 - 892
  • [46] Privacy-preserving estimation of electric vehicle charging behavior: A federated learning approach based on differential privacy
    Kong, Xiuping
    Lu, Lin
    Xiong, Ke
    INTERNET OF THINGS, 2024, 28
  • [47] Efficient Verifiable Protocol for Privacy-Preserving Aggregation in Federated Learning
    Eltaras, Tamer
    Sabry, Farida
    Labda, Wadha
    Alzoubi, Khawla
    Malluhi, Qutaibah
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 2977 - 2990
  • [48] ESVFL: Efficient and secure verifiable federated learning with privacy-preserving
    Cai, Jiewang
    Shen, Wenting
    Qin, Jing
    INFORMATION FUSION, 2024, 109
  • [49] Privacy-Preserving Efficient Federated-Learning Model Debugging
    Li, Anran
    Zhang, Lan
    Wang, Junhao
    Han, Feng
    Li, Xiang-Yang
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (10) : 2291 - 2303
  • [50] Efficient and Privacy-Preserving Federated Learning Against Poisoning Adversaries
    Zhao, Jiaqi
    Zhu, Hui
    Wang, Fengwei
    Zheng, Yandong
    Lu, Rongxing
    Li, Hui
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (05) : 2320 - 2333