A Communication-Efficient, Privacy-Preserving Federated Learning Algorithm Based on Two-Stage Gradient Pruning and Differentiated Differential Privacy

Cited by: 2
Authors
Li, Yong [1 ,2 ,3 ]
Du, Wei [1 ]
Han, Liquan [1 ]
Zhang, Zhenjian [1 ]
Liu, Tongtong [1 ]
Affiliations
[1] Changchun Univ Technol, Sch Comp Sci & Engn, Changchun 130012, Peoples R China
[2] Changchun Univ Technol, AI Res Inst, Changchun 130012, Peoples R China
[3] Jilin Univ, Sch Comp Sci & Technol, Changchun 130012, Peoples R China
Keywords
differentiated differential privacy; federated learning; gradient pruning; privacy preserving;
DOI: 10.3390/s23239305
Chinese Library Classification: O65 [Analytical Chemistry];
Subject Classification Codes: 070302; 081704
Abstract
Federated learning still faces several open problems, notably security concerns and high communication costs. Differential privacy (DP) offers effective privacy protection by adding noise to parameters under a rigorous privacy definition, but excessive noise can compromise model accuracy. Communication cost is a further challenge: training large-scale federated models can be slow and expensive in terms of communication resources, and various model pruning algorithms have been proposed in response. To address these challenges, this paper introduces a communication-efficient, privacy-preserving FL algorithm based on two-stage gradient pruning and differentiated differential privacy, named IsmDP-FL. In the first stage, the trained model undergoes gradient pruning: non-important parameters are pruned at a given ratio, and differentiated differential privacy is applied to the remaining parameters in each network layer, so that the important parameters selected after pruning receive calibrated noise. In the second stage, gradient pruning is performed again during the upload to the server for aggregation, and the aggregated result is returned to the clients to complete the federated learning round. Extensive experiments demonstrate that the proposed method achieves high communication efficiency, preserves model privacy, and avoids unnecessary consumption of the privacy budget.
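The first-stage client-side step described in the abstract (magnitude-based gradient pruning followed by differentiated per-importance noise) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the magnitude-based importance criterion, the Gaussian noise mechanism, and all ratios and noise scales are assumptions chosen for illustration.

```python
import numpy as np

def prune_and_privatize(grads, top_ratio=0.2, keep_ratio=0.5,
                        sigma_top=0.1, sigma_rest=0.5, clip=1.0, rng=None):
    """Illustrative sketch of stage one: per-layer magnitude pruning with
    differentiated Gaussian noise. All hyperparameters are hypothetical."""
    rng = rng or np.random.default_rng(0)
    out = {}
    for layer, g in grads.items():
        g = np.clip(g, -clip, clip)          # bound sensitivity before noising
        flat = g.ravel()
        order = np.argsort(np.abs(flat))[::-1]  # indices, largest magnitude first
        n_top = max(1, int(top_ratio * flat.size))
        n_keep = max(n_top, int(keep_ratio * flat.size))
        noisy = np.zeros(flat.size)          # pruned entries stay zero
        top_idx, rest_idx = order[:n_top], order[n_top:n_keep]
        # differentiated noise: the most important parameters get smaller
        # noise (more privacy budget), the remaining kept ones get larger noise
        noisy[top_idx] = flat[top_idx] + rng.normal(0, sigma_top, n_top)
        noisy[rest_idx] = flat[rest_idx] + rng.normal(0, sigma_rest, n_keep - n_top)
        out[layer] = noisy.reshape(g.shape)
    return out
```

Because the non-important entries are zeroed out, each client only needs to transmit the kept coordinates, which is where the communication saving comes from.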
Pages: 21
Related Papers (50 total)
  • [21] Communication-Efficient Privacy-Preserving Federated Learning via Knowledge Distillation for Human Activity Recognition Systems
    Gad, Gad
    Fadlullah, Zubair Md
    Rabie, Khaled
    Fouda, Mostafa M.
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 1572 - 1578
  • [22] Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM
    Xie, Chulin
    Chen, Pin-Yu
    Li, Qinbin
    Nourian, Arash
    Zhang, Ce
    Li, Bo
    IEEE CONFERENCE ON SAFE AND TRUSTWORTHY MACHINE LEARNING, SATML 2024, 2024, : 443 - 471
  • [23] A Framework for Privacy-Preserving in IoV Using Federated Learning With Differential Privacy
    Adnan, Muhammad
    Syed, Madiha Haider
    Anjum, Adeel
    Rehman, Semeen
    IEEE ACCESS, 2025, 13 : 13507 - 13521
  • [24] Round efficient privacy-preserving federated learning based on MKFHE
    Liu, Wenchao
    Zhou, Tanping
    Chen, Long
    Yang, Hongjian
    Han, Jiang
    Yang, Xiaoyuan
    COMPUTER STANDARDS & INTERFACES, 2024, 87
  • [25] PPeFL: Privacy-Preserving Edge Federated Learning With Local Differential Privacy
    Wang, Baocang
    Chen, Yange
    Jiang, Hang
    Zhao, Zhen
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (17) : 15488 - 15500
  • [26] A Novel Approach for Differential Privacy-Preserving Federated Learning
    Elgabli, Anis
    Mesbah, Wessam
    IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY, 2025, 6 : 466 - 476
  • [27] Towards Efficient and Privacy-preserving Federated Deep Learning
    Hao, Meng
    Li, Hongwei
    Xu, Guowen
    Liu, Sen
    Yang, Haomiao
    ICC 2019 - 2019 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2019,
  • [28] CLFLDP: Communication-efficient layer clipping federated learning with local differential privacy
    Chen, Shuhong
    Yang, Jiawei
    Wang, Guojun
    Wang, Zijia
    Yin, Haojie
    Feng, Yinglin
    JOURNAL OF SYSTEMS ARCHITECTURE, 2024, 148
  • [29] Efficient Privacy-Preserving Federated Learning With Unreliable Users
    Li, Yiran
    Li, Hongwei
    Xu, Guowen
    Huang, Xiaoming
    Lu, Rongxing
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (13) : 11590 - 11603
  • [30] Efficient and privacy-preserving group signature for federated learning
    Kanchan, Sneha
    Jang, Jae Won
    Yoon, Jun Yong
    Choi, Bong Jun
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 147 : 93 - 106