P2CEFL: Privacy-Preserving and Communication Efficient Federated Learning With Sparse Gradient and Dithering Quantization

Cited by: 1
Authors
Wang, Gang [1 ]
Qi, Qi [2 ]
Han, Rui [1 ]
Bai, Lin [1 ,3 ]
Choi, Jinho [4 ]
Affiliations
[1] Beihang Univ, Sch Cyber Sci & Technol, Beijing 100191, Peoples R China
[2] Beihang Univ, Sch Elect & Informat Engn, Beijing 100191, Peoples R China
[3] Zhongguancun Lab, Beijing 100191, Peoples R China
[4] Deakin Univ, Sch Informat Technol, Geelong, Vic 3220, Australia
Funding
National Natural Science Foundation of China
Keywords
Privacy; Quantization (signal); Noise; Protection; Training; Federated learning; Convergence; Communication efficiency; Differential privacy; Dithering quantization
DOI
10.1109/TMC.2024.3445957
CLC number
TP [Automation and Computer Technology]
Discipline code
0812
Abstract
Federated learning (FL) offers a promising framework for obtaining a global model by aggregating trained parameters from participating clients without transmitting their local private data. To further enhance privacy, differential privacy (DP)-based FL can be considered, wherein a certain amount of noise is added to the transmitted parameters, inevitably degrading communication efficiency. In this paper, we propose a novel Privacy-Preserving and Communication Efficient Federated Learning (P2CEFL) algorithm that reduces communication overhead under a DP guarantee by combining sparse gradients with dithering quantization. Gradient sparsification considerably decreases the upload overhead for clients. Additionally, a subtractive dithering approach is employed to quantize the sparse gradients, further reducing the number of bits transmitted. We conduct theoretical analysis of privacy protection and convergence to verify the effectiveness of the proposed algorithm. Extensive numerical simulations show that the P2CEFL algorithm achieves a similar level of model accuracy while significantly reducing communication costs compared to existing conventional DP-based FL methods.
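The two compression steps named in the abstract can be illustrated with a minimal sketch. This is not the paper's exact scheme (the function names, top-k sparsification rule, and quantization step size below are illustrative assumptions); it only shows the generic pattern of sparsifying a gradient and then applying subtractive dithered quantization, where client and server share a pseudo-random dither so the receiver can subtract it after dequantization:

```python
import numpy as np

def sparsify_top_k(grad, k):
    # Keep the k largest-magnitude entries of the gradient, zero the rest.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse, idx

def dithered_quantize(values, step, rng):
    # Subtractive dithering: add uniform dither u ~ U(-step/2, step/2),
    # round to the quantization lattice, and transmit the integer indices.
    u = rng.uniform(-step / 2, step / 2, size=values.shape)
    q = np.round((values + u) / step)
    return q, u

def dithered_dequantize(q, u, step):
    # Receiver rescales and subtracts the same dither (regenerated from a
    # shared pseudo-random seed in practice, so u itself is never sent).
    return q * step - u
```

With this construction the reconstruction error `q*step - u - v` lies in `[-step/2, step/2]` and is statistically independent of the signal, which is the usual motivation for subtractive (rather than non-subtractive) dithering; only the integer indices and nonzero positions need to be communicated.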
Pages: 14722-14736
Page count: 15