Differentially private stochastic gradient descent via compression and memorization

Cited by: 9
Authors
Phong, Le Trieu [1 ]
Phuong, Tran Thi [1 ,2 ]
Affiliations
[1] Natl Inst Informat & Commun Technol NICT, Tokyo 1848795, Japan
[2] Meiji Univ, Kawasaki, Kanagawa 2148571, Japan
Keywords
Differential privacy; Neural network; Stochastic gradient descent; Gradient compression and memorization; Logistic regression
DOI
10.1016/j.sysarc.2022.102819
Chinese Library Classification
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
We propose a novel approach for achieving differential privacy when training neural network models, based on compression and memorization of gradients. The compression technique, which makes gradient vectors sparse, reduces the sensitivity so that differential privacy can be achieved with less noise, whereas the memorization technique, which remembers the unused gradient parts, keeps track of the descent direction and thereby maintains the accuracy of the proposed algorithm. Our differentially private algorithm, called dp-memSGD for short, provably converges at the same O(1/√T) rate as the standard stochastic gradient descent (SGD) algorithm, where T is the number of training iterations. Experimentally, we demonstrate that dp-memSGD converges with reasonable privacy losses on many benchmark datasets.
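Based only on the description above, the following Python sketch illustrates how a single dp-memSGD-style update could combine the three ingredients the abstract names: memorization of previously dropped gradient coordinates, top-k compression, and Gaussian-mechanism noise. It is a minimal illustration under those assumptions, not the authors' implementation; the function name dp_mem_sgd_step, the parameter choices, and the exact order of clipping, compression, and noise addition are all hypothetical.

import numpy as np

def dp_mem_sgd_step(params, grad, memory, k, clip_norm, noise_std, lr, rng):
    # Memorization: add back the gradient mass left unused in earlier steps.
    corrected = grad + memory

    # Compression: keep only the k largest-magnitude coordinates, which makes
    # the released update sparse and lowers its sensitivity.
    idx = np.argpartition(np.abs(corrected), -k)[-k:]
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]

    # Remember the coordinates that were dropped, for future iterations.
    new_memory = corrected - sparse

    # Clip the sparse update to bound its L2 norm, then add Gaussian noise
    # calibrated to that bound (the standard Gaussian-mechanism recipe; whether
    # dp-memSGD clips before or after compression is an assumption here).
    sparse = sparse / max(1.0, np.linalg.norm(sparse) / clip_norm)
    noisy = sparse + rng.normal(0.0, noise_std * clip_norm, size=sparse.shape)

    # Plain SGD descent step with the privatized, compressed gradient.
    return params - lr * noisy, new_memory

In use, memory would start as a zero vector of the same shape as the gradient and be threaded from one call to the next (with, e.g., rng = np.random.default_rng()), so that coordinates suppressed by compression still influence later descent directions, which is what the abstract credits for preserving accuracy.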
Pages: 9
Related papers
50 records in total
  • [1] Stochastic gradient descent with differentially private updates
    Song, Shuang
    Chaudhuri, Kamalika
    Sarwate, Anand D.
    2013 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP), 2013, : 245 - 248
  • [2] Differentially Private Variance Reduced Stochastic Gradient Descent
    Lee, Jaewoo
    2017 INTERNATIONAL CONFERENCE ON NEW TRENDS IN COMPUTING SCIENCES (ICTCS), 2017, : 161 - 166
  • [3] Distributed Differentially Private Stochastic Gradient Descent: An Empirical Study
    Hegedus, Istvan
    Jelasity, Mark
    2016 24TH EUROMICRO INTERNATIONAL CONFERENCE ON PARALLEL, DISTRIBUTED, AND NETWORK-BASED PROCESSING (PDP), 2016, : 566 - 573
  • [4] Differentially private stochastic gradient descent with low-noise
    Wang, Puyu
    Lei, Yunwen
    Ying, Yiming
    Zhou, Ding-Xuan
    NEUROCOMPUTING, 2024, 587
  • [5] Differentially Private Gossip Gradient Descent
    Liu, Yang
    Liu, Ji
    Basar, Tamer
    2018 IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2018, : 2777 - 2782
  • [6] Differentially Private Stochastic Coordinate Descent
    Damaskinos, Georgios
    Mendler-Duenner, Celestine
    Guerraoui, Rachid
    Papandreou, Nikolaos
    Parnell, Thomas
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 7176 - 7184
  • [7] Removing Disparate Impact on Model Accuracy in Differentially Private Stochastic Gradient Descent
    Xu, Depeng
    Du, Wei
    Wu, Xintao
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 1924 - 1932
  • [8] DPSUR: Accelerating Differentially Private Stochastic Gradient Descent Using Selective Update and Release
    Fu, Jie
    Ye, Qingqing
    Hu, Haibo
    Chen, Zhili
    Wang, Lulu
    Wang, Kuncan
    Ran, Xun
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2024, 17 (06): 1200 - 1213
  • [9] Differentially Private Deep Learning with Iterative Gradient Descent Optimization
    Ding, Xiaofeng
    Chen, Lin
    Zhou, Pan
    Jiang, Wenbin
    Jin, Hai
    ACM/IMS Transactions on Data Science, 2021, 2 (04)
  • [10] Private weighted random walk stochastic gradient descent
    Ayache, G.
    El Rouayheb, S.
    IEEE Journal on Selected Areas in Information Theory, 2021, 2 (01): 452 - 463