Differentially private stochastic gradient descent via compression and memorization

Cited by: 9
Authors
Phong, Le Trieu [1 ]
Phuong, Tran Thi [1 ,2 ]
Affiliations
[1] Natl Inst Informat & Commun Technol NICT, Tokyo 1848795, Japan
[2] Meiji Univ, Kawasaki, Kanagawa 2148571, Japan
Keywords
Differential privacy; Neural network; Stochastic gradient descent; Gradient compression and memorization; Logistic regression
DOI
10.1016/j.sysarc.2022.102819
Chinese Library Classification
TP3 [computing technology, computer technology]
Discipline code
0812
Abstract
We propose a novel approach to achieving differential privacy in neural network training through compression and memorization of gradients. The compression technique, which makes gradient vectors sparse, reduces the sensitivity so that differential privacy can be achieved with less noise; the memorization technique, which remembers the unused parts of each gradient, keeps track of the descent direction and thereby maintains the accuracy of the proposed algorithm. Our differentially private algorithm, dp-memSGD for short, converges mathematically at the same 1/√T rate as the standard stochastic gradient descent (SGD) algorithm, where T is the number of training iterations. Experimentally, we demonstrate that dp-memSGD converges with reasonable privacy losses on many benchmark datasets.
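To make the mechanism concrete, below is a minimal NumPy sketch of one update step combining the three ingredients the abstract describes: top-k compression of the gradient, memorization of the discarded coordinates, and clipping plus Gaussian noise for privacy. The function name dp_mem_sgd_step, the hyperparameter values, and the exact order of clipping and noising are illustrative assumptions, not the paper's specification; in particular, the paper's privacy accounting is omitted here.

```python
import numpy as np

def dp_mem_sgd_step(w, grad, memory, lr=0.1, k=10, clip=1.0, sigma=1.0, rng=None):
    """One sketched dp-memSGD-style step (hypothetical interface, not the
    paper's exact algorithm)."""
    rng = rng or np.random.default_rng()

    acc = memory + lr * grad                # memorization: fold in leftover gradient parts
    idx = np.argsort(np.abs(acc))[-k:]      # compression: keep the k largest coordinates
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]
    new_memory = acc - sparse               # remember the unused coordinates for later steps

    # Clip the sparse update to bound its L2 sensitivity, then add Gaussian
    # noise on the released coordinates only (the Gaussian mechanism); fewer
    # released coordinates means less total noise at the same privacy level.
    sparse *= min(1.0, clip / (np.linalg.norm(sparse) + 1e-12))
    sparse[idx] += rng.normal(0.0, sigma * clip, size=k)

    return w - sparse, new_memory

# Toy usage: privately minimize ||w||^2 / 2, whose gradient at w is w itself.
w, mem = np.ones(100), np.zeros(100)
for _ in range(500):
    w, mem = dp_mem_sgd_step(w, w.copy(), mem, lr=0.05, k=10, clip=0.5, sigma=0.1)
print(np.linalg.norm(w))  # shrinks toward a noise floor set by sigma
```

Because the residue acc - sparse is carried forward rather than dropped, no gradient information is permanently lost; this is the intuition behind the method retaining the 1/√T rate of plain SGD despite the sparsification.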
Pages: 9
Related papers (50 in total)
  • [31] Differentially Private Network Data Release via Stochastic Kronecker Graph
    Li, Dai
    Zhang, Wei
    Chen, Yunfang
    WEB INFORMATION SYSTEMS ENGINEERING - WISE 2016, PT II, 2016, 10042 : 290 - 297
  • [32] Learning Differentially Private Diffusion Models via Stochastic Adversarial Distillation
    Liu, Bochao
    Wang, Pengju
    Ge, Shiming
    COMPUTER VISION-ECCV 2024, PT VII, 2025, 15065 : 55 - 71
  • [33] Convergence of Stochastic Gradient Descent for PCA
    Shamir, Ohad
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [34] Stochastic Gradient Descent in Continuous Time
    Sirignano, Justin
    Spiliopoulos, Konstantinos
    SIAM JOURNAL ON FINANCIAL MATHEMATICS, 2017, 8 (01): : 933 - 961
  • [35] Bayesian Distributed Stochastic Gradient Descent
    Teng, Michael
    Wood, Frank
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [36] On the Hyperparameters in Stochastic Gradient Descent with Momentum
    Shi, Bin
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25
  • [37] On the Generalization of Stochastic Gradient Descent with Momentum
    Ramezani-Kebrya, Ali
    Antonakopoulos, Kimon
    Cevher, Volkan
    Khisti, Ashish
    Liang, Ben
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25 : 1 - 56
  • [38] Randomized Stochastic Gradient Descent Ascent
    Sebbouh, Othmane
    Cuturi, Marco
    Peyre, Gabriel
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [39] On the different regimes of stochastic gradient descent
    Sclocchi, Antonio
    Wyart, Matthieu
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2024, 121 (09)
  • [40] BACKPROPAGATION AND STOCHASTIC GRADIENT DESCENT METHOD
    AMARI, S
    NEUROCOMPUTING, 1993, 5 (4-5) : 185 - 196