Differentially private stochastic gradient descent via compression and memorization

Cited by: 9
Authors
Phong, Le Trieu [1 ]
Phuong, Tran Thi [1 ,2 ]
Affiliations
[1] Natl Inst Informat & Commun Technol NICT, Tokyo 1848795, Japan
[2] Meiji Univ, Kawasaki, Kanagawa 2148571, Japan
Keywords
Differential privacy; Neural network; Stochastic gradient descent; Gradient compression and memorization; Logistic regression
DOI
10.1016/j.sysarc.2022.102819
CLC classification
TP3 [Computing technology, computer technology]
Subject classification
0812
Abstract
We propose a novel approach for achieving differential privacy in neural network training through compression and memorization of gradients. The compression technique, which makes gradient vectors sparse, reduces the sensitivity so that differential privacy can be achieved with less noise, whereas the memorization technique, which remembers the unused gradient parts, keeps track of the descent direction and thereby maintains the accuracy of the proposed algorithm. Our differentially private algorithm, called dp-memSGD for short, converges mathematically at the same rate of 1/√T as the standard stochastic gradient descent (SGD) algorithm, where T is the number of training iterations. Experimentally, we demonstrate that dp-memSGD converges with reasonable privacy losses on many benchmark datasets.
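As an illustration of the idea sketched in the abstract, the following NumPy sketch shows one plausible reading of a single dp-memSGD-style update: the residual memory is added back to the fresh gradient (error feedback), the result is compressed by top-k sparsification, the dropped coordinates are remembered for the next step, and the surviving sparse part is clipped to bound sensitivity before Gaussian noise is added. The function name, the parameters (k, clip, sigma, lr), and the exact ordering of the steps are illustrative assumptions, not the authors' published algorithm.

    import numpy as np

    def dp_mem_sgd_step(w, grad, memory, lr=0.1, k=10, clip=1.0, sigma=1.0, rng=None):
        # Illustrative single step: memorization + top-k compression + clipping + Gaussian noise.
        # NOTE: a sketch under stated assumptions, not the paper's exact dp-memSGD procedure.
        rng = np.random.default_rng() if rng is None else rng

        # Memorization (error feedback): add back gradient mass dropped in earlier steps.
        acc = memory + grad

        # Compression: keep only the k largest-magnitude coordinates.
        idx = np.argpartition(np.abs(acc), -k)[-k:]
        sparse = np.zeros_like(acc)
        sparse[idx] = acc[idx]

        # Remember the coordinates that were not used in this step.
        new_memory = acc - sparse

        # Sparsification lowers the L2 sensitivity; clip to the bound, then add Gaussian noise.
        norm = np.linalg.norm(sparse)
        if norm > clip:
            sparse = sparse * (clip / norm)
        noisy = sparse + rng.normal(0.0, sigma * clip, size=sparse.shape)

        # Plain SGD update with the noisy, compressed gradient.
        return w - lr * noisy, new_memory

    # Toy usage: the memory vector starts at zero and is threaded through iterations.
    w = np.zeros(100)
    memory = np.zeros(100)
    for _ in range(5):
        grad = np.random.randn(100)  # stand-in for a minibatch gradient
        w, memory = dp_mem_sgd_step(w, grad, memory)

The key design point the sketch tries to convey is that only the sparse, clipped part of the gradient is released with noise, while everything that was dropped stays in local memory and re-enters later updates, which is what preserves the descent direction.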
Pages: 9
Related papers
50 records in total
  • [21] CD-SGD: Distributed Stochastic Gradient Descent with Compression and Delay Compensation
    Yu, Enda
    Dong, Dezun
    Xu, Yemao
    Ouyang, Shuo
    Liao, Xiangke
    50TH INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, 2021,
  • [22] Stochastic Reweighted Gradient Descent
    El Hanchi, Ayoub
    Stephens, David A.
    Maddison, Chris J.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [23] Stochastic gradient descent tricks
    Bottou, Léon
    Lecture Notes in Computer Science, 2012, 7700: 421 - 436
  • [24] Byzantine Stochastic Gradient Descent
    Alistarh, Dan
    Allen-Zhu, Zeyuan
    Li, Jerry
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [25] Differentially Private Model Compression
    Mireshghallah, Fatemehsadat
    Backurs, Arturs
    Inan, Huseyin A.
    Wutschitz, Lukas
    Kulkarni, Janardhan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [26] DEEP RELAXATION OF CONTROLLED STOCHASTIC GRADIENT DESCENT VIA SINGULAR PERTURBATIONS
    Bardi, Martino
    Kouhkouh, Hicham
    arXiv, 2022,
  • [27] Parametric estimation of stochastic differential equations via online gradient descent
    Nakakita, Shogo
    JAPANESE JOURNAL OF STATISTICS AND DATA SCIENCE, 2024,
  • [28] DEEP RELAXATION OF CONTROLLED STOCHASTIC GRADIENT DESCENT VIA SINGULAR PERTURBATIONS
    Bardi, Martino
    Kouhkouh, Hicham
    SIAM JOURNAL ON CONTROL AND OPTIMIZATION, 2024, 62 (04) : 2229 - 2253
  • [29] Image Alignment by Online Robust PCA via Stochastic Gradient Descent
    Song, Wenjie
    Zhu, Jianke
    Li, Yang
    Chen, Chun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2016, 26 (07) : 1241 - 1250
  • [30] TDOA-based Localization via Stochastic Gradient Descent Variants
    Abanto-Leon, Luis F.
    Koppelaar, Arie
    de Groot, Sonia Heemstra
    2018 IEEE 88TH VEHICULAR TECHNOLOGY CONFERENCE (VTC-FALL), 2018,