Differential Privacy Algorithm under Deep Neural Networks

Cited by: 1

Authors
Zhou Zhiping [1 ,2 ]
Qian Xinyu [1 ]
Affiliations
[1] Jiangnan Univ, Sch Internet Things Engn, Wuxi 214122, Jiangsu, Peoples R China
[2] Jiangnan Univ, Engn Res Ctr Internet Things Technol Applicat, Minist Educ, Wuxi 214122, Jiangsu, Peoples R China
Keywords
Differential privacy; Funk Singular Value Decomposition (Funk-SVD); Smooth sensitivity; Correlation; Gradient clipping
DOI
10.11999/JEIT210276
CLC Number
TM [Electrical Technology]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Gradient redundancy exists in the gradient-descent process of deep neural networks, so applying a differential privacy mechanism to resist membership inference attacks introduces excessive noise. To address this, the gradient matrix is decomposed with the Funk-SVD algorithm, and noise is added separately to the low-dimensional eigen-subspace matrix and the residual matrix; the redundant gradient noise is then eliminated during gradient reconstruction. The norms of the decomposition matrices are recalculated and combined with smooth sensitivity to reduce the noise scale. Meanwhile, according to the correlation between input features and output features, more privacy budget is allocated to features with larger correlation coefficients to improve training accuracy. The moments accountant is used to track the cumulative privacy loss under the combined optimization strategies. Experimental results show that the proposed FSDP (deep neural networks under differential privacy based on Funk-SVD) narrows the gap with the non-private model more effectively on MNIST and CIFAR-10.
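The decompose-perturb-reconstruct pipeline described in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumptions of my own (a dense gradient matrix, Frobenius-norm clipping, rank `k`, and Gaussian noise scale `sigma` are all placeholders), not the authors' implementation; it also omits the smooth-sensitivity and correlation-based budget-allocation steps:

```python
import numpy as np

rng = np.random.default_rng(0)

def funk_svd(G, k=2, lr=0.01, epochs=300):
    """Rank-k factorization G ~ P @ Q fitted by gradient descent
    (Funk-SVD style, applied here to a dense gradient matrix)."""
    m, n = G.shape
    P = rng.normal(scale=0.1, size=(m, k))
    Q = rng.normal(scale=0.1, size=(k, n))
    for _ in range(epochs):
        E = G - P @ Q          # current approximation error
        P += lr * E @ Q.T      # gradient step on the left factor
        Q += lr * P.T @ E      # gradient step on the right factor
    return P, Q

def noisy_gradient(G, k=2, sigma=0.1, clip=1.0):
    """Clip the gradient matrix, factorize it, perturb the low-rank
    subspace factor and the residual separately, then reconstruct."""
    G = G * min(1.0, clip / (np.linalg.norm(G) + 1e-12))  # gradient clipping
    P, Q = funk_svd(G, k=k)
    R = G - P @ Q                                         # residual matrix
    P = P + rng.normal(scale=sigma, size=P.shape)         # noise on subspace part
    R = R + rng.normal(scale=sigma, size=R.shape)         # noise on residual part
    return P @ Q + R                                      # reconstructed gradient
```

With `sigma=0` the reconstruction recovers the clipped gradient exactly, so only the noise injected into the factor and residual perturbs training; the point of splitting the matrix is that noise added in the low-dimensional subspace is spread over far fewer entries than noise added to the full gradient.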
Pages: 1773-1781
Page count: 9
Related Papers
16 records in total
  • [1] Deep Learning with Differential Privacy
    Abadi, Martin
    Chu, Andy
    Goodfellow, Ian
    McMahan, H. Brendan
    Mironov, Ilya
    Talwar, Kunal
    Zhang, Li
    [J]. CCS'16: PROCEEDINGS OF THE 2016 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2016, : 308 - 318
  • [2] Adesuyi, Tosin A., 2019 IEEE 2nd International Conference on Knowledge Innovation and Invention (ICKII), 2019: 570. DOI: 10.1109/ICKII46306.2019.9042653
  • [3] Research on Differentially Private Trajectory Data Publishing
    Feng Dengguo
    Zhang Min
    Ye Yutong
    [J]. JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2020, 42 (01) : 74 - 88
  • [4] Differential privacy preservation in regression analysis based on relevance
    Gong, Maoguo
    Pan, Ke
    Xie, Yu
    [J]. KNOWLEDGE-BASED SYSTEMS, 2019, 173 : 140 - 149
  • [5] Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
    Hitaj, Briland
    Ateniese, Giuseppe
    Perez-Cruz, Fernando
    [J]. CCS'17: PROCEEDINGS OF THE 2017 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2017, : 603 - 618
  • [6] PRADA: Protecting Against DNN Model Stealing Attacks
    Juuti, Mika
    Szyller, Sebastian
    Marchal, Samuel
    Asokan, N.
    [J]. 2019 4TH IEEE EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY (EUROS&P), 2019, : 512 - 527
  • [7] Liu, Ruixuan
    [J]. JOURNAL OF SOFTWARE, 2020, 31 : 866
  • [8] Comprehensive Privacy Analysis of Deep Learning Passive and Active White-box Inference Attacks against Centralized and Federated Learning
    Nasr, Milad
    Shokri, Reza
    Houmansadr, Amir
    [J]. 2019 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP 2019), 2019, : 739 - 753
  • [9] Phan, N., PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019: 4753
  • [10] Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning
    Phan, NhatHai
    Wu, Xintao
    Hu, Han
    Dou, Dejing
    [J]. 2017 17TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2017, : 385 - 394