Privacy-Preserving Stochastic Gradual Learning

Cited by: 4
Authors
Han, Bo [1]
Tsang, Ivor W. [2]
Xiao, Xiaokui [3]
Chen, Ling [2]
Fung, Sai-Fu [4]
Yu, Celina P. [5]
Affiliations
[1] Hong Kong Baptist Univ, Dept Comp Sci, Kowloon Tong, Hong Kong, Peoples R China
[2] Univ Technol Sydney, Ctr Artificial Intelligence, Ultimo, NSW 2007, Australia
[3] Natl Univ Singapore, Dept Comp Sci, Singapore 119077, Singapore
[4] City Univ Hong Kong, Dept Appl Social Sci, Kowloon Tong, Hong Kong, Peoples R China
[5] Global Business Coll Australia, Melbourne, Vic 3000, Australia
Keywords
Privacy; Optimization; Differential privacy; Robustness; Stochastic processes; Task analysis; Stochastic optimization; differential privacy; robustness; MACHINE;
DOI
10.1109/TKDE.2020.2963977
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
It is challenging for stochastic optimization to handle large-scale sensitive data safely. Duchi et al. recently proposed a private sampling strategy to prevent privacy leakage in stochastic optimization. However, this strategy degrades robustness, since it amounts to injecting noise into each gradient, which adversely affects updates of the primal variable. To address this challenge, we introduce a robust stochastic optimization method under the framework of local privacy, called Privacy-pREserving StochasTIc Gradual lEarning (PRESTIGE). PRESTIGE bridges private updates of the primal variable (via private sampling) with gradual curriculum learning (CL). Although the noise injection raises an issue similar to label noise, the robust learning process of CL can combat label noise. Thus, PRESTIGE yields "private but robust" updates of the primal variable on the curriculum, that is, a reordered label sequence provided by CL. In theory, we derive the convergence rate and maximum complexity of PRESTIGE. Empirical results on six datasets show that PRESTIGE achieves a good tradeoff between privacy preservation and robustness compared with baselines.
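The abstract's core idea, noisy (private) gradient updates applied in an easy-to-hard order chosen by curriculum learning, can be sketched roughly as follows. This is a toy illustration under assumed details (logistic loss, Gaussian noise with a hypothetical scale `sigma`, loss-based curriculum ordering), not the paper's actual algorithm or its calibrated privacy mechanism.

```python
import numpy as np

def noisy_gradient(w, x, y, sigma):
    """Logistic-loss gradient with Gaussian noise injected on each
    gradient (a stand-in for private sampling; `sigma` is a
    hypothetical noise scale, not a calibrated privacy parameter)."""
    margin = y * np.dot(w, x)
    grad = -y * x / (1.0 + np.exp(margin))
    return grad + np.random.normal(0.0, sigma, size=w.shape)

def curriculum_order(w, X, Y):
    """Reorder samples from easy (low loss) to hard (high loss),
    mimicking the reordered label sequence provided by CL."""
    losses = np.logaddexp(0.0, -Y * (X @ w))  # stable log(1 + e^{-margin})
    return np.argsort(losses)

def prestige_sketch(X, Y, sigma=0.05, lr=0.1, epochs=5, seed=0):
    """Toy 'private but robust' SGD: noisy gradients, curriculum order."""
    np.random.seed(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in curriculum_order(w, X, Y):
            w -= lr * noisy_gradient(w, X[i], Y[i], sigma)
    return w
```

The sketch shows only the interplay the abstract describes: each update is perturbed (privacy), while the curriculum ordering keeps the noisy updates on easier examples first (robustness).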
Pages: 3129-3140 (12 pages)