Private Stochastic Non-convex Optimization with Improved Utility Rates

Cited by: 0
Authors:
Zhang, Qiuchen [1 ]
Ma, Jing [1 ]
Lou, Jian [1 ,2 ]
Xiong, Li [1 ]
Affiliations:
[1] Emory Univ, Atlanta, GA 30322 USA
[2] Xidian Univ, Xian, Peoples R China
Keywords: (none listed)
DOI: not available
CLC Classification: TP18 [Theory of Artificial Intelligence]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract:
We study differentially private (DP) stochastic nonconvex optimization, focusing on its understudied utility measures: the expected excess empirical risk and the expected excess population risk. While excess risks are extensively studied for convex optimization, they are rarely studied for nonconvex optimization, especially the expected excess population risk. For the convex case, recent studies show that, under certain conditions, private optimization can achieve the same order of excess population risk as non-private optimization. Whether such an ideal excess population risk is achievable remains an open question for the nonconvex case. In this paper, we make progress toward an affirmative answer to this open problem: under certain conditions (i.e., well-conditioned nonconvexity), DP nonconvex optimization can indeed achieve the same excess population risk as the non-private algorithm in most common parameter regimes. We obtain these improved utility rates over existing results by designing and analyzing a stagewise DP-SGD with early momentum algorithm, establishing both excess empirical risk and excess population risk bounds while guaranteeing differential privacy. These are also the first known excess empirical and population risk results for DP-SGD with momentum. Experimental results on shallow and deep neural networks, applied to simple and complex real datasets respectively, corroborate the theoretical results.
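The record does not spell out the stagewise DP-SGD with early momentum algorithm, but the abstract names its two ingredients: per-step gradient clipping with Gaussian noise for differential privacy, and a momentum buffer used only in an early phase of training. The following minimal sketch illustrates how those ingredients typically combine; the function name, parameters, and the single-stage structure are our own illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def dp_sgd_early_momentum(grad_fn, w0, n_steps, lr=0.1, clip=1.0,
                          noise_mult=1.0, beta=0.9, momentum_steps=None,
                          rng=None):
    """Illustrative DP-SGD with early momentum.

    Each gradient is clipped to L2 norm `clip`, then perturbed with
    Gaussian noise of std `clip * noise_mult` (the standard DP-SGD
    recipe). A heavy-ball momentum buffer is used only for the first
    `momentum_steps` iterations; afterwards, plain noisy SGD updates
    are applied.
    """
    rng = np.random.default_rng(rng)
    w = np.asarray(w0, dtype=float).copy()
    v = np.zeros_like(w)
    if momentum_steps is None:
        momentum_steps = n_steps
    for t in range(n_steps):
        g = grad_fn(w)
        g = g / max(1.0, np.linalg.norm(g) / clip)    # clip gradient norm
        g = g + rng.normal(0.0, clip * noise_mult,
                           size=g.shape)              # add privacy noise
        if t < momentum_steps:
            v = beta * v + (1 - beta) * g             # early momentum phase
            w = w - lr * v
        else:
            w = w - lr * g                            # plain noisy SGD phase
    return w
```

For example, minimizing the toy objective f(w) = ||w||^2 / 2 (gradient `w`) with a small noise multiplier drives the iterate toward the origin, with the momentum phase accelerating early progress.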
Pages: 3370-3376 (7 pages)
Related Papers (50 total):
  • [1] Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings
    Bassily, Raef
    Guzman, Cristobal
    Menart, Michael
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [2] An Improved Convergence Analysis for Decentralized Online Stochastic Non-Convex Optimization
    Xin, Ran
    Khan, Usman A.
    Kar, Soummya
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 : 1842 - 1858
  • [3] Differentially Private Non-Convex Optimization under the KL Condition with Optimal Rates
    Menart, Michael
    Ullah, Enayat
    Arora, Raman
    Bassily, Raef
    Guzman, Cristobal
    INTERNATIONAL CONFERENCE ON ALGORITHMIC LEARNING THEORY, VOL 237, 2024, 237
  • [4] Stochastic Successive Convex Approximation for Non-Convex Constrained Stochastic Optimization
    Liu, An
    Lau, Vincent K. N.
    Kananian, Borna
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2019, 67 (16) : 4189 - 4203
  • [5] Natasha: Faster Non-Convex Stochastic Optimization via Strongly Non-Convex Parameter
    Allen-Zhu, Zeyuan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [6] On Graduated Optimization for Stochastic Non-Convex Problems
    Hazan, Elad
    Levy, Kfir Y.
    Shalev-Shwartz, Shai
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [7] Lower bounds for non-convex stochastic optimization
    Arjevani, Yossi
    Carmon, Yair
    Duchi, John C.
    Foster, Dylan J.
    Srebro, Nathan
    Woodworth, Blake
    MATHEMATICAL PROGRAMMING, 2023, 199 (1-2) : 165 - 214
  • [9] Faster Rates of Private Stochastic Convex Optimization
    Su, Jinyan
    Hu, Lijie
    Wang, Di
    INTERNATIONAL CONFERENCE ON ALGORITHMIC LEARNING THEORY, VOL 167, 2022, 167
  • [10] Private Stochastic Convex Optimization with Optimal Rates
    Bassily, Raef
    Feldman, Vitaly
    Talwar, Kunal
    Thakurta, Abhradeep
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32