Private (Stochastic) Non-Convex Optimization Revisited: Second-Order Stationary Points and Excess Risks

Times cited: 0
Authors
Ganesh, Arun [1 ]
Liu, Daogao [2 ]
Oh, Sewoong [1 ,2 ]
Thakurta, Abhradeep [3 ]
Affiliations
[1] Google Res, Mountain View, CA 94043 USA
[2] Univ Washington, Seattle, WA USA
[3] Google DeepMind, Mountain View, CA USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We revisit the challenge of non-convex optimization under the constraint of differential privacy. Building on the variance-reduced algorithm SpiderBoost, we propose a framework that employs two types of gradient oracles: one that estimates the gradient at a single point and a cheaper one that estimates the gradient difference between two points. The framework keeps the gradient estimates accurate throughout the run and thereby improves the rates for finding second-order stationary points. We additionally consider the harder task of locating the global minima of a non-convex objective via the exponential mechanism under almost no assumptions. Our preliminary results suggest that the regularized exponential mechanism matches previous empirical and population risk bounds achieved by polynomial-time algorithms while removing the smoothness assumption. Furthermore, if running time is not constrained, the exponential mechanism achieves a strong population risk bound, and we provide a nearly matching lower bound.
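As a rough illustration of the two-oracle idea described in the abstract (not the authors' algorithm), the sketch below runs a SpiderBoost-style variance-reduced loop in which a noisy full-gradient oracle is called every q steps and a cheaper noisy gradient-difference oracle is used in between. The names (oracle_o1, oracle_o2, dp_spider_sketch) and the noise scales sigma1, sigma2 are illustrative assumptions; calibrating the Gaussian noise to a concrete (epsilon, delta) budget and per-example gradient clipping are omitted.

```python
import numpy as np

def oracle_o1(grad_fn, data, x, sigma1, rng):
    """Oracle 1: noisy full-gradient estimate at a single point x."""
    g = np.mean([grad_fn(x, d) for d in data], axis=0)
    return g + rng.normal(0.0, sigma1, size=g.shape)

def oracle_o2(grad_fn, batch, x, x_prev, sigma2, rng):
    """Oracle 2 (cheaper): noisy estimate of the gradient difference
    between two consecutive iterates, computed on a small minibatch."""
    d = np.mean([grad_fn(x, b) - grad_fn(x_prev, b) for b in batch], axis=0)
    return d + rng.normal(0.0, sigma2, size=d.shape)

def dp_spider_sketch(grad_fn, data, x0, eta=0.1, T=100, q=10,
                     batch_size=32, sigma1=0.05, sigma2=0.005, seed=0):
    """SpiderBoost-style loop: refresh the gradient estimate with Oracle 1
    every q steps, otherwise update it incrementally with Oracle 2."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    x_prev = x.copy()
    v = None
    for t in range(T):
        if t % q == 0:
            v = oracle_o1(grad_fn, data, x, sigma1, rng)               # periodic full refresh
        else:
            idx = rng.choice(len(data), size=batch_size, replace=False)
            batch = [data[i] for i in idx]
            v = v + oracle_o2(grad_fn, batch, x, x_prev, sigma2, rng)  # cheap incremental update
        x_prev, x = x, x - eta * v                                     # descent step
    return x

# Toy usage (convex for simplicity): per-example loss 0.5*||x - a||^2, gradient x - a.
if __name__ == "__main__":
    data = [np.array([1.0, -2.0]) + 0.1 * np.random.randn(2) for _ in range(256)]
    print(dp_spider_sketch(lambda x, a: x - a, data, x0=np.zeros(2)))
```

The intuition behind running the difference oracle with less noise is that, for smooth losses, the per-example gradient difference has sensitivity proportional to the distance between consecutive iterates, which the step size keeps small.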
Pages: 24
Related papers (50 in total)
  • [1] Linearized ADMM Converges to Second-Order Stationary Points for Non-Convex Problems
    Lu, Songtao
    Lee, Jason D.
    Razaviyayn, Meisam
    Hong, Mingyi
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 : 4859 - 4874
  • [2] Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations
    Arjevani, Yossi
    Carmon, Yair
    Duchi, John C.
    Foster, Dylan J.
    Sekhari, Ayush
    Sridharan, Karthik
    CONFERENCE ON LEARNING THEORY, VOL 125, 2020, 125
  • [3] Second-order Optimization for Non-convex Machine Learning: an Empirical Study
    Xu, Peng
    Roosta, Fred
    Mahoney, Michael W.
    PROCEEDINGS OF THE 2020 SIAM INTERNATIONAL CONFERENCE ON DATA MINING (SDM), 2020, : 199 - 207
  • [4] Second-Order Step-Size Tuning of SGD for Non-Convex Optimization
    Castera, Camille
    Bolte, Jerome
    Fevotte, Cedric
    Pauwels, Edouard
    NEURAL PROCESSING LETTERS, 2022, 54 (03) : 1727 - 1752
  • [5] The Computational Complexity of Finding Stationary Points in Non-Convex Optimization
    Hollender, Alexandros
    Zampetakis, Manolis
    THIRTY SIXTH ANNUAL CONFERENCE ON LEARNING THEORY, VOL 195, 2023, 195
  • [6] The computational complexity of finding stationary points in non-convex optimization
    Hollender, Alexandros
    Zampetakis, Manolis
    MATHEMATICAL PROGRAMMING, 2024
  • [7] Private Stochastic Non-convex Optimization with Improved Utility Rates
    Zhang, Qiuchen
    Ma, Jing
    Lou, Jian
    Xiong, Li
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 3370 - 3376
  • [8] Second-Order Optimality in Non-Convex Decentralized Optimization via Perturbed Gradient Tracking
    Tziotis, Isidoros
    Caramanis, Constantine
    Mokhtari, Aryan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [9] Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings
    Bassily, Raef
    Guzman, Cristobal
    Menart, Michael
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34