Dynamic Regret Bounds for Constrained Online Nonconvex Optimization Based on Polyak-Lojasiewicz Regions

Cited by: 3
Authors
Mulvaney-Kemp, Julie [1 ]
Park, SangWoo [1 ]
Jin, Ming [2 ]
Lavaei, Javad [1 ]
Affiliations
[1] Univ Calif Berkeley, Dept Ind Engn & Operat Res, Berkeley, CA 94720 USA
[2] Virginia Tech, Dept Elect & Comp Engn, Blacksburg, VA 24061 USA
Keywords
Heuristic algorithms; Optimization; Loss measurement; Target tracking; Network systems; Convergence; Control systems; Adversarial machine learning; dynamic regret; nonconvex optimization; online optimization; optimization methods; randomized algorithms; time-varying systems; STABILITY; SYSTEMS;
DOI
10.1109/TCNS.2022.3203798
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
Online optimization problems are well understood in the convex case, where algorithmic performance is typically measured relative to the best fixed decision. In this article, we shed light on online nonconvex optimization problems in which algorithms are evaluated against the optimal decision at each time using the more useful notion of dynamic regret. The focus is on loss functions that are arbitrarily nonconvex but have global solutions that are slowly time-varying. We address this problem by first analyzing the region around the global solution at each time to define time-varying target sets, which contain the global solution and exhibit desirable properties under the projected gradient descent algorithm. All points in a target set satisfy the proximal Polyak-Lojasiewicz inequality, among other conditions. Then, we introduce two algorithms and prove that the dynamic regret for each algorithm is bounded by a function of the temporal variation in the optimal decision. The first algorithm assumes that the decision maker has some prior knowledge about the initial objective function and may query the gradient repeatedly at each time. This algorithm ensures that decisions are within the target set at every time. The second algorithm makes no assumption about prior knowledge. It instead relies on random sampling and memory to find and then track the target sets over time. In this case, the landscape of the loss functions determines the likelihood that the dynamic regret will be small. Numerical experiments validate these theoretical results and highlight the impact of a single low-complexity problem early in the sequence.
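The setting described in the abstract can be illustrated with a minimal sketch (this is not the paper's algorithm, and the box constraint, quadratic losses, and step size below are illustrative assumptions): online projected gradient descent takes one gradient step per round on a sequence of losses whose minimizers drift slowly, and dynamic regret is accumulated against the per-round optimum rather than a single fixed comparator. For intuition, points satisfying a Polyak-Lojasiewicz-type condition obey a bound of the form f(x) - f* <= ||grad f(x)||^2 / (2*mu), which is what lets gradient steps make progress without convexity.

```python
import numpy as np

def project(x, lo, hi):
    """Euclidean projection onto the box constraint set [lo, hi]^d."""
    return np.clip(x, lo, hi)

def online_pgd(losses, grads, minimizers, x0, step=0.1, lo=-2.0, hi=2.0):
    """One projected gradient step per round; accumulate dynamic regret
    sum_t [f_t(x_t) - f_t(x_t^*)] against the per-round minimizer x_t^*."""
    x = np.asarray(x0, dtype=float)
    regret = 0.0
    for f, g, x_star in zip(losses, grads, minimizers):
        regret += f(x) - f(x_star)      # compare to this round's optimum
        x = project(x - step * g(x), lo, hi)
    return regret

# Illustrative sequence: quadratic losses whose global minimizer drifts
# slowly, mimicking the "slowly time-varying global solution" setting.
T = 50
centers = [np.array([0.02 * t]) for t in range(T)]
losses = [lambda x, c=c: float(np.sum((x - c) ** 2)) for c in centers]
grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]

regret = online_pgd(losses, grads, centers, x0=[1.0])
```

Because the drift per round is small, the iterate tracks the moving minimizer and the accumulated dynamic regret stays bounded by a function of the total path variation, which is the flavor of guarantee the abstract describes.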
Pages: 599-611 (13 pages)
Related Papers (50 results)
  • [1] Asynchronous Parallel Nonconvex Optimization Under the Polyak-Lojasiewicz Condition
    Yazdani, Kasra
    Hale, Matthew
    IEEE CONTROL SYSTEMS LETTERS, 2022, 6 : 524 - 529
  • [2] Distributed Event-Triggered Nonconvex Optimization under Polyak-Lojasiewicz Condition
    Gao, Chao
    Xu, Lei
    Zhang, Kunpeng
    Li, Yuzhe
    Liu, Zhiwei
    Yang, Tao
    2024 18TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION, ICARCV, 2024, : 930 - 935
  • [3] Over-Parameterized Model Optimization with Polyak-Lojasiewicz Condition
    Chen, Yixuan
    Shi, Yubin
    Dong, Mingzhi
    Yang, Xiaochen
    Li, Dongsheng
    Wang, Yujiang
    Dick, Robert P.
    Lv, Qin
    Zhao, Yingying
    Yang, Fan
    Gu, Ning
    Shang, Li
    11th International Conference on Learning Representations, ICLR 2023, 2023,
  • [4] Faster Stochastic Algorithms for Minimax Optimization under Polyak-Lojasiewicz Conditions
    Chen, Lesi
    Yao, Boyuan
    Luo, Luo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [5] A Generalized Alternating Method for Bilevel Optimization under the Polyak-Lojasiewicz Condition
    Xiao, Quan
    Lu, Songtao
    Chen, Tianyi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36, NEURIPS 2023, 2023,
  • [6] Quantized Zeroth-Order Gradient Tracking Algorithm for Distributed Nonconvex Optimization Under Polyak-Lojasiewicz Condition
    Xu, Lei
    Yi, Xinlei
    Deng, Chao
    Shi, Yang
    Chai, Tianyou
    Yang, Tao
    IEEE TRANSACTIONS ON CYBERNETICS, 2024, 54 (10) : 5746 - 5758
  • [7] Online Stochastic Gradient Methods Under Sub-Weibull Noise and the Polyak-Lojasiewicz Condition
    Kim, Seunghyun
    Madden, Liam
    Dall'Anese, Emiliano
    2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022, : 3499 - 3506