Dynamic Regret Bounds for Constrained Online Nonconvex Optimization Based on Polyak-Łojasiewicz Regions

Cited by: 3
Authors
Mulvaney-Kemp, Julie [1 ]
Park, SangWoo [1 ]
Jin, Ming [2 ]
Lavaei, Javad [1 ]
Affiliations
[1] Univ Calif Berkeley, Dept Ind Engn & Operat Res, Berkeley, CA 94720 USA
[2] Virginia Tech, Dept Elect & Comp Engn, Blacksburg, VA 24061 USA
Keywords
Heuristic algorithms; Optimization; Loss measurement; Target tracking; Network systems; Convergence; Control systems; Adversarial machine learning; dynamic regret; nonconvex optimization; online optimization; optimization methods; randomized algorithms; time-varying systems; STABILITY; SYSTEMS;
DOI
10.1109/TCNS.2022.3203798
Chinese Library Classification (CLC) Number
TP [automation technology; computer technology];
Discipline Code
0812;
Abstract
Online optimization problems are well understood in the convex case, where algorithmic performance is typically measured relative to the best fixed decision. In this article, we shed light on online nonconvex optimization problems in which algorithms are evaluated against the optimal decision at each time using the more useful notion of dynamic regret. The focus is on loss functions that are arbitrarily nonconvex but have global solutions that are slowly time-varying. We address this problem by first analyzing the region around the global solution at each time to define time-varying target sets, which contain the global solution and exhibit desirable properties under the projected gradient descent algorithm. All points in a target set satisfy the proximal Polyak-Łojasiewicz inequality, among other conditions. Then, we introduce two algorithms and prove that the dynamic regret for each algorithm is bounded by a function of the temporal variation in the optimal decision. The first algorithm assumes that the decision maker has some prior knowledge about the initial objective function and may query the gradient repeatedly at each time. This algorithm ensures that decisions are within the target set at every time. The second algorithm makes no assumption about prior knowledge. It instead relies on random sampling and memory to find and then track the target sets over time. In this case, the landscape of the loss functions determines the likelihood that the dynamic regret will be small. Numerical experiments validate these theoretical results and highlight the impact of a single low-complexity problem early in the sequence.
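As background for the abstract above (standard definitions, not material taken from the article itself): the dynamic regret of decisions x_1, ..., x_T against the per-time global minimizers is

    \mathrm{Regret}^{\mathrm{d}}_T = \sum_{t=1}^{T} \bigl( f_t(x_t) - f_t(x_t^\star) \bigr), \qquad x_t^\star \in \arg\min_{x \in \mathcal{X}} f_t(x),

and bounds of the kind described above are typically expressed through the temporal variation of the optimizers, e.g., the path length \sum_{t=2}^{T} \lVert x_t^\star - x_{t-1}^\star \rVert.

The following minimal Python sketch illustrates the projected-gradient setting with repeated gradient queries at each time step and dynamic-regret bookkeeping. It assumes simple box constraints and toy drifting quadratic losses; it illustrates the general setting only, not the article's target-set algorithms.

import numpy as np

def project(x, lo, hi):
    # Euclidean projection onto a box [lo, hi]; stands in for the projection
    # onto a general constraint set.
    return np.clip(x, lo, hi)

def online_pgd(losses, minimizers, x0, step, lo, hi, inner_iters=1):
    # losses      : list of (f_t, grad_f_t) pairs, one per time step
    # minimizers  : per-time global minimizers x_t* (used only for regret)
    # inner_iters : number of gradient queries allowed at each time step
    x = np.asarray(x0, dtype=float)
    regret = 0.0
    for (f, grad), x_star in zip(losses, minimizers):
        for _ in range(inner_iters):            # repeated queries at time t
            x = project(x - step * grad(x), lo, hi)
        regret += f(x) - f(x_star)              # dynamic-regret increment
    return regret

# Toy usage: slowly drifting quadratics f_t(x) = ||x - c_t||^2, whose
# constrained minimizer is c_t itself.
centers = [np.array([0.1 * t, -0.05 * t]) for t in range(20)]
losses = [(lambda x, c=c: float(np.sum((x - c) ** 2)),
           lambda x, c=c: 2.0 * (x - c)) for c in centers]
print(online_pgd(losses, centers, x0=[1.0, 1.0], step=0.25,
                 lo=-5.0, hi=5.0, inner_iters=3))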
Pages: 599 - 611
Number of pages: 13