Online convex optimization using coordinate descent algorithms

Cited: 0
Authors
Lin, Yankai [1 ]
Shames, Iman [2 ]
Nesic, Dragan [3 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Mech Engn, Eindhoven, Netherlands
[2] Australian Natl Univ, Sch Engn, CIICADA Lab, Acton, ACT 0200, Australia
[3] Univ Melbourne, Dept Elect & Elect Engn, Parkville, Vic 3010, Australia
Funding
Australian Research Council;
Keywords
Online convex optimization; Coordinate descent; Online learning; Regret minimization;
DOI
10.1016/j.automatica.2024.111681
CLC Classification
TP [Automation and computer technology];
Discipline Code
0812;
Abstract
This paper considers the problem of online optimization where the objective function is time-varying. In particular, we extend coordinate descent type algorithms to the online case, where the objective function changes after a finite number of iterations of the algorithm. Instead of solving the problem exactly at each time step, we apply only a finite number of iterations per step. Commonly used notions of regret measure the performance of the online algorithm. Moreover, we consider coordinate descent algorithms with different updating rules, including both deterministic and stochastic rules developed in the classical offline optimization literature, and give a thorough regret analysis for each case. Finally, numerical simulations illustrate the theoretical results. (c) 2024 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
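The setting described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the quadratic loss f_t, the drift model for its minimizer, and the cyclic coordinate updating rule are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the paper): time-varying quadratic
# losses f_t(x) = 0.5 * ||x - c_t||^2 with a slowly drifting minimizer c_t.
d, T, K = 5, 200, 3          # dimension, horizon, inner iterations per step
L = 1.0                      # coordinate-wise Lipschitz constant of the gradient
c = rng.standard_normal(d)   # initial minimizer of the loss
x = np.zeros(d)              # decision variable

regret = 0.0
for t in range(T):
    c += 0.01 * rng.standard_normal(d)      # objective drifts over time
    f = lambda z: 0.5 * np.sum((z - c) ** 2)
    regret += f(x) - 0.0                    # f_t(x_t) - min_z f_t(z); min is 0 here
    # only K coordinate descent iterations per time step, cyclic updating rule
    for k in range(K):
        i = (t * K + k) % d                 # cyclic coordinate choice
        grad_i = x[i] - c[i]                # partial derivative of f_t at x
        x[i] -= grad_i / L                  # coordinate gradient step

print(f"average regret after T={T} steps: {regret / T:.4f}")
```

Because each step applies only K inner iterations rather than solving the time-varying problem exactly, the accumulated regret stays bounded as long as the objective drifts slowly relative to the update rate.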
Pages: 13
Related Papers
50 records
  • [41] COORDINATE DESCENT ALGORITHMS FOR LASSO PENALIZED REGRESSION
    Wu, Tong Tong
    Lange, Kenneth
    ANNALS OF APPLIED STATISTICS, 2008, 2 (01): : 224 - 244
  • [42] Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization
    Patrascu, Andrei
    Necoara, Ion
    JOURNAL OF GLOBAL OPTIMIZATION, 2015, 61 : 19 - 46
  • [43] Accelerated Randomized Mirror Descent Algorithms for Composite Non-strongly Convex Optimization
    Hien, Le Thi Khanh
    Nguyen, Cuong V.
    Xu, Huan
    Lu, Canyi
    Feng, Jiashi
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2019, 181 : 541 - 566
  • [44] ItsDEAL: Inexact two-level smoothing descent algorithms for weakly convex optimization
    Kabgani, Alireza
    Ahookhosh, Masoud
    arXiv
  • [45] Accelerated Randomized Mirror Descent Algorithms for Composite Non-strongly Convex Optimization
    Hien, Le Thi Khanh
    Nguyen, Cuong V.
    Xu, Huan
    Lu, Canyi
    Feng, Jiashi
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2019, 181 (02) : 541 - 566
  • [46] On steepest descent algorithms for discrete convex functions
    Murota, K.
    SIAM JOURNAL ON OPTIMIZATION, 2004, 14 (03) : 699 - 707
  • [47] Low-complexity RLS algorithms using dichotomous coordinate descent iterations
    Zakharov, Yuriy V.
    White, George P.
    Liu, Jie
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2008, 56 (07) : 3150 - 3161
  • [48] Adaptive Algorithms for Online Convex Optimization with Long-term Constraints
    Jenatton, Rodolphe
    Huang, Jim C.
    Archambeau, Cedric
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [49] Online Gradient Descent Learning Algorithms
    Ying, Yiming
    Pontil, Massimiliano
    FOUNDATIONS OF COMPUTATIONAL MATHEMATICS, 2008, 8 : 561 - 596
  • [50] Efficient parallel coordinate descent algorithm for convex optimization problems with separable constraints: Application to distributed MPC
    Necoara, Ion
    Clipici, Dragos
    JOURNAL OF PROCESS CONTROL, 2013, 23 (03) : 243 - 253