Linear convergence of first order methods for non-strongly convex optimization

Cited by: 0
Authors
I. Necoara
Yu. Nesterov
F. Glineur
Affiliations
[1] University Politehnica Bucharest, Automatic Control and Systems Engineering Department
[2] Université catholique de Louvain, Center for Operations Research and Econometrics
Source
Mathematical Programming | 2019, Vol. 175
Mathematics Subject Classification
90C25; 90C06; 65K05
DOI
Not available
Abstract
The standard assumption for proving linear convergence of first order methods for smooth convex optimization is the strong convexity of the objective function, an assumption which does not hold for many practical applications. In this paper, we derive linear convergence rates for several first order methods applied to smooth non-strongly convex constrained optimization problems, i.e., problems whose objective function has a Lipschitz continuous gradient and satisfies some relaxed strong convexity condition. In particular, for smooth constrained convex optimization, we provide several relaxations of the strong convexity condition and prove that they suffice for linear convergence of several first order methods, such as the projected gradient, fast gradient, and feasible descent methods. We also give examples of functional classes that satisfy the proposed relaxations of strong convexity. Finally, we show that these relaxed strong convexity conditions cover important applications, including the solution of linear systems, linear programming, and dual formulations of linearly constrained convex problems.
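
One relaxation studied in this line of work is quadratic functional growth: on the feasible set, f(x) − f* ≥ (κ/2)·dist(x, X*)², where X* denotes the optimal set. A convex quadratic minimized over a polyhedron satisfies such a growth condition even when its Hessian is singular, which is why projected gradient can still converge linearly. The Python sketch below illustrates this numerically; it is not code from the paper, and the problem sizes, box bounds, and iteration budget are illustrative assumptions.

import numpy as np

# Hedged sketch (not from the paper): projected gradient on
# min 0.5*||A x - b||^2 subject to box constraints, with a
# rank-deficient A, so the objective is convex but NOT strongly convex.
# Sizes, bounds, and iteration count are illustrative assumptions.
rng = np.random.default_rng(0)
m, n, r = 40, 60, 15                      # r < min(m, n): A has rank r
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
x_feas = rng.uniform(-0.5, 0.5, n)        # feasible for the box below
b = A @ x_feas                            # consistent data: optimal value f* = 0

L = np.linalg.norm(A, 2) ** 2             # gradient Lipschitz constant ||A^T A||_2
lo, hi = -1.0, 1.0                        # box: lo <= x_i <= hi

x = np.zeros(n)
for k in range(301):
    g = A.T @ (A @ x - b)                 # gradient of 0.5*||A x - b||^2
    x = np.clip(x - g / L, lo, hi)        # gradient step + projection onto box
    if k % 50 == 0:
        f = 0.5 * np.linalg.norm(A @ x - b) ** 2
        print(f"iter {k:3d}   f(x) - f* = {f:.3e}")

Despite the singular Hessian A^T A, the printed optimality gaps typically shrink by a roughly constant factor per fixed number of iterations, i.e., at a linear rate of the form (1 − ρ)^k, rather than at the sublinear O(1/k) rate that plain convexity alone guarantees.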
Pages: 69-107
Number of pages: 38
Related papers
50 records in total
  • [1] Linear convergence of first order methods for non-strongly convex optimization
    Necoara, I.
    Nesterov, Yu.
    Glineur, F.
    Mathematical Programming, 2019, 175(1-2): 69-107
  • [2] Methodology and first-order algorithms for solving nonsmooth and non-strongly convex bilevel optimization problems
    Doron, Lior
    Shtern, Shimrit
    Mathematical Programming, 2023, 201(1-2): 521-558
  • [3] On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces
    Both, Jakub Wiktor
    Optimization Letters, 2022, 16(2): 729-743
  • [4] Stochastic quasi-Newton methods for non-strongly convex problems: convergence and rate analysis
    Yousefian, Farzad
    Nedić, Angelia
    Shanbhag, Uday V.
    2016 IEEE 55th Conference on Decision and Control (CDC), 2016: 4496-4503
  • [5] Distributed Stochastic Optimization with Compression for Non-Strongly Convex Objectives
    Li, Xuanjie
    Xu, Yuedong
    CMES-Computer Modeling in Engineering & Sciences, 2024, 139(1): 459-481
  • [6] An aggressive reduction on the complexity of optimization for non-strongly convex objectives
    Luo, Zhijian
    Chen, Siyu
    Hou, Yueen
    Gao, Yanzeng
    Qian, Yuntao
    International Journal of Wavelets, Multiresolution and Information Processing, 2023, 21(5)
  • [7] On the Q-Linear Convergence of Distributed Generalized ADMM Under Non-Strongly Convex Function Components
    Maros, Marie
    Jaldén, Joakim
    IEEE Transactions on Signal and Information Processing over Networks, 2019, 5(3): 442-453
  • [8] Convergence Results of a Nested Decentralized Gradient Method for Non-strongly Convex Problems
    Choi, Woocheol
    Kim, Doheon
    Yun, Seok-Bae
    Journal of Optimization Theory and Applications, 2022, 195(1): 172-204