Optimal complexity and certification of Bregman first-order methods

Cited by: 0
Authors
Radu-Alexandru Dragomir
Adrien B. Taylor
Alexandre d’Aspremont
Jérôme Bolte
Affiliations
[1] Université Toulouse I Capitole, INRIA
[2] D.I. École Normale Supérieure, CNRS and D.I., UMR 8548
[3] D.I. École Normale Supérieure, TSE
[4] École Normale Supérieure
[5] Université Toulouse 1 Capitole
Source
Mathematical Programming | 2022, Vol. 194
Keywords
90C25; 90C06; 90C60; 90C22; 68Q25
DOI
Not available
Abstract
We provide a lower bound showing that the O(1/k) convergence rate of the NoLips method (a.k.a. Bregman Gradient or Mirror Descent) is optimal for the class of problems satisfying the relative smoothness assumption. This assumption appeared in recent developments around the Bregman Gradient method, where acceleration remains an open issue. The main inspiration behind this lower bound stems from an extension of the performance estimation framework of Drori and Teboulle (Mathematical Programming, 2014) to Bregman first-order methods. This technique allows computing worst-case scenarios for NoLips in the context of relatively smooth minimization. In particular, we use numerically generated worst-case examples as a basis for obtaining the general lower bound.
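The Bregman Gradient (NoLips) step discussed in the abstract can be sketched as follows. This is a generic illustration, not code from the paper: the objective f, the Burg-entropy reference function h, and the step-size constant L below are illustrative assumptions, chosen so that f is smooth relative to h (i.e. L·h − f is convex).

```python
import numpy as np

def nolips(grad_f, grad_h, grad_h_inv, x0, L, n_iter=100):
    """Bregman Gradient (NoLips) method, sketched in mirror form.

    Each iteration solves the Bregman proximal step
        x_{k+1} = argmin_u <grad_f(x_k), u> + L * D_h(u, x_k),
    whose optimality condition is the mirror update
        grad_h(x_{k+1}) = grad_h(x_k) - (1/L) * grad_f(x_k),
    where L is a relative-smoothness constant of f w.r.t. h.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = grad_h_inv(grad_h(x) - grad_f(x) / L)
    return x

# Illustrative problem (an assumption, not from the paper):
# minimize f(x) = sum(x - log x) over the positive orthant.
# f is 1-smooth relative to the Burg entropy h(x) = -sum(log x);
# we use the conservative constant L = 2.
grad_f = lambda x: 1.0 - 1.0 / x      # f'(x) = 1 - 1/x, componentwise
grad_h = lambda x: -1.0 / x           # h'(x) = -1/x
grad_h_inv = lambda y: -1.0 / y       # inverse of h' on y < 0

x_star = nolips(grad_f, grad_h, grad_h_inv, x0=[5.0, 0.2], L=2.0)
# iterates stay positive and approach the minimizer x = 1 componentwise
```

Note that with the Euclidean reference function h(x) = ½‖x‖² the same mirror update reduces to plain gradient descent with step 1/L; the choice of h is what adapts the method to geometries where f has no globally Lipschitz gradient.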
Pages: 41-83 (42 pages)
Related Articles (50 in total)
  • [31] Alternating complexity of counting first-order logic for the subword order
    Kuske, Dietrich
    Schwarz, Christian
    Acta Informatica, 2023, 60(01): 79-100
  • [33] A Simple Nearly Optimal Restart Scheme For Speeding Up First-Order Methods
    Renegar, James
    Grimmer, Benjamin
    Foundations of Computational Mathematics, 2022, 22(01): 211-256
  • [35] First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization
    Mokhtari, Aryan
    Ribeiro, Alejandro
    Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017, 30
  • [36] Low-Complexity First-Order Constraint Linearization Methods for Efficient Nonlinear MPC
    Torrisi, Giampaolo
    Grammatico, Sergio
    Frick, Damian
    Robbiani, Tommaso
    Smith, Roy S.
    Morari, Manfred
    2017 IEEE 56th Annual Conference on Decision and Control (CDC), 2017
  • [37] First-Order Methods for Convex Optimization
    Dvurechensky, Pavel
    Shtern, Shimrit
    Staudigl, Mathias
    EURO Journal on Computational Optimization, 2021, 9
  • [38] Iteration-Complexity of First-Order Augmented Lagrangian Methods for Convex Conic Programming
    Lu, Zhaosong
    Zhou, Zirui
    SIAM Journal on Optimization, 2023, 33(02): 1159-1190
  • [39] Bregman primal–dual first-order method and application to sparse semidefinite programming
    Jiang, Xin
    Vandenberghe, Lieven
    Computational Optimization and Applications, 2022, 81: 127-159
  • [40] Optimal First-Order Algorithms as a Function of Inequalities
    Park, Chanwoo
    Ryu, Ernest K.
    Journal of Machine Learning Research, 2024, 25