An acceleration procedure for optimal first-order methods

Cited by: 4
Authors
Baes, Michel [1 ]
Buergisser, Michael [1 ]
Affiliations
[1] ETH, Inst Operat Res, CH-8092 Zurich, Switzerland
Source
OPTIMIZATION METHODS & SOFTWARE | 2014, Vol. 29, No. 3
Keywords
convex optimization; first-order methods; eigenvalue optimization
DOI
10.1080/10556788.2013.835812
Chinese Library Classification
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
We introduce an optimal first-order method that allows an easy and cheap evaluation of the local Lipschitz constant of the objective's gradient. Ideally, this constant should be chosen as small as possible at every iteration, while still yielding an indispensable upper bound on the value of the objective function. In previously existing variants of optimal first-order methods, this upper-bound inequality was constructed from points computed during the current iteration, so the optimal value of the Lipschitz constant could not be selected at the beginning of the iteration. In our variant, the upper-bound inequality is constructed from points available before the current iteration, which lets us set the Lipschitz constant to its optimal value at once. This procedure, although efficient in practice, has a higher worst-case complexity than standard optimal first-order methods. We propose an alternative strategy that retains the practical efficiency of this procedure while achieving an optimal worst-case complexity. Our generic scheme can be adapted to smoothing techniques. We report numerical experiments on large-scale eigenvalue minimization problems, where our method reduces computation times, compared with standard optimal methods, by two to three orders of magnitude on the largest problems considered.
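To make the contrast in the abstract concrete, here is a minimal Python sketch of the standard backtracking pattern the authors improve upon: an accelerated gradient method in which the upper-bound inequality f(x⁺) ≤ f(y) + ⟨∇f(y), x⁺ − y⟩ + (L/2)‖x⁺ − y‖² is tested using the point x⁺ computed during the current iteration, so each trial value of L costs an extra function evaluation. This is not the authors' algorithm; the function names, parameters, and quadratic test problem are illustrative assumptions.

```python
import numpy as np

def accelerated_gradient(f, grad, x0, L0=1.0, max_iter=500, tol=1e-8):
    """Nesterov-type accelerated gradient method with backtracking on L.

    The local Lipschitz estimate L is first decreased optimistically, then
    doubled until the upper-bound inequality
        f(x_new) <= f(y) + <grad f(y), x_new - y> + (L/2)||x_new - y||^2
    holds. Testing this inequality requires x_new, a point computed during
    the current iteration -- the limitation the paper's variant removes.
    """
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(max_iter):
        g = grad(y)
        L = max(L / 2.0, 1e-12)          # try a smaller local estimate first
        while True:
            x_new = y - g / L            # gradient step with step size 1/L
            d = x_new - y
            # certify L: objective must lie below the quadratic upper bound
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= 2.0                     # estimate too small: backtrack
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x, t = x_new, t_new
    return x

# Illustrative use on a strongly convex quadratic (assumed test problem):
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
A = A @ A.T + 50 * np.eye(50)            # positive definite Hessian
b = rng.standard_normal(50)
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = accelerated_gradient(f, grad, np.zeros(50))
print(np.linalg.norm(A @ x_star - b))    # near-zero gradient at the solution
```

In the paper's variant, by contrast, the upper-bound inequality is built from points already available before the iteration starts, so the Lipschitz estimate can be set to its best admissible value at once, without an inner backtracking loop of this kind.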
Pages: 610-628
Page count: 19