ASYMPTOTIC OPTIMALITY IN STOCHASTIC OPTIMIZATION

Cited: 16
Authors
Duchi, John C. [1 ]
Ruan, Feng [1 ]
Affiliations
[1] Stanford Univ, Dept Stat, Stanford, CA 94305 USA
Source
ANNALS OF STATISTICS | 2021, Vol. 49, No. 1
Keywords
Local asymptotic minimax theory; convex analysis; stochastic gradients; manifold identification; TILT STABILITY; APPROXIMATION;
DOI
10.1214/19-AOS1831
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
We study local complexity measures for stochastic convex optimization problems, providing a local minimax theory analogous to that of Hajek and Le Cam for classical statistical problems. We give complementary optimality results, developing fully online methods that adaptively achieve optimal convergence guarantees. Our results provide function-specific lower bounds and convergence results that make precise a correspondence between statistical difficulty and the geometric notion of tilt-stability from optimization. As part of this development, we show how variants of Nesterov's dual averaging (a stochastic gradient-based procedure) guarantee finite-time identification of constraints in optimization problems, while stochastic gradient procedures fail. Additionally, we highlight a gap between problems with linear and nonlinear constraints: standard stochastic-gradient-based procedures are suboptimal even for the simplest nonlinear constraints, necessitating the development of asymptotically optimal Riemannian stochastic gradient methods.
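
To make the constraint-identification claim concrete, below is a minimal Python sketch (an illustration, not the paper's procedure) contrasting projected stochastic gradient descent with Nesterov-style dual averaging on a toy nonnegativity-constrained quadratic. The objective 0.5 * ||x - a||^2, the vector a, and the step-size constant alpha are hypothetical choices, picked so the solution max(a, 0) sits on the constraint boundary.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 5
    a = np.array([1.0, 0.5, -0.5, -1.0, -2.0])  # solution max(a, 0) is zero on the last 3 coordinates

    def grad(x):
        # Stochastic gradient of f(x) = 0.5 * ||x - a||^2 with N(0, I) noise.
        return (x - a) + rng.normal(size=d)

    x_sgd = np.zeros(d)   # projected-SGD iterate
    x_da = np.zeros(d)    # dual-averaging iterate
    z = np.zeros(d)       # running sum of gradients for dual averaging
    alpha = 1.0           # hypothetical step-size constant

    for k in range(1, 5001):
        step = alpha / np.sqrt(k)
        # Projected SGD: gradient step, then project onto {x >= 0}.
        x_sgd = np.maximum(x_sgd - step * grad(x_sgd), 0.0)
        # Dual averaging: x = argmin_{x >= 0} <z, x> + (sqrt(k) / (2 * alpha)) * ||x||^2.
        z += grad(x_da)
        x_da = np.maximum(-alpha * z / np.sqrt(k), 0.0)

    print("projected SGD:", np.round(x_sgd, 3))   # active coordinates jitter near zero
    print("dual averaging:", np.round(x_da, 3))   # active coordinates are exactly zero

The design difference behind the gap: dual averaging thresholds the running sum of gradients, so the noise averages out and the active (zero) coordinates are hit exactly after finitely many iterations, whereas projected SGD thresholds a single noisy gradient at each step, so its iterate keeps drifting off the constraint. This mirrors, in a toy setting, the identification phenomenon described in the abstract.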
Pages: 21-48
Page count: 28