The effect of deterministic noise in subgradient methods

Cited by: 0
Authors
Angelia Nedić
Dimitri P. Bertsekas
Affiliations
[1] UIUC, Department of Industrial and Enterprise Systems Engineering
[2] M.I.T., Department of Electrical Engineering and Computer Science
Source
Mathematical Programming | 2010 / Volume 125
Keywords
90C25
DOI
Not available
Abstract
In this paper, we study the influence of noise on subgradient methods for convex constrained optimization. The noise may arise from various sources, and is manifested in inexact computation of the subgradients and function values. Assuming that the noise is deterministic and bounded, we discuss the convergence properties for two cases: the case where the constraint set is compact, and the case where this set need not be compact but the objective function has a sharp set of minima (for example, when the function is polyhedral). In both cases, using several different stepsize rules, we prove convergence to the optimal value within some tolerance that is given explicitly in terms of the errors. In the first case, the tolerance is nonzero, but in the second case, the optimal value can be obtained exactly, provided the size of the error in the subgradient computation is below some threshold. We then extend these results to objective functions that are the sum of a large number of convex functions, in which case an incremental subgradient method can be used.
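To make the setting concrete, the following is a minimal sketch of a projected subgradient method in which each subgradient is computed with a bounded deterministic error, as described in the abstract. It is not the authors' code: the problem instance (a pointwise maximum of affine functions over a box), the error model (a fixed perturbation of norm eps), the diminishing stepsize a_k = a0/(k+1), and the function names project_box, noisy_subgradient, and inexact_projected_subgradient are all illustrative assumptions.

```python
# Sketch of an inexact (epsilon-)subgradient method under deterministic, bounded noise.
# Problem instance, error model, and stepsize rule are illustrative assumptions.
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n (a compact constraint set)."""
    return np.clip(x, lo, hi)

def noisy_subgradient(A, b, x, eps):
    """Subgradient of f(x) = max_i (a_i^T x - b_i), perturbed by a deterministic error of norm eps."""
    i = np.argmax(A @ x - b)        # a_i for a maximizing index i is a valid subgradient
    g = A[i].copy()
    e = np.ones_like(g)
    e *= eps / np.linalg.norm(e)    # fixed perturbation with Euclidean norm exactly eps
    return g + e

def inexact_projected_subgradient(A, b, x0, eps=0.05, a0=1.0, iters=500):
    """Iterate x_{k+1} = P_X(x_k - a_k * (g_k + e_k)) with diminishing stepsize a_k = a0/(k+1)."""
    x = x0.copy()
    best = np.inf
    for k in range(iters):
        g = noisy_subgradient(A, b, x, eps)
        a_k = a0 / (k + 1)
        x = project_box(x - a_k * g)
        best = min(best, float(np.max(A @ x - b)))   # track best objective value seen
    return x, best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))   # f(x) = max_i (a_i^T x - b_i): a polyhedral objective
    b = rng.standard_normal(20)
    x, best = inexact_projected_subgradient(A, b, x0=np.zeros(5))
    print("best objective value found:", best)
```

For an objective that is a sum of many convex components, the incremental variant mentioned in the abstract would instead cycle through the components, taking a step along one component's (inexactly computed) subgradient at a time rather than along a subgradient of the full sum.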
Pages: 75-99
Page count: 24