Stochastic subgradient projection methods for composite optimization with functional constraints

Cited by: 0
Authors
Necoara, Ion [1 ,2 ]
Singh, Nitesh Kumar [1 ]
Affiliations
[1] Univ Politehn Bucuresti, Automat Control & Syst Engn Dept, Spl Independentei 313, Bucharest 060042, Romania
[2] Romanian Acad, Gheorghe Mihoc Caius Iacob Inst Math Stat & Appl, Bucharest 050711, Romania
Keywords
Stochastic optimization; convex functional constraints; stochastic subgradient; rate of convergence; constrained least-squares; robust; sparse SVM; CONVEX; CONVERGENCE
DOI
Not available
Chinese Library Classification
TP [automation technology; computer technology]
Discipline classification code
0812
Abstract
In this paper we consider optimization problems with a stochastic composite objective function subject to a (possibly infinite) intersection of constraints. The objective function is expressed in terms of an expectation operator over a sum of two terms satisfying a stochastic bounded gradient condition, with or without strong convexity type properties. In contrast to the classical approach, where the constraints are usually represented as an intersection of simple sets, in this paper each constraint set is given as the level set of a convex but not necessarily differentiable function. Based on the flexibility offered by our general optimization model, we consider a stochastic subgradient method with random feasibility updates. At each iteration, our algorithm takes a stochastic proximal (sub)gradient step aimed at minimizing the objective function and then a subsequent subgradient step minimizing the feasibility violation of the observed random constraint. We analyze the convergence behavior of the proposed algorithm for diminishing stepsizes and for the case when the objective function is convex or has a quadratic functional growth property, unifying the nonsmooth and smooth cases. We prove sublinear convergence rates for this stochastic subgradient algorithm, which are known to be optimal for subgradient methods on this class of problems. When the objective function has a linear least-squares form and the constraints are polyhedral, the algorithm is shown to converge linearly. Numerical evidence supports the effectiveness of our method on real problems.
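The iteration described in the abstract (a stochastic gradient step on the objective, followed by a Polyak-type subgradient step on one randomly observed constraint) can be illustrated on a toy instance. The sketch below is an assumption-laden illustration, not the paper's exact algorithm: it uses a constrained least-squares objective with polyhedral constraints c_j^T x <= d_j, each viewed as the level set of the convex function g_j(x) = c_j^T x - d_j, and a simple diminishing stepsize; all names (A, b, C, d, ssp) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: min_x ||A x - b||^2 subject to C x <= d,
# with each row of C x <= d treated as one functional constraint.
m, n, p = 50, 10, 20
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
C = rng.standard_normal((p, n))
d = np.abs(rng.standard_normal(p))  # x = 0 is feasible

def ssp(x, iters=20000):
    """Sketch of a stochastic subgradient projection iteration:
    an objective step on one sampled residual, then a Polyak-type
    feasibility step on one sampled constraint."""
    for k in range(1, iters + 1):
        alpha = 1.0 / k ** 0.75            # diminishing stepsize
        i = rng.integers(m)                # sample one data row
        grad = 2.0 * (A[i] @ x - b[i]) * A[i]
        x = x - alpha * grad               # stochastic gradient step
        j = rng.integers(p)                # sample one constraint
        viol = max(C[j] @ x - d[j], 0.0)   # feasibility violation g_j(x)_+
        if viol > 0.0:                     # Polyak step toward the level set
            x = x - viol / (C[j] @ C[j]) * C[j]
    return x

x = ssp(np.zeros(n))
```

Note the feasibility step needs no projection oracle for the full feasible set: it only evaluates one constraint function and one of its subgradients per iteration, which is the flexibility the abstract emphasizes.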
Pages: 35