Stochastic Primal-Dual Proximal ExtraGradient descent for compositely regularized optimization

Cited by: 6
Authors
Lin, Tianyi [1 ]
Qiao, Linbo [2 ]
Zhang, Teng [3 ]
Feng, Jiashi [4 ]
Zhang, Bofeng [2 ]
Affiliations
[1] Univ Calif Berkeley, Dept Ind Engn & Operat Res, Berkeley, CA USA
[2] Natl Univ Def Technol, Coll Comp, Changsha, Hunan, Peoples R China
[3] Stanford Univ, Dept Management Sci & Engn, Stanford, CA 94305 USA
[4] Natl Univ Singapore, Dept ECE, Singapore, Singapore
Keywords
Compositely regularized optimization; Stochastic Primal-Dual Proximal ExtraGradient descent; Saddle point; Complexity; Inequalities
DOI
10.1016/j.neucom.2017.07.066
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We consider a wide range of regularized stochastic minimization problems with two regularization terms, one of which is composed with a linear function. This optimization model abstracts a number of important applications in artificial intelligence and machine learning, such as fused Lasso, fused logistic regression, and a class of graph-guided regularized minimization problems. The computational challenges of this model are twofold. On the one hand, no closed-form solution is available for the proximal mapping associated with the composed regularization term or the expected objective function. On the other hand, computing the full gradient of the expectation in the objective is very expensive when the number of input data samples is large. To address these issues, we propose a stochastic variant of extragradient-type methods, namely Stochastic Primal-Dual Proximal ExtraGradient descent (SPDPEG), and analyze its convergence for both convex and strongly convex objectives. For general convex objectives, the uniformly averaged iterates generated by SPDPEG converge in expectation at an O(1/√t) rate, while for strongly convex objectives, the uniformly and non-uniformly averaged iterates converge at O(log(t)/t) and O(1/t) rates, respectively. These rates match the best known convergence rates for first-order stochastic algorithms. Experiments on fused logistic regression and graph-guided regularized logistic regression problems show that the proposed algorithm performs very efficiently and consistently outperforms competing algorithms. (C) 2017 Elsevier B.V. All rights reserved.
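To make the saddle-point idea behind the abstract concrete, the following is a minimal, hypothetical sketch (not the paper's exact algorithm) of a stochastic primal-dual extragradient loop for a fused-Lasso-style instance min_x E[f(x; ξ)] + λ‖Ax‖₁, rewritten as min_x max_y E[f(x; ξ)] + ⟨Ax, y⟩ − h*(y). All function names, step sizes, and the minibatching scheme are illustrative assumptions.

```python
import numpy as np

def spdpeg_sketch(grad_f_stoch, A, proj_hstar, x0, y0, steps, eta, tau):
    """Hedged sketch of a stochastic primal-dual extragradient loop.

    Each iteration takes a predictor (extrapolation) step with a fresh
    stochastic gradient, then a corrector step re-evaluated at the
    predicted point; the uniformly averaged primal iterate is returned.
    """
    x, y = x0.copy(), y0.copy()
    x_avg = np.zeros_like(x0)
    for _ in range(steps):
        # Predictor step: stochastic gradient at the current point
        x_half = x - eta * (grad_f_stoch(x) + A.T @ y)
        y_half = proj_hstar(y + tau * (A @ x))
        # Corrector step: fresh stochastic gradient at the predicted point
        x = x - eta * (grad_f_stoch(x_half) + A.T @ y_half)
        y = proj_hstar(y + tau * (A @ x_half))
        x_avg += x
    return x_avg / steps

# Illustrative fused-lasso-style least-squares instance (assumed data).
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = np.array([1.0, 1.0, 0.0, 0.0, -1.0])
b = X @ w_true + 0.1 * rng.normal(size=n)
A = np.diff(np.eye(d), axis=0)   # fused-lasso difference operator
lam = 0.1

def grad_f_stoch(w, batch=8):    # minibatch least-squares gradient
    idx = rng.integers(0, n, size=batch)
    Xi, bi = X[idx], b[idx]
    return Xi.T @ (Xi @ w - bi) / batch

# h = lam * ||.||_1, so h* is the indicator of the ell_inf ball of radius lam
proj_hstar = lambda y: np.clip(y, -lam, lam)

w_hat = spdpeg_sketch(grad_f_stoch, A, proj_hstar,
                      np.zeros(d), np.zeros(d - 1), 2000, 0.05, 0.05)
```

The dual projection is the only place the nonsmooth composed term enters, which is what removes the need for a closed-form proximal mapping of h∘A.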
Pages: 516-525 (10 pages)
Related papers
50 records in total
  • [1] On Stochastic Primal-Dual Hybrid Gradient Approach for Compositely Regularized Minimization
    Qiao, Linbo
    Lin, Tianyi
    Jiang, Yu-Gang
    Yang, Fan
    Liu, Wei
    Lu, Xicheng
    ECAI 2016: 22ND EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, 285 : 167 - 174
  • [2] Primal-Dual Stochastic Mirror Descent for MDPs
    Tiapkin, Daniil
    Gasnikov, Alexander
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [3] Primal-Dual Mirror Descent Method for Constraint Stochastic Optimization Problems
    Bayandina, A. S.
    Gasnikov, A. V.
    Gasnikova, E. V.
    Matsievskii, S. V.
    COMPUTATIONAL MATHEMATICS AND MATHEMATICAL PHYSICS, 2018, 58 (11) : 1728 - 1736
  • [4] A STOCHASTIC COORDINATE DESCENT PRIMAL-DUAL ALGORITHM AND APPLICATIONS
    Bianchi, Pascal
    Hachem, Walid
    Iutzeler, Franck
    2014 IEEE INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2014,
  • [5] Online Primal-Dual Mirror Descent under Stochastic Constraints
    Wei, Xiaohan
    Yu, Hao
    Neely, Michael J.
    PROCEEDINGS OF THE ACM ON MEASUREMENT AND ANALYSIS OF COMPUTING SYSTEMS, 2020, 4 (02)
  • [6] Online Primal-Dual Mirror Descent under Stochastic Constraints
    Wei, Xiaohan
    Yu, Hao
    Neely, Michael J.
    Performance Evaluation Review, 2020, 48 (01): : 3 - 4
  • [7] PRIMAL-DUAL ALGORITHMS FOR OPTIMIZATION WITH STOCHASTIC DOMINANCE
    Haskell, William B.
    Shanthikumar, J. George
    Shen, Z. Max
    SIAM JOURNAL ON OPTIMIZATION, 2017, 27 (01) : 34 - 66
  • [8] Distributed Primal-Dual Proximal Method for Regularized Empirical Risk Minimization
    Khuzani, Masoud Badiei
    2018 17TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA), 2018, : 938 - 945
  • [9] New Primal-Dual Proximal Algorithm for Distributed Optimization
    Latafat, Puya
    Stella, Lorenzo
    Patrinos, Panagiotis
    2016 IEEE 55TH CONFERENCE ON DECISION AND CONTROL (CDC), 2016, : 1959 - 1964
  • [10] Regularized Primal-Dual Subgradient Method for Distributed Constrained Optimization
    Yuan, Deming
    Ho, Daniel W. C.
    Xu, Shengyuan
    IEEE TRANSACTIONS ON CYBERNETICS, 2016, 46 (09) : 2109 - 2118