SVR-Primal Dual Method of Multipliers (PDMM) for Large-Scale Problems

Citations: 0
Authors
Sinha, Lijanshu [1 ]
Rajawat, Ketan [2 ]
Kumar, Chirag [2 ]
Affiliations
[1] Intel, C2DG Big Core India, Bengaluru 560103, India
[2] IIT Kanpur, SPiN Lab, Dept EE, Kanpur 208016, Uttar Pradesh, India
Keywords
distributed optimization; PDMM; ADMM; SVRG
DOI
10.1109/ncc48643.2020.9056014
Chinese Library Classification
TN (electronic technology, communication technology)
Discipline code
0809
Abstract
With the advent of big-data scenarios, centralized processing is no longer feasible. With this paradigm shift, distributed processing has become increasingly relevant: instead of burdening a central processor, the load is shared among multiple processing units. The decentralization capability of the ADMM algorithm has made it popular in recent years. Another recent algorithm, PDMM, has paved its way into distributed processing, although it is still under development. Both algorithms work well on medium-scale problems, but handling large-scale problems remains challenging. This work is an effort toward handling large-scale data with reduced computational load. To this end, the proposed framework combines the advantages of the SVRG and PDMM algorithms. The algorithm is proved to converge at rate O(1/K) for strongly convex loss functions, which is faster than existing algorithms. Experimental evaluations on real data demonstrate the efficacy of the proposed algorithm over state-of-the-art methods.
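The abstract does not spell out the combined algorithm, but its variance-reduction ingredient is the standard SVRG scheme (Johnson & Zhang, 2013). As a hedged illustration only — not the paper's SVR-PDMM method — the sketch below shows the SVRG component on a toy least-squares problem; the problem data, step size, and epoch counts are all assumptions for the demo:

```python
import numpy as np

# Illustrative SVRG loop on least squares: min_x (1/2n) * ||Ax - b||^2.
# The snapshot/full-gradient trick below is the variance-reduction idea
# that the abstract says is combined with PDMM; the PDMM (dual) part of
# the proposed framework is not reproduced here.

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true  # noiseless data, so x_true is the exact minimizer

def full_grad(x):
    # gradient of the average loss over all n samples
    return A.T @ (A @ x - b) / n

def stoch_grad(x, i):
    # gradient of the loss of a single sample i
    a = A[i]
    return (a @ x - b[i]) * a

def svrg(x0, step=0.01, epochs=20, inner=200):
    x = x0.copy()
    for _ in range(epochs):
        x_snap = x.copy()
        mu = full_grad(x_snap)  # one full gradient per epoch, at the snapshot
        for _ in range(inner):
            i = rng.integers(n)
            # variance-reduced stochastic gradient: unbiased, and its
            # variance shrinks as x and x_snap approach the optimum
            g = stoch_grad(x, i) - stoch_grad(x_snap, i) + mu
            x -= step * g
    return x

x_hat = svrg(np.zeros(d))
print("error:", np.linalg.norm(x_hat - x_true))
```

Because each inner step costs one sample gradient rather than a full pass over the data, this is the "reduced computation load" mechanism the abstract refers to when handling large-scale problems.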
Pages: 5