MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization

Cited by: 0
Authors
Condat, Laurent [1 ]
Richtarik, Peter [1 ]
Affiliation
[1] King Abdullah Univ Sci & Technol, Thuwal 23955-6900, Saudi Arabia
Keywords
convex optimization; distributed optimization; randomized algorithm; stochastic gradient; variance reduction; communication; sampling; compression
DOI
N/A
Chinese Library Classification
TP301 [theory, methods]
Subject Classification Code
081202
Abstract
We propose a generic variance-reduced algorithm, which we call MUltiple RANdomized Algorithm (MURANA), for minimizing a sum of several smooth functions plus a regularizer, in a sequential or distributed manner. Our method is formulated with general stochastic operators, which allow us to model various strategies for reducing the computational complexity. For example, MURANA supports sparse activation of the gradients, and also reduction of the communication load via compression of the update vectors. This versatility allows MURANA to cover many existing randomization mechanisms within a unified framework, which also makes it possible to design new methods as special cases.
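The abstract describes two randomization mechanisms that MURANA unifies: variance-reduced stochastic gradient estimation and compression of the update vectors. A minimal sketch of how these two ingredients combine is below. It is not the paper's MURANA algorithm; the toy least-squares problem, the SVRG-style control-variate estimator, the random-k sparsifier, and all step-size choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: minimize (1/n) * sum_i f_i(x),
# with f_i(x) = (a_i^T x - b_i)^2 / 2 (smooth components, no regularizer).
n, d = 8, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    """Gradient of the i-th smooth component f_i."""
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    """Gradient of the full average (1/n) * sum_i f_i."""
    return (A.T @ (A @ x - b)) / n

def rand_sparsify(v, k):
    """Unbiased random-k compressor: keep k coordinates, rescale by d/k."""
    idx = rng.choice(len(v), size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (len(v) / k)
    return out

def vr_sparse_descent(x0, step=0.05, epochs=30, inner=n, k=2):
    """SVRG-style variance-reduced loop with compressed updates (a sketch)."""
    x = x0.copy()
    for _ in range(epochs):
        anchor, g_anchor = x.copy(), full_grad(x)  # periodic full pass
        for _ in range(inner):
            i = rng.integers(n)  # sparse activation: one gradient per step
            # Control-variate estimator: unbiased, with vanishing variance
            # as x approaches the anchor point.
            g = grad_i(x, i) - grad_i(anchor, i) + g_anchor
            # Compress the update vector to cut communication load.
            x = x - step * rand_sparsify(g, k)
    return x

x_hat = vr_sparse_descent(np.zeros(d))
```

Both randomizations are unbiased, which is what lets such schemes keep the convergence guarantees of the deterministic method in expectation while touching only one component gradient and k coordinates per step.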
Pages: 26