Distributed Stochastic Gradient Descent Using LDGM Codes

Cited by: 0
Authors
Horii, Shunsuke [1 ]
Yoshida, Takahiro [2 ]
Kobayashi, Manabu [1 ]
Matsushima, Toshiyasu [1 ]
Institutions
[1] Waseda Univ, Tokyo, Japan
[2] Yokohama Coll Commerce, Yokohama, Kanagawa, Japan
Funding
Japan Society for the Promotion of Science
DOI
10.1109/isit.2019.8849580
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
We consider a distributed learning problem in which the computation is carried out on a system consisting of a master node and multiple worker nodes. In such systems, slow-running machines, called stragglers, cause a significant decrease in performance. Recently, a coding-theoretic framework for mitigating stragglers in distributed learning, named Gradient Coding (GC), was established by Tandon et al. Most studies on GC aim at recovering the gradient information completely, assuming that the Gradient Descent (GD) algorithm is used as the learning algorithm. On the other hand, if the Stochastic Gradient Descent (SGD) algorithm is used, it is not necessary to recover the gradient information completely; an unbiased estimator of the gradient is sufficient for learning. In this paper, we propose a distributed SGD scheme using Low-Density Generator Matrix (LDGM) codes. In the proposed system, recovering the gradient information completely may take longer than with existing GC methods; however, the master node can obtain a high-quality unbiased estimator of the gradient at low computational cost, which leads to an overall performance improvement.
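The key observation in the abstract, that SGD only needs an unbiased estimate of the gradient rather than the fully recovered gradient, can be illustrated with a minimal sketch. The code below is not the paper's LDGM-coded construction; it is an assumed toy setup showing how a master node can form an unbiased gradient estimate from whichever workers respond first, by reweighting their partial gradients with the inverse of their inclusion probability. The least-squares model, the uniform-straggler assumption, and all variable names (n_workers, wait_for, partial_gradient, and so on) are illustrative.

```python
# Minimal sketch (not the paper's LDGM scheme): distributed SGD where the
# master ignores stragglers and rescales the returned partial gradients so
# that the resulting estimator is unbiased for the full gradient.
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem whose data is split across n_workers partitions.
n_workers, d, n_per_worker = 8, 5, 20
X = [rng.normal(size=(n_per_worker, d)) for _ in range(n_workers)]
w_true = rng.normal(size=d)
y = [Xi @ w_true + 0.1 * rng.normal(size=n_per_worker) for Xi in X]

def partial_gradient(w, i):
    """Gradient of the average squared loss on worker i's partition."""
    return X[i].T @ (X[i] @ w - y[i]) / n_per_worker

w = np.zeros(d)
lr = 0.05
wait_for = 5  # the master proceeds once this many workers have responded

for step in range(300):
    # Simulate stragglers: only a random subset of workers responds in time.
    responders = rng.choice(n_workers, size=wait_for, replace=False)
    # Each worker is included with probability wait_for / n_workers, so
    # dividing by that probability keeps the gradient estimate unbiased.
    p_include = wait_for / n_workers
    g_hat = sum(partial_gradient(w, i) for i in responders) / (n_workers * p_include)
    w = w - lr * g_hat

print("distance to w_true:", np.linalg.norm(w - w_true))
```

Because every partition is included with the same probability, the expectation of g_hat equals the full-batch gradient, which is why the master can make progress without waiting for the slowest workers; the paper's contribution lies in using LDGM codes to make such partial recovery both high-quality and cheap to decode.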
Pages: 1417-1421
Number of pages: 5