Communication Efficient Distributed Learning with Feature Partitioned Data

Cited by: 0
Authors
Zhang, Bingwen [1 ]
Geng, Jun [2 ]
Xu, Weiyu [3 ]
Lai, Lifeng [4 ]
Affiliations
[1] Worcester Polytech Inst, Dept Elect & Comp Engn, Worcester, MA 01609 USA
[2] Harbin Inst Tech, Sch Elect & Info Engn, Harbin, Peoples R China
[3] Univ Iowa, Dept Elect & Comp Engn, Iowa City, IA 52242 USA
[4] Univ Calif Davis, Dept Elect & Comp Engn, Davis, CA 95616 USA
Funding
National Natural Science Foundation of China; National Science Foundation (USA);
Keywords
Distributed learning; Feature partitioned data; Communication efficiency; Inexact update; Regression;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
One major bottleneck in the design of large-scale distributed machine learning algorithms is the communication cost. In this paper, we propose and analyze a distributed learning scheme that reduces the amount of communication in distributed learning problems under the feature-partition scenario. The motivating observation behind our scheme is that existing schemes for this scenario require a large amount of data exchange to calculate gradients. In our proposed scheme, instead of calculating the exact gradient at every iteration, we calculate the exact gradient only sporadically. We provide precise conditions that determine when to perform the exact update, and we characterize the convergence rate and bound the total number of iterations and communication iterations. We further test our algorithm on real datasets and show that the proposed scheme substantially reduces the amount of data transferred between distributed nodes.
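The abstract only states the scheme at a high level. Below is a minimal single-process sketch, in Python, of feature-partitioned gradient descent with sporadic exact updates for a least-squares loss. The drift-threshold rule (re-send only when a node's weights have changed by more than tau since the last exchange) is a hypothetical stand-in for the paper's precise trigger condition, which this record does not reproduce; the function name and all parameters are likewise illustrative assumptions.

    import numpy as np

    # Sketch: each node k holds a column block X_k of the design matrix and
    # the matching weight block w_k. Exchanging the partial predictions
    # X_k @ w_k is the communication step in the feature-partition setting;
    # the loop below reuses stale partials and re-sends only when local
    # weights have drifted (an ASSUMED rule, not the paper's exact condition).
    def inexact_feature_partitioned_gd(X_blocks, y, lr=0.01, tau=0.05, iters=200):
        w_blocks = [np.zeros(X.shape[1]) for X in X_blocks]
        partials = [X @ w for X, w in zip(X_blocks, w_blocks)]  # cached X_k w_k
        snapshots = [w.copy() for w in w_blocks]  # weights at last exchange
        comm_rounds = 0
        for _ in range(iters):
            # Exact update only for nodes whose weights drifted past tau.
            for k, (X, w) in enumerate(zip(X_blocks, w_blocks)):
                if np.linalg.norm(w - snapshots[k]) > tau:
                    partials[k] = X @ w          # recompute and "transmit"
                    snapshots[k] = w.copy()
                    comm_rounds += 1
            residual = sum(partials) - y         # possibly stale global residual
            for k, X in enumerate(X_blocks):
                w_blocks[k] -= lr * (X.T @ residual) / len(y)
        return np.concatenate(w_blocks), comm_rounds

As a toy usage, split a random design matrix into column blocks, e.g. inexact_feature_partitioned_gd(np.split(X, 4, axis=1), y); increasing tau trades gradient accuracy for fewer communication rounds, which is the trade-off the abstract describes.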
Pages: 6