Distributed Byzantine Tolerant Stochastic Gradient Descent in the Era of Big Data

Cited by: 0
Authors
Jin, Richeng [1 ]
He, Xiaofan [2 ]
Dai, Huaiyu [1 ]
Affiliations
[1] North Carolina State Univ, Dept ECE, Raleigh, NC 27695 USA
[2] Wuhan Univ, Elect Informat Sch, Wuhan, Hubei, Peoples R China
Funding
U.S. National Science Foundation
Keywords
DOI
Not available
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic and Communication Technology]
Subject Classification Code
0808; 0809
Abstract
The recent advances in sensor technologies and smart devices enable the collaborative collection of a vast volume of data from multiple information sources. As a promising tool to efficiently extract useful information from such big data, machine learning has been pushed to the forefront and has seen great success in a wide range of relevant areas such as computer vision, health care, and financial market analysis. To accommodate the large volume of data, there is a surge of interest in the design of distributed machine learning, among which stochastic gradient descent (SGD) is one of the most widely adopted methods. Nonetheless, distributed machine learning methods may be vulnerable to Byzantine attacks, in which an adversary deliberately shares falsified information to disrupt the intended machine learning procedures. In this work, two asynchronous Byzantine tolerant SGD algorithms are proposed, in which the honest collaborative workers are assumed to store the model parameters derived from their own local data and use them as the ground truth. The proposed algorithms can deal with an arbitrary number of Byzantine attackers and are provably convergent. Simulation results based on a real-world dataset are presented to verify the theoretical results and demonstrate the effectiveness of the proposed algorithms.
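
The abstract's core idea, honest workers treating models learned from their own local data as ground truth when judging shared updates, can be illustrated with a short sketch. The Python code below is a minimal illustration under assumed details, not the authors' algorithm: the least-squares objective, the acceptance radius tau, and the mean aggregation are all placeholder assumptions.

    import numpy as np

    # Illustrative sketch (assumed details, NOT the paper's exact method):
    # each honest worker computes a gradient from its own local data and
    # accepts a peer's shared gradient only if it lies within distance tau
    # of that locally computed "ground truth" gradient.

    def local_gradient(w, X, y):
        # Least-squares gradient on the worker's own data (toy objective).
        return 2.0 * X.T @ (X @ w - y) / len(y)

    def byzantine_tolerant_step(w, X, y, received_grads, lr=0.01, tau=1.0):
        # received_grads: gradients shared by other, possibly Byzantine, workers.
        g_local = local_gradient(w, X, y)
        accepted = [g for g in received_grads
                    if np.linalg.norm(g - g_local) <= tau]
        # The local gradient is always included, so the step stays well
        # defined even if every peer is rejected (arbitrarily many attackers).
        g_agg = np.mean([g_local] + accepted, axis=0)
        return w - lr * g_agg

    # Toy usage: one step with two peer gradients, the second falsified.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(32, 4)), rng.normal(size=32)
    w = np.zeros(4)
    peers = [local_gradient(w, X, y) + 0.01, np.full(4, 100.0)]
    w = byzantine_tolerant_step(w, X, y, peers, tau=0.5)

Because the filtering is performed against each worker's own local model, no per-step coordination with other workers is required, which is consistent with the asynchronous setting described in the abstract.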
Pages: 6
Related Papers (50 in total)
  • [41] Improving Stochastic Gradient Descent Initializing with Data Summarization
    Varghese, Robin
    Ordonez, Carlos
    BIG DATA ANALYTICS AND KNOWLEDGE DISCOVERY, DAWAK 2023, 2023, 14148 : 212 - 223
  • [42] Stochastic Gradient Descent for Linear Systems with Missing Data
    Ma, Anna
    Needell, Deanna
    NUMERICAL MATHEMATICS-THEORY METHODS AND APPLICATIONS, 2019, 12 (01) : 1 - 20
  • [43] BAYESIAN STOCHASTIC GRADIENT DESCENT FOR STOCHASTIC OPTIMIZATION WITH STREAMING INPUT DATA
    Liu, Tianyi
    Lin, Yifan
    Zhou, Enlu
    SIAM JOURNAL ON OPTIMIZATION, 2024, 34 (01) : 389 - 418
  • [44] Distributed stochastic gradient descent for link prediction in signed social networks
    Zhang, Han
    Wu, Gang
    Ling, Qing
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING, 2019, 2019 (1)
  • [45] Distributed Stochastic Gradient Descent: Nonconvexity, Nonsmoothness, and Convergence to Local Minima
    Swenson, Brian
    Murray, Ryan
    Poor, H. Vincent
    Kar, Soummya
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [46] Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents
    Akbay, Abdullah Basar
    Tepedelenlioglu, Cihan
    2022 56TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2022, : 1238 - 1242
  • [47] Privacy-Preserving Stochastic Gradient Descent with Multiple Distributed Trainers
    Le Trieu Phong
    NETWORK AND SYSTEM SECURITY, 2017, 10394 : 510 - 518
  • [48] Adaptive Distributed Stochastic Gradient Descent for Minimizing Delay in the Presence of Stragglers
    Hanna, Serge Kas
    Bitar, Rawad
    Parag, Parimal
    Dasari, Venkat
    El Rouayheb, Salim
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 4262 - 4266
  • [49] ColumnSGD: A Column-oriented Framework for Distributed Stochastic Gradient Descent
    Zhang, Zhipeng
    Wu, Wentao
    Jiang, Jiawei
    Yu, Lele
    Cui, Bin
    Zhang, Ce
    2020 IEEE 36TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2020), 2020, : 1513 - 1524
  • [50] A DAG Model of Synchronous Stochastic Gradient Descent in Distributed Deep Learning
    Shi, Shaohuai
    Wang, Qiang
    Chu, Xiaowen
    Li, Bo
    2018 IEEE 24TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS 2018), 2018, : 425 - 432