Tolerating Adversarial Attacks and Byzantine Faults in Distributed Machine Learning

Cited by: 11
Authors
Wu, Yusen [1 ]
Chen, Hao [1 ]
Wang, Xin [1 ]
Liu, Chao [1 ]
Nguyen, Phuong [1 ,2 ]
Yesha, Yelena [1 ,3 ]
Affiliations
[1] Univ Maryland, Baltimore, MD 21201 USA
[2] OpenKneck Inc, Halethorpe, MD USA
[3] Univ Miami, Coral Gables, FL 33124 USA
Keywords
Data security; Byzantine-resilient SGD; Distributed ML;
DOI
10.1109/BigData52589.2021.9671583
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial attacks attempt to disrupt the training, retraining, and use of artificial intelligence (AI) and machine learning models in large-scale distributed machine learning systems, posing security risks to their prediction outcomes. For example, attackers may poison a model by presenting inaccurate or misrepresentative data, or by altering the model's parameters. In addition, Byzantine faults, including software, hardware, and network issues, occur in distributed systems and likewise degrade prediction quality. In this paper, we propose a novel distributed training algorithm, partial synchronous stochastic gradient descent (ParSGD), which defends against adversarial attacks and tolerates Byzantine faults. We demonstrate the effectiveness of our algorithm under three common adversarial attacks against ML models and a Byzantine fault during the training phase. Our results show that with ParSGD, ML models can still produce accurate predictions, as if they were not under attack or experiencing failures, even when almost half of the nodes are compromised or have failed. We report experimental evaluations of ParSGD in comparison with other algorithms.
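The abstract does not spell out ParSGD's aggregation rule, but a standard building block in Byzantine-resilient SGD of this kind is robust gradient aggregation, e.g. a coordinate-wise trimmed mean that discards extreme values before averaging. The sketch below is illustrative only: the function name, trimming fraction, and example values are assumptions, not taken from the paper.

```python
def trimmed_mean_aggregate(gradients, trim_fraction=0.25):
    """Coordinate-wise trimmed mean of worker gradients.

    For each parameter coordinate, sort the values reported by all
    workers, drop the k smallest and k largest, and average the rest.
    This bounds the influence of up to a trim_fraction of Byzantine
    or adversarial workers per coordinate.

    gradients: list of equal-length lists of floats, one per worker.
    """
    n = len(gradients)                 # number of workers
    k = int(trim_fraction * n)         # values trimmed from each tail
    dim = len(gradients[0])
    aggregated = []
    for j in range(dim):
        vals = sorted(g[j] for g in gradients)
        kept = vals[k:n - k] if k > 0 else vals
        aggregated.append(sum(kept) / len(kept))
    return aggregated

# Hypothetical round: 3 honest workers agree, 1 Byzantine worker
# sends an extreme gradient to poison the update.
honest = [[1.0, -2.0], [1.1, -1.9], [0.9, -2.1]]
byzantine = [[1e6, -1e6]]
agg = trimmed_mean_aggregate(honest + byzantine, trim_fraction=0.25)
# The extreme values are trimmed away, so agg stays near the honest
# consensus (about [1.05, -2.05]) despite the attacker.
```

Plain averaging would instead return a gradient dominated by the attacker's 1e6 entries, which is the failure mode Byzantine-resilient aggregation is designed to avoid.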
Pages: 3380-3389 (10 pages)
Related Papers
50 records total
  • [21] Incrementing Adversarial Robustness with Autoencoding for Machine Learning Model Attacks
    Sivaslioglu, Salved
    Catak, Ferhat Ozgur
    Gul, Ensar
    2019 27TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2019,
  • [22] Trojan Attacks on Wireless Signal Classification with Adversarial Machine Learning
    Davaslioglu, Kemal
    Sagduyu, Yalin E.
    2019 IEEE INTERNATIONAL SYMPOSIUM ON DYNAMIC SPECTRUM ACCESS NETWORKS (DYSPAN), 2019, : 515 - 520
  • [23] Adversarial Machine Learning: Attacks From Laboratories to the Real World
    Lin, Hsiao-Ying
    Biggio, Battista
    COMPUTER, 2021, 54 (05) : 56 - 60
  • [24] Adversarial attacks for machine learning denoisers and how to resist them
    Jain, Saiyam B.
    Shao, Zongru
    Veettil, Sachin K. T.
    Hecht, Michael
    EMERGING TOPICS IN ARTIFICIAL INTELLIGENCE (ETAI) 2022, 2022, 12204
  • [25] Countering PUF Modeling Attacks through Adversarial Machine Learning
    Ebrahimabadi, Mohammad
    Lalouani, Wassila
    Younis, Mohamed
    Karimi, Naghmeh
    2021 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI 2021), 2021, : 356 - 361
  • [26] Reliable Broadcast Tolerating Byzantine Faults in a Message-Bounded Radio Network
    Bertier, Marin
    Kermarrec, Anne-Marie
    Tan, Guang
    DISTRIBUTED COMPUTING, PROCEEDINGS, 2008, 5218 : 516 - 517
  • [27] Darknet traffic classification and adversarial attacks using machine learning
    Rust-Nguyen, Nhien
    Sharma, Shruti
    Stamp, Mark
    COMPUTERS & SECURITY, 2023, 127
  • [28] Bridging Machine Learning and Cryptography in Defence Against Adversarial Attacks
    Taran, Olga
    Rezaeifar, Shideh
    Voloshynovskiy, Slava
    COMPUTER VISION - ECCV 2018 WORKSHOPS, PT II, 2019, 11130 : 267 - 279
  • [29] A Holistic Review of Machine Learning Adversarial Attacks in IoT Networks
    Khazane, Hassan
    Ridouani, Mohammed
    Salahdine, Fatima
    Kaabouch, Naima
    FUTURE INTERNET, 2024, 16 (01)
  • [30] Adversarial Machine Learning Attacks and Defences in Multi-Agent Reinforcement Learning
    Standen, Maxwell
    Kim, Junae
    Szabo, Claudia
    ACM COMPUTING SURVEYS, 2025, 57 (05)