SLC: A Permissioned Blockchain for Secure Distributed Machine Learning against Byzantine Attacks

Cited by: 1
Authors
Liang, Lun [1 ]
Cao, Xianghui [1 ]
Zhang, Jun [2 ]
Sun, Changyin [1 ]
Affiliations
[1] Southeast Univ, Sch Automat, Nanjing, Peoples R China
[2] Wuhan Univ, Sch Elect Engn & Automat, Wuhan, Peoples R China
Keywords
Distributed Machine Learning; Byzantine Attacks; Secure Learning Chain; INTERNET;
DOI
10.1109/CAC51589.2020.9327384
CLC number
TP [automation technology; computer technology];
Discipline code
0812;
Abstract
As data volumes and the complexity of machine learning models increase, designing a secure and effective distributed machine learning (DML) algorithm is urgently needed. Most traditional master-worker DML algorithms assume a trusted central server and study security issues on the workers. Several researchers have bridged DML and blockchain to defend against malicious central servers. However, some critical challenges remain, such as the inability to identify Byzantine nodes, a lack of robustness to Byzantine attacks, and large communication overhead. To address these issues, in this paper we propose a permissioned blockchain framework for secure DML, called Secure Learning Chain (SLC). Specifically, we design an Identifiable Practical Byzantine Fault Tolerance (IPBFT) consensus algorithm to defend against malicious central servers. This algorithm can also identify malicious central servers and reduce communication complexity. In addition, we propose a Mixed Ace-based multi-Krum Aggregation (MAKA) algorithm to prevent Byzantine attacks from malicious workers. Finally, our experimental results demonstrate the efficiency and effectiveness of the proposed model.
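The MAKA rule described above builds on multi-Krum aggregation. As a minimal sketch of the standard multi-Krum rule (Blanchard et al., 2017) that such schemes extend — not the authors' MAKA variant, whose "mixed" weighting is not detailed in this record — assume n worker gradients of which at most f are Byzantine:

```python
import numpy as np

def multi_krum(grads, f, m):
    """Standard multi-Krum: score each gradient by the sum of squared
    distances to its n - f - 2 nearest neighbors, then average the m
    lowest-scoring (most "central") gradients. Requires n > f + 2."""
    n = len(grads)
    assert n - f - 2 > 0, "multi-Krum needs n > f + 2"
    # Pairwise squared Euclidean distances between worker gradients.
    dists = np.array([[np.sum((g - h) ** 2) for h in grads] for g in grads])
    scores = []
    for i in range(n):
        # Sort row i; index 0 is the zero self-distance, so keep the
        # next n - f - 2 entries (the closest other gradients).
        nearest = np.sort(dists[i])[1 : n - f - 1]
        scores.append(nearest.sum())
    # Average the m gradients with the smallest scores; outliers
    # (e.g. Byzantine gradients far from the honest cluster) score high
    # and are excluded from the average.
    chosen = np.argsort(scores)[:m]
    return np.mean([grads[i] for i in chosen], axis=0)
```

With five honest gradients clustered near (1, 1) and one Byzantine gradient at (100, 100), `multi_krum(grads, f=1, m=3)` returns an average close to (1, 1), since the outlier's distance score excludes it from selection.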
Pages: 7073-7078
Number of pages: 6
Related papers
50 records in total
  • [41] A secure distributed machine learning protocol against static semi-honest adversaries
    Sun, Maohua
    Yang, Ruidi
    Hu, Lei
    APPLIED SOFT COMPUTING, 2021, 102
  • [42] Backdoor attacks against distributed swarm learning
    Chen, Kongyang
    Zhang, Huaiyuan
    Feng, Xiangyu
    Zhang, Xiaoting
    Mi, Bing
    Jin, Zhiping
    ISA TRANSACTIONS, 2023, 141 : 59 - 72
  • [43] FABA: An Algorithm for Fast Aggregation against Byzantine Attacks in Distributed Neural Networks
    Xia, Qi
    Tao, Zeyi
    Hao, Zijiang
    Li, Qun
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 4824 - 4830
  • [44] Poisoning Attacks Against Machine Learning: Can Machine Learning Be Trustworthy?
    Oprea, Alina
    Singhal, Anoop
    Vassilev, Apostol
    COMPUTER, 2022, 55 (11) : 94 - 99
  • [45] A Four-Pronged Defense Against Byzantine Attacks in Federated Learning
    Wan, Wei
    Hu, Shengshan
    Li, Minghui
    Lu, Jianrong
    Zhang, Longling
    Zhang, Leo Yu
    Jin, Hai
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 7394 - 7402
  • [46] Robust Federated Learning: Maximum Correntropy Aggregation Against Byzantine Attacks
    Luan, Zhirong
    Li, Wenrui
    Liu, Meiqin
    Chen, Badong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (01) : 62 - 75
  • [48] Machine Learning Attacks Against the Asirra CAPTCHA
    Golle, Philippe
    CCS'08: PROCEEDINGS OF THE 15TH ACM CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2008, : 535 - 542
  • [49] A taxonomy and survey of attacks against machine learning
    Pitropakis, Nikolaos
    Panaousis, Emmanouil
    Giannetsos, Thanassis
    Anastasiadis, Eleftherios
    Loukas, George
    COMPUTER SCIENCE REVIEW, 2019, 34
  • [50] A trust distributed learning (D-NWDAF) against poisoning and byzantine attacks in B5G networks
    Ben Saad, Sabra
    SECURITY AND PRIVACY, 2024, 7 (05):