Toward Byzantine-Robust Distributed Learning for Sentiment Classification on Social Media Platform

Cited by: 1
Authors
Zhang, Heyi [1 ]
Wu, Jun [2 ]
Pan, Qianqian [3 ]
Bashir, Ali Kashif [4 ,5 ,6 ]
Omar, Marwan [7 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[2] Waseda Univ, Grad Sch Informat Prod & Syst, Tokyo 1698050, Japan
[3] Univ Tokyo, Sch Engn, Tokyo 1130033, Japan
[4] Manchester Metropolitan Univ, Dept Comp & Math, Manchester M15 6BH, England
[5] Woxsen Univ, Woxsen Sch Business, Hyderabad 502345, India
[6] Lebanese Amer Univ, Dept Comp Sci & Math, Beirut 11022801, Lebanon
[7] Illinois Inst Technol, Dept Informat Technol & Management, Chicago, IL 60616 USA
Funding
National Natural Science Foundation of China;
Keywords
Blockchains; Training; Blockchain; Byzantine robust; coded computing; distributed learning; sentiment classification; social media platform; BLOCKCHAIN;
DOI
10.1109/TCSS.2024.3361465
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Distributed learning empowers social media platforms to handle massive data for image sentiment classification and to deliver intelligent services. However, with growing privacy threats and malicious activities, three major challenges are emerging: securing privacy, alleviating the straggler problem, and mitigating Byzantine attacks. Although recent studies explore coded computing for privacy and straggler problems, as well as Byzantine-robust aggregation for poisoning attacks, they are not designed to counter both classes of threats simultaneously. To tackle these obstacles and achieve an efficient Byzantine-robust and straggler-resilient distributed learning framework, this article presents Byzantine-robust and cost-effective distributed machine learning (BCML), a codesign of coded computing and Byzantine-robust aggregation. To balance Byzantine resilience and efficiency, we design a cosine-similarity-based Byzantine-robust aggregation method tailored for coded computing that filters out malicious gradients efficiently in real time. Furthermore, trust scores derived from the similarity are published to the blockchain to provide reliability and traceability for social users. Experimental results show that BCML tolerates Byzantine attacks without compromising convergence accuracy and with lower time consumption than state-of-the-art approaches: it is 6x faster than the uncoded approach and 2x faster than the Lagrange coded computing (LCC) approach. Moreover, the cosine-similarity-based aggregation method effectively detects and filters out malicious social users in real time.
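The abstract describes the aggregation rule only at a high level. The Python snippet below is a minimal, hypothetical sketch of cosine-similarity-based trust scoring and filtering in the spirit described above: the function name cosine_trust_aggregate, the use of a server-side reference gradient, and the norm rescaling are assumptions of this sketch (in the style of FLTrust-like aggregation), and the coded-computing layer and the blockchain trust ledger mentioned in the abstract are omitted. It is not the paper's BCML implementation.

import numpy as np

def cosine_trust_aggregate(worker_grads, reference_grad, eps=1e-12):
    """Aggregate worker gradients weighted by cosine-similarity trust scores.

    Gradients pointing away from the reference direction (cosine <= 0) get a
    trust score of zero and are effectively filtered out as suspected Byzantine.
    Returns the aggregated gradient and the per-worker trust scores.
    """
    ref_norm = np.linalg.norm(reference_grad) + eps
    scores = np.array([
        max(float(np.dot(g, reference_grad)) / ((np.linalg.norm(g) + eps) * ref_norm), 0.0)
        for g in worker_grads
    ])
    if scores.sum() <= eps:
        # Every worker looks malicious: fall back to the reference gradient.
        return reference_grad, scores
    # Rescale each gradient to the reference magnitude so a large-norm outlier
    # cannot dominate, then take the trust-weighted average.
    scaled = [ref_norm / (np.linalg.norm(g) + eps) * g for g in worker_grads]
    weights = scores / scores.sum()
    aggregate = np.sum([w * g for w, g in zip(weights, scaled)], axis=0)
    return aggregate, scores

# Illustrative usage with synthetic data: 6 honest workers, 2 sign-flip attackers.
rng = np.random.default_rng(0)
true_grad = rng.normal(size=1000)
grads = [true_grad + 0.1 * rng.normal(size=1000) for _ in range(6)]
grads += [-10.0 * true_grad for _ in range(2)]
agg, trust = cosine_trust_aggregate(grads, true_grad)  # attackers receive trust 0

In a deployment like the one the abstract outlines, the per-round trust scores (the second return value) would be the quantities recorded on the blockchain for traceability of social users.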
Pages: 1 - 11
Number of pages: 11