Toward Byzantine-Robust Distributed Learning for Sentiment Classification on Social Media Platform

Cited by: 1
Authors
Zhang, Heyi [1 ]
Wu, Jun [2 ]
Pan, Qianqian [3 ]
Bashir, Ali Kashif [4 ,5 ,6 ]
Omar, Marwan [7 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[2] Waseda Univ, Grad Sch Informat Prod & Syst, Tokyo 1698050, Japan
[3] Univ Tokyo, Sch Engn, Tokyo 1130033, Japan
[4] Manchester Metropolitan Univ, Dept Comp & Math, Manchester M15 6BH, England
[5] Woxsen Univ, Woxsen Sch Business, Hyderabad 502345, India
[6] Lebanese Amer Univ, Dept Comp Sci & Math, Beirut 11022801, Lebanon
[7] Illinois Inst Technol, Dept Informat Technol & Management, Chicago, IL 60616 USA
Funding
National Natural Science Foundation of China;
Keywords
Blockchains; Training; Blockchain; Byzantine robust; coded computing; distributed learning; sentiment classification; social media platform
DOI
10.1109/TCSS.2024.3361465
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Distributed learning empowers social media platforms to handle massive data for image sentiment classification and deliver intelligent services. However, with increasing privacy threats and malicious activities, three major challenges are emerging: securing privacy, alleviating the straggler problem, and mitigating Byzantine attacks. Although recent studies explore coded computing for privacy and straggler problems, as well as Byzantine-robust aggregation for poisoning attacks, they are not designed to counter both threats simultaneously. To tackle these obstacles and achieve an efficient Byzantine-robust and straggler-resilient distributed learning framework, in this article we present Byzantine-robust and cost-effective distributed machine learning (BCML), a codesign of coded computing and Byzantine-robust aggregation. To balance Byzantine resilience and efficiency, we design a cosine-similarity-based Byzantine-robust aggregation method tailored for coded computing that filters out malicious gradients efficiently in real time. Furthermore, trust scores derived from the similarity are published to the blockchain to provide reliability and traceability for social users. Experimental results show that, compared with state-of-the-art approaches, BCML tolerates Byzantine attacks without compromising convergence accuracy while incurring lower time consumption. Specifically, it is 6x faster than the uncoded approach and 2x faster than the Lagrange coded computing (LCC) approach. In addition, the cosine-similarity-based aggregation method can effectively detect and filter out malicious social users in real time.
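The abstract names a cosine-similarity-based aggregation rule but does not spell it out. The sketch below is a minimal illustration of that general idea under stated assumptions, not the BCML algorithm itself: the choice of reference direction, the zero similarity threshold, and the function name cosine_filter_aggregate are hypothetical, and the coded-computing decoding and on-chain trust-score publication steps are omitted.

```python
import numpy as np

def cosine_filter_aggregate(gradients, reference, threshold=0.0):
    """Filter worker gradients by cosine similarity to a reference direction
    (e.g., the server's previous update), then average the accepted ones.
    Hypothetical sketch only; BCML's exact scoring, coded-computing decoding,
    and blockchain publication steps are not reproduced here."""
    accepted, trust_scores = [], []
    for g in gradients:
        denom = np.linalg.norm(g) * np.linalg.norm(reference)
        sim = float(g @ reference / denom) if denom > 0 else 0.0
        trust_scores.append(sim)      # similarity could serve as a trust score
        if sim > threshold:           # drop gradients pointing away from the reference
            accepted.append(g)
    if not accepted:                  # fall back to the reference if everything is rejected
        return reference, trust_scores
    return np.mean(accepted, axis=0), trust_scores

# Example: two honest workers and one sign-flipping Byzantine worker.
honest = np.array([1.0, 1.0, 1.0])
grads = [honest + 0.1, honest - 0.1, -5.0 * honest]
agg, scores = cosine_filter_aggregate(grads, reference=honest)
print(agg, scores)  # the flipped gradient gets a negative score and is filtered out
```

In this toy run the Byzantine gradient has cosine similarity -1 to the reference and is excluded, so the aggregate stays close to the honest direction; how BCML selects the reference and maps similarities to published trust scores is described in the paper itself.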
Pages: 1-11
Number of pages: 11