Toward Byzantine-Robust Distributed Learning for Sentiment Classification on Social Media Platform

Cited by: 1
Authors
Zhang, Heyi [1 ]
Wu, Jun [2 ]
Pan, Qianqian [3 ]
Bashir, Ali Kashif [4 ,5 ,6 ]
Omar, Marwan [7 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[2] Waseda Univ, Grad Sch Informat Prod & Syst, Tokyo 1698050, Japan
[3] Univ Tokyo, Sch Engn, Tokyo 1130033, Japan
[4] Manchester Metropolitan Univ, Dept Comp & Math, Manchester M15 6BH, England
[5] Woxsen Univ, Woxsen Sch Business, Hyderabad 502345, India
[6] Lebanese Amer Univ, Dept Comp Sci & Math, Beirut 11022801, Lebanon
[7] Illinois Inst Technol, Dept Informat Technol & Management, Chicago, IL 60616 USA
Funding
National Natural Science Foundation of China
Keywords
Blockchain; Byzantine robust; coded computing; distributed learning; sentiment classification; social media platform; training
DOI
10.1109/TCSS.2024.3361465
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Distributed learning empowers social media platforms to handle massive data for image sentiment classification and deliver intelligent services. However, as privacy threats and malicious activities increase, three major challenges emerge: securing privacy, alleviating straggler problems, and mitigating Byzantine attacks. Although recent studies explore coded computing for the privacy and straggler problems, as well as Byzantine-robust aggregation against poisoning attacks, they are not designed to address both threats simultaneously. To tackle these obstacles and achieve an efficient Byzantine-robust and straggler-resilient distributed learning framework, in this article, we present Byzantine-robust and cost-effective distributed machine learning (BCML), a codesign of coded computing and Byzantine-robust aggregation. To balance Byzantine resilience and efficiency, we design a cosine-similarity-based Byzantine-robust aggregation method tailored for coded computing that filters out malicious gradients efficiently in real time. Furthermore, trust scores derived from the similarity are published to the blockchain to ensure the reliability and traceability of social users. Experimental results show that BCML tolerates Byzantine attacks without compromising convergence accuracy while consuming less time than state-of-the-art approaches. Specifically, it is 6x faster than the uncoded approach and 2x faster than the Lagrange coded computing (LCC) approach. In addition, the cosine-similarity-based aggregation method effectively detects and filters out malicious social users in real time.
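This record does not include the paper's exact aggregation rule, but the core idea the abstract describes, scoring each worker's gradient by cosine similarity and filtering out low-scoring (suspected Byzantine) contributions before averaging, can be sketched as follows. The function name, the similarity threshold, and the use of the mean gradient as the reference direction are illustrative assumptions, not the authors' method.

```python
import numpy as np

def cosine_filter_aggregate(gradients, threshold=0.0):
    """Aggregate worker gradients, dropping any whose cosine similarity
    to a reference direction (here, the mean gradient) falls below
    `threshold`. Returns the aggregate and per-worker similarity scores,
    which could serve as the trust scores published to the blockchain.
    """
    G = np.stack(gradients)                 # shape: (n_workers, dim)
    ref = G.mean(axis=0)                    # assumed reference direction
    norms = np.linalg.norm(G, axis=1)
    ref_norm = np.linalg.norm(ref)
    # Cosine similarity of each worker's gradient to the reference.
    scores = G @ ref / (norms * ref_norm + 1e-12)
    keep = scores > threshold               # filter suspected Byzantine workers
    agg = G[keep].mean(axis=0)
    return agg, scores
```

A gradient pointing roughly opposite to the honest majority (e.g., a sign-flipping attack) gets a negative score and is excluded from the average; honest gradients, which point in similar directions, pass the filter.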
Pages: 1-11 (11 pages)
Related Papers (50 total)
  • [1] Byzantine-Robust Distributed Learning With Compression
    Zhu, Heng
    Ling, Qing
    IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS, 2023, 9 : 280 - 294
  • [2] Stochastic ADMM for Byzantine-Robust Distributed Learning
    Lin, Feng
    Ling, Qing
    Li, Weiyu
    Xiong, Zhiwei
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 3172 - 3176
  • [3] Byzantine-Robust Online and Offline Distributed Reinforcement Learning
    Chen, Yiding
    Zhang, Xuezhou
    Zhang, Kaiqing
    Wang, Mengdi
    Zhu, Xiaojin
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 206, 2023, 206
  • [4] Communication-Efficient and Byzantine-Robust Distributed Learning
    Ghosh, Avishek
    Maity, Raj Kumar
    Kadhe, Swanand
    Mazumdar, Arya
    Ramchandran, Kannan
    2020 INFORMATION THEORY AND APPLICATIONS WORKSHOP (ITA), 2020,
  • [5] Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates
    Yin, Dong
    Chen, Yudong
    Ramchandran, Kannan
    Bartlett, Peter
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [6] Byzantine-robust distributed sparse learning for M-estimation
    Tu, Jiyuan
    Liu, Weidong
    Mao, Xiaojun
    MACHINE LEARNING, 2023, 112 (10) : 3773 - 3804
  • [7] Stochastic alternating direction method of multipliers for Byzantine-robust distributed learning
    Lin, Feng
    Li, Weiyu
    Ling, Qing
    SIGNAL PROCESSING, 2022, 195
  • [8] Communication-Efficient and Byzantine-Robust Distributed Learning with Error Feedback
    Ghosh, Avishek
    Maity, Raj Kumar
    Kadhe, Swanand
    Mazumdar, Arya
    Ramchandran, Kannan
    IEEE JOURNAL ON SELECTED AREAS IN INFORMATION THEORY, 2021, 2 (03): : 942 - 953
  • [9] Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning
    Yin, Dong
    Chen, Yudong
    Ramchandran, Kannan
    Bartlett, Peter
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97