Towards multi-party targeted model poisoning attacks against federated learning systems

Cited by: 20
Authors
Chen, Zheyi [1 ]
Tian, Pu [1 ]
Liao, Weixian [1 ]
Yu, Wei [1 ]
Affiliations
[1] Towson Univ, Dept Comp & Informat Sci, Towson, MD 21252 USA
Source
HIGH-CONFIDENCE COMPUTING, 2021, Vol. 1, No. 01
Keywords
Adversarial federated learning; Perfect knowledge; Limited knowledge; Boosting strategy; High-confidence computing; BIG DATA; INTERNET OF THINGS; IOT
DOI
10.1016/j.hcc.2021.100002
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
The federated learning framework collaboratively builds a deep learning model across a group of connected devices that share only local parameter updates with the central parameter server. Nonetheless, the lack of transparency into local data resources makes the framework prone to adversarial federated attacks, which have shown an increasing ability to degrade learning performance. Existing research efforts focus either on single-party attacks, which assume an impractical perfect-knowledge setting or have limited stealth, or on random attacks, which offer no control over attack effects. In this paper, we investigate a new multi-party adversarial attack with imperfect knowledge of the target system. Controlled by an adversary, a number of compromised devices collaboratively launch targeted model poisoning attacks, intending to misclassify the targeted samples while remaining stealthy under different detection strategies. Specifically, the compromised devices jointly minimize the loss function of model training in different scenarios. To overcome the update scaling problem, we develop a new boosting strategy by introducing two stealth metrics. Experimental results show that, under both perfect-knowledge and limited-knowledge settings, the multi-party attack successfully evades detection strategies while guaranteeing convergence. We also demonstrate that the learned model achieves high accuracy on the targeted samples, which confirms the significant impact of the multi-party attack on federated learning systems.
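The boosting strategy summarized in the abstract scales a malicious client's update so it survives server-side averaging, while a stealth constraint keeps the boosted update statistically close to benign ones. The paper's two stealth metrics are not reproduced here; the sketch below is a minimal illustration that uses a simple L2-norm bound as a stand-in stealth metric, and all names (`boosted_malicious_update`, `boost_factor`, `norm_margin`) are hypothetical, not from the paper.

```python
import numpy as np

def boosted_malicious_update(benign_updates, malicious_update,
                             boost_factor, norm_margin=1.1):
    """Boost a malicious parameter update, then clip it for stealth.

    benign_updates:  list of benign clients' update vectors (np.ndarray)
    malicious_update: the targeted poisoning update before boosting
    boost_factor:    scale applied so the update survives averaging
    norm_margin:     stealth bound relative to the largest benign norm
    """
    boosted = boost_factor * malicious_update
    # Stand-in stealth metric: cap the boosted update's L2 norm at
    # norm_margin times the largest benign update norm, so it does not
    # stand out to norm-based anomaly detection at the server.
    norm_cap = norm_margin * max(np.linalg.norm(u) for u in benign_updates)
    boosted_norm = np.linalg.norm(boosted)
    if boosted_norm > norm_cap:
        boosted = boosted * (norm_cap / boosted_norm)
    return boosted

# Example: two benign updates with norms 2 and 4; a unit-norm malicious
# update boosted by 100 is clipped back down to the stealth cap.
benign = [np.ones(4), 2.0 * np.ones(4)]
malicious = np.array([1.0, 0.0, 0.0, 0.0])
stealthy = boosted_malicious_update(benign, malicious,
                                    boost_factor=100.0, norm_margin=1.0)
```

In an actual attack the malicious update would be derived by minimizing a loss over the targeted samples; this sketch only shows the scaling-versus-stealth trade-off that the boosting strategy addresses.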
Pages: 10
Related papers
50 records in total
  • [41] FedATM: Adaptive Trimmed Mean based Federated Learning against Model Poisoning Attacks
    Nishimoto, Kenji
    Chiang, Yi-Han
    Lin, Hai
    Ji, Yusheng
    2023 IEEE 97TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-SPRING, 2023,
  • [42] Defense against local model poisoning attacks to byzantine-robust federated learning
    Shiwei Lu
    Ruihu Li
    Xuan Chen
    Yuena Ma
    Frontiers of Computer Science, 2022, 16
  • [43] EFMVFL: An Efficient and Flexible Multi-party Vertical Federated Learning without a Third Party
    Huang, Yimin
    Wang, Wanwan
    Zhao, Xingying
    Wang, Yukun
    Feng, Xinyu
    He, Hao
    Yao, Ming
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (03)
  • [44] A Privacy-Preserving Scheme for Multi-Party Vertical Federated Learning
    Fan, Mochan
    Zhang, Zhipeng
    Li, Difei
    Zhang, Qiming
    Yao, Haidong
    ZTE Communications, 2024, 22 (04) : 89 - 96
  • [45] Secure Byzantine resilient federated learning based on multi-party computation
    Gao, Hongfeng
    Huang, Hao
    Tian, Youliang
    Tongxin Xuebao/Journal on Communications, 2025, 46 (02): : 108 - 122
  • [46] A Verifiable Federated Learning Scheme Based on Secure Multi-party Computation
    Mou, Wenhao
    Fu, Chunlei
    Lei, Yan
    Hu, Chunqiang
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, WASA 2021, PT II, 2021, 12938 : 198 - 209
  • [47] Multi-Party Federated Recommendation Based on Semi-Supervised Learning
    Liu, Xin
    Lv, Jiuluan
    Chen, Feng
    Wei, Qingjie
    He, Hangxuan
    Qian, Ying
    IEEE TRANSACTIONS ON BIG DATA, 2024, 10 (04) : 356 - 370
  • [48] Robust Aggregation Technique Against Poisoning Attacks in Multi-Stage Federated Learning Applications
    Siriwardhana, Yushan
    Porambage, Pawani
    Liyanage, Madhusanka
    Marchal, Samuel
    Ylianttila, Mika
    2024 IEEE 21ST CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2024, : 956 - 962
  • [49] FedMP: A multi-pronged defense algorithm against Byzantine poisoning attacks in federated learning
    Zhao, Kai
    Wang, Lina
    Yu, Fangchao
    Zeng, Bo
    Pang, Zhi
    COMPUTER NETWORKS, 2025, 257
  • [50] DeFL: Defending against Model Poisoning Attacks in Federated Learning via Critical Learning Periods Awareness
    Yan, Gang
    Wang, Hao
    Yuan, Xu
    Li, Jian
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023, : 10711 - 10719