Towards multi-party targeted model poisoning attacks against federated learning systems

Cited by: 20
Authors
Chen, Zheyi [1 ]
Tian, Pu [1 ]
Liao, Weixian [1 ]
Yu, Wei [1 ]
Affiliations
[1] Towson Univ, Dept Comp & Informat Sci, Towson, MD 21252 USA
Source
HIGH-CONFIDENCE COMPUTING | 2021 / Vol. 1 / No. 01
Keywords
Adversarial federated learning; Perfect knowledge; Limited knowledge; Boosting strategy; High-confidence computing; Big data; Internet of Things; IoT
DOI
10.1016/j.hcc.2021.100002
CLC Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
The federated learning framework builds a deep learning model collaboratively across a group of connected devices that share only local parameter updates with the central parameter server. Nonetheless, the lack of transparency into local data resources makes it prone to adversarial federated attacks, which have shown an increasing ability to degrade learning performance. Existing research efforts either focus on single-party attacks with an impractical perfect-knowledge setting and limited stealth, or on random attacks that offer no control over attack effects. In this paper, we investigate a new multi-party adversarial attack with imperfect knowledge of the target system. Controlled by an adversary, a number of compromised devices collaboratively launch targeted model poisoning attacks, intending to misclassify the targeted samples while remaining stealthy under different detection strategies. Specifically, the compromised devices jointly minimize the loss function of model training in different scenarios. To overcome the update scaling problem, we develop a new boosting strategy by introducing two stealth metrics. Experimental results show that under both perfect-knowledge and limited-knowledge settings, the multi-party attack successfully evades detection strategies while guaranteeing convergence. We also demonstrate that the learned model achieves high accuracy on the targeted samples, which confirms the significant impact of the multi-party attack on federated learning systems.
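The update scaling (boosting) idea mentioned in the abstract can be illustrated with a toy sketch. This is not the paper's actual method or its two stealth metrics: it is a minimal assumed example in which a single compromised client boosts its malicious update by the client count so it survives FedAvg's 1/n averaging, then rescales it under a hypothetical norm budget (`clip_norm`) to stay below a simple norm-based detection threshold. All names (`fedavg`, `boosted_update`, `target_delta`) are illustrative, not from the paper.

```python
import numpy as np

def fedavg(updates):
    """Server step: plain federated averaging of client updates."""
    return np.mean(updates, axis=0)

def boosted_update(malicious_delta, n_clients, clip_norm):
    """Scale the malicious update by the client count so it survives
    the 1/n averaging, then shrink it back under a norm budget so it
    stays below a simple norm-based anomaly threshold (illustrative
    stand-in for a stealth constraint)."""
    boosted = n_clients * malicious_delta
    norm = np.linalg.norm(boosted)
    if norm > clip_norm:
        boosted *= clip_norm / norm
    return boosted

# Toy round: 9 benign clients with near-zero updates, 1 compromised client.
rng = np.random.default_rng(0)
n_clients = 10
benign = [rng.normal(0.0, 0.01, size=5) for _ in range(n_clients - 1)]
target_delta = np.array([1.0, -1.0, 0.5, 0.0, 0.25])  # attacker's desired shift
mal = boosted_update(target_delta, n_clients, clip_norm=20.0)

# The aggregated update is approximately target_delta despite averaging.
aggregated = fedavg(benign + [mal])
```

The design point the sketch captures: without boosting, the malicious delta would be diluted by a factor of n; with naive boosting alone, its norm would stand out to a detector, which is why the paper pairs boosting with stealth constraints.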
Pages: 10
Related Papers
50 records total
  • [31] Model poisoning attacks against distributed machine learning systems
    Tomsett, Richard
    Chan, Kevin
    Chakraborty, Supriyo
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [32] Federated Learning: A Comparative Study of Defenses Against Poisoning Attacks
    Carvalho, Ines
    Huff, Kenton
    Gruenwald, Le
    Bernardino, Jorge
    APPLIED SCIENCES-BASEL, 2024, 14 (22):
  • [33] Dynamic defense against byzantine poisoning attacks in federated learning
    Rodriguez-Barroso, Nuria
    Martinez-Camara, Eugenio
    Victoria Luzon, M.
    Herrera, Francisco
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 133 : 1 - 9
  • [34] FLCert: Provably Secure Federated Learning Against Poisoning Attacks
    Cao, Xiaoyu
    Zhang, Zaixi
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 3691 - 3705
  • [35] Secure and verifiable federated learning against poisoning attacks in IoMT
    Niu, Shufen
    Zhou, Xusheng
    Wang, Ning
    Kong, Weiying
    Chen, Lihua
    COMPUTERS & ELECTRICAL ENGINEERING, 2025, 122
  • [36] DPFLA: Defending Private Federated Learning Against Poisoning Attacks
    Feng, Xia
    Cheng, Wenhao
    Cao, Chunjie
    Wang, Liangmin
    Sheng, Victor S.
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (04) : 1480 - 1491
  • [37] A Novel Approach for Securing Federated Learning: Detection and Defense Against Model Poisoning Attacks
    Cristiano, Giovanni Maria
    D'Antonio, Salvatore
    Uccello, Federica
    2024 IEEE INTERNATIONAL CONFERENCE ON CYBER SECURITY AND RESILIENCE, CSR, 2024, : 664 - 669
  • [38] FLOW: A Robust Federated Learning Framework to Defend Against Model Poisoning Attacks in IoT
    Liu, Shukan
    Li, Zhenyu
    Sun, Qiao
    Chen, Lin
    Zhang, Xianfeng
    Duan, Li
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (09) : 15075 - 15086
  • [39] Defense against local model poisoning attacks to byzantine-robust federated learning
    Lu, Shiwei
    Li, Ruihu
    Chen, Xuan
    Ma, Yuena
    FRONTIERS OF COMPUTER SCIENCE, 2022, 16 (06)