An Equivalence Between Data Poisoning and Byzantine Gradient Attacks

Cited by: 0
Authors
Farhadkhani, Sadegh [1 ]
Guerraoui, Rachid [1 ]
Hoang, Le-Nguyen [1 ]
Villemaud, Oscar [1 ]
Affiliations
[1] Ecole Polytech Fed Lausanne, IC School, Lausanne, Switzerland
Funding
Swiss National Science Foundation
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
To study the resilience of distributed learning, the "Byzantine" literature considers a strong threat model in which workers can report arbitrary gradients to the parameter server. While this model has helped obtain several fundamental results, it has sometimes been considered unrealistic when the workers are mostly trustworthy machines. In this paper, we show a surprising equivalence between this model and data poisoning, a threat considered much more realistic. More specifically, we prove that every gradient attack can be reduced to data poisoning in any personalized federated learning system with PAC guarantees (which we show are both desirable and realistic). This equivalence makes it possible to obtain new impossibility results on the resilience of any "robust" learning algorithm to data poisoning in highly heterogeneous applications, as corollaries of existing impossibility theorems on Byzantine machine learning. Moreover, using our equivalence, we derive a practical attack that we show, both theoretically and empirically, to be very effective against classical personalized federated learning models.
Pages: 40