An Equivalence Between Data Poisoning and Byzantine Gradient Attacks

Citations: 0
Authors
Farhadkhani, Sadegh [1 ]
Guerraoui, Rachid [1 ]
Hoang, Le-Nguyen [1 ]
Villemaud, Oscar [1 ]
Affiliations
[1] Ecole Polytech Fed Lausanne, IC School, Lausanne, Switzerland
Funding
Swiss National Science Foundation
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
To study the resilience of distributed learning, the "Byzantine" literature considers a strong threat model in which workers can report arbitrary gradients to the parameter server. While this model has helped obtain several fundamental results, it has sometimes been considered unrealistic when the workers are mostly trustworthy machines. In this paper, we show a surprising equivalence between this model and data poisoning, a threat considered much more realistic. More specifically, we prove that every gradient attack can be reduced to data poisoning in any personalized federated learning system with PAC guarantees (which we show are both desirable and realistic). This equivalence yields new impossibility results on the resilience of any "robust" learning algorithm to data poisoning in highly heterogeneous applications, as corollaries of existing impossibility theorems on Byzantine machine learning. Moreover, using our equivalence, we derive a practical attack that we show (theoretically and empirically) to be very effective against classical personalized federated learning models.
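The core idea of reducing a gradient attack to data poisoning can be illustrated with a toy case. This is only a sketch for linear least-squares, not the paper's general construction (which covers personalized federated learning systems with PAC guarantees); the function name `poison_for_gradient` is a hypothetical illustration.

```python
import numpy as np

# Toy reduction sketch (illustrative, not the paper's construction):
# for linear least-squares, a single crafted example (x, y) can make a
# worker's per-example gradient at the current model w equal ANY target
# vector g -- i.e., a Byzantine gradient attack becomes data poisoning.
#
# Per-example loss: l(w; x, y) = 0.5 * (w @ x - y)**2
# Gradient:         grad_w l   = (w @ x - y) * x

def poison_for_gradient(w, g):
    """Craft a data point (x, y) whose gradient at w equals g."""
    x = g.copy()               # align the example with the target direction
    y = float(w @ g) - 1.0     # makes the residual (w @ x - y) equal exactly 1
    return x, y

rng = np.random.default_rng(0)
w = rng.normal(size=5)         # current model parameters
g = rng.normal(size=5)         # arbitrary gradient the attacker wants reported

x, y = poison_for_gradient(w, g)
reported = (w @ x - y) * x     # gradient induced by the poisoned example
assert np.allclose(reported, g)
```

The residual `w @ x - y` evaluates to 1 by construction, so the reported gradient is exactly `1 * g = g`; honest training on the poisoned point thus reproduces the arbitrary Byzantine gradient.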
Pages: 40