An Equivalence Between Data Poisoning and Byzantine Gradient Attacks

Cited by: 0
Authors
Farhadkhani, Sadegh [1 ]
Guerraoui, Rachid [1 ]
Hoang, Le-Nguyen [1 ]
Villemaud, Oscar [1 ]
Affiliations
[1] Ecole Polytech Fed Lausanne, IC School, Lausanne, Switzerland
Funding
Swiss National Science Foundation
Keywords
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
To study the resilience of distributed learning, the "Byzantine" literature considers a strong threat model in which workers can report arbitrary gradients to the parameter server. While this model has helped establish several fundamental results, it is sometimes considered unrealistic when the workers are mostly trustworthy machines. In this paper, we show a surprising equivalence between this model and data poisoning, a threat considered much more realistic. More specifically, we prove that every gradient attack can be reduced to data poisoning, in any personalized federated learning system with PAC guarantees (which we show are both desirable and realistic). This equivalence makes it possible to obtain new impossibility results on the resilience of any "robust" learning algorithm to data poisoning in highly heterogeneous applications, as corollaries of existing impossibility theorems on Byzantine machine learning. Moreover, using our equivalence, we derive a practical attack that we show (theoretically and empirically) can be very effective against classical personalized federated learning models.
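The reduction from gradient attacks to data poisoning can be illustrated with a deliberately simple special case (not the paper's general construction): for a worker training a linear model under squared loss, a single crafted data point makes the honestly computed gradient equal any target vector, so any "Byzantine" gradient report can be simulated by poisoning the local dataset. The sketch below assumes this linear/squared-loss setting; the helper name poison_point_for_gradient is ours, not from the paper.

```python
import numpy as np

def poison_point_for_gradient(w, g, eps=1e-12):
    """Craft a single (x, y) pair whose squared-loss gradient at w equals g.

    For l(w; x, y) = 0.5 * (w @ x - y)**2, the gradient is (w @ x - y) * x.
    Choosing x along g and solving for y reproduces any target gradient g.
    Illustrative only; the paper's reduction covers general PAC learners.
    """
    norm_g = np.linalg.norm(g)
    if norm_g < eps:                 # target gradient ~ 0: a perfectly fitted point works
        return np.zeros_like(w), 0.0
    x = g / norm_g                   # unit vector along the target gradient
    y = w @ x - norm_g               # makes the residual (w @ x - y) equal ||g||
    return x, y

# Usage: the poisoned point induces an arbitrary attacker-chosen gradient.
rng = np.random.default_rng(0)
w = rng.normal(size=5)               # current model of the attacked worker
g_target = rng.normal(size=5)        # gradient the attacker wants reported
x, y = poison_point_for_gradient(w, g_target)
g_sent = (w @ x - y) * x             # gradient honestly computed on the poisoned data
assert np.allclose(g_sent, g_target)
```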
Pages: 40
Related Papers
50 records in total
  • [21] Data Poisoning Attacks and Defenses to Crowdsourcing Systems
    Fang, Minghong
    Sun, Minghao
    Li, Qi
    Gong, Neil Zhenqiang
    Tian, Jin
    Liu, Jia
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021, : 969 - 980
  • [22] FLRAM: Robust Aggregation Technique for Defense against Byzantine Poisoning Attacks in Federated Learning
    Chen, Haitian
    Chen, Xuebin
    Peng, Lulu
    Ma, Ruikui
    ELECTRONICS, 2023, 12 (21)
  • [24] Data Poisoning Attacks against Autoregressive Models
    Alfeld, Scott
    Zhu, Xiaojin
    Barford, Paul
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 1452 - 1458
  • [25] Concealed Data Poisoning Attacks on NLP Models
    Wallace, Eric
    Zhao, Tony Z.
    Feng, Shi
    Singh, Sameer
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 139 - 150
  • [26] Defense against local model poisoning attacks to byzantine-robust federated learning
    Lu, Shiwei
    Li, Ruihu
    Chen, Xuan
    Ma, Yuena
    FRONTIERS OF COMPUTER SCIENCE, 2022, 16 (06)
  • [27] LEGATO: A LayerwisE Gradient AggregaTiOn Algorithm for Mitigating Byzantine Attacks in Federated Learning
    Varma, Kamala
    Zhou, Yi
    Baracaldo, Nathalie
    Anwar, Ali
    2021 IEEE 14TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING (CLOUD 2021), 2021, : 272 - 277
  • [28] Federated Variance-Reduced Stochastic Gradient Descent With Robustness to Byzantine Attacks
    Wu, Zhaoxian
    Ling, Qing
    Chen, Tianyi
    Giannakis, Georgios B.
IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2020, 68 : 4583 - 4596
  • [29] Towards Poisoning of Federated Support Vector Machines with Data Poisoning Attacks
    Mouri, Israt Jahan
    Ridowan, Muhammad
    Adnan, Muhammad Abdullah
    PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND SERVICES SCIENCE, CLOSER 2023, 2023, : 24 - 33
  • [30] Stronger data poisoning attacks break data sanitization defenses
    Koh, Pang Wei
    Steinhardt, Jacob
    Liang, Percy
    MACHINE LEARNING, 2022, 111 (01) : 1 - 47