An Equivalence Between Data Poisoning and Byzantine Gradient Attacks

Cited by: 0
Authors:
Farhadkhani, Sadegh [1]
Guerraoui, Rachid [1]
Hoang, Le-Nguyen [1]
Villemaud, Oscar [1]
Institutions:
[1] Ecole Polytech Fed Lausanne, IC School, Lausanne, Switzerland
Funding:
Swiss National Science Foundation
Keywords:
DOI: Not available
Chinese Library Classification: TP18 [Theory of Artificial Intelligence]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract:
To study the resilience of distributed learning, the "Byzantine" literature considers a strong threat model where workers can report arbitrary gradients to the parameter server. Whereas this model helped obtain several fundamental results, it has sometimes been considered unrealistic when the workers are mostly trustworthy machines. In this paper, we show a surprising equivalence between this model and data poisoning, a threat considered much more realistic. More specifically, we prove that every gradient attack can be reduced to data poisoning, in any personalized federated learning system with PAC guarantees (which we show are both desirable and realistic). This equivalence makes it possible to obtain new impossibility results on the resilience of any "robust" learning algorithm to data poisoning in highly heterogeneous applications, as corollaries of existing impossibility theorems on Byzantine machine learning. Moreover, using our equivalence, we derive a practical attack that we show (theoretically and empirically) can be very effective against classical personalized federated learning models.
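The core idea, that reporting an arbitrary gradient can be simulated by contributing suitably crafted training data, can be illustrated on a toy case. The sketch below is not the paper's PAC-based construction for personalized federated learning; it is a minimal illustration assuming a least-squares linear model, where the per-example gradient is easy to invert, and the function name is purely illustrative. It crafts a single labeled point whose gradient at the current model equals any desired target gradient.

```python
import numpy as np

def poison_for_target_gradient(theta, target_grad):
    """Craft one (x, y) pair whose squared-loss gradient at `theta`
    equals `target_grad` exactly, for a linear model f(x) = theta @ x.

    The per-example gradient of 0.5 * (theta @ x - y)**2 w.r.t. theta is
    (theta @ x - y) * x, so choosing x along `target_grad` and solving
    for y reproduces any target gradient.
    """
    norm = np.linalg.norm(target_grad)
    if norm == 0:
        # A perfectly fitted point contributes a zero gradient.
        return np.zeros_like(theta), 0.0
    x = target_grad / norm      # unit vector along the desired gradient
    y = theta @ x - norm        # forces the residual (theta @ x - y) = ||target_grad||
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta = rng.normal(size=5)         # current model held by the server
    g = rng.normal(size=5)             # arbitrary "Byzantine" gradient
    x, y = poison_for_target_gradient(theta, g)
    reported = (theta @ x - y) * x     # gradient induced by the poisoned point
    print(np.allclose(reported, g))    # True: the poisoned data reproduces g
```

In the paper, the reduction is established for general personalized federated learning systems with PAC guarantees, where the inversion is far less direct; this example only conveys why an arbitrary reported gradient and a suitably poisoned dataset can be equivalent threats.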
Pages: 40