Fair Detection of Poisoning Attacks in Federated Learning

Cited: 11
Authors
Singh, Ashneet Khandpur [1 ]
Blanco-Justicia, Alberto [1 ]
Domingo-Ferrer, Josep [1 ]
Sanchez, David [1 ]
Rebollo-Monedero, David [1 ]
Affiliation
[1] Univ Rovira & Virgili, Dept Comp Engn & Math, CYBERCAT Ctr Cybersecur Res Catalonia, UNESCO Chair Data Privacy, Av Paisos Catalans 26, E-43007 Tarragona, Catalonia, Spain
Funding
European Union Horizon 2020;
Keywords
Federated learning; Security; Privacy; Fairness;
DOI
10.1109/ICTAI50040.2020.00044
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning is a decentralized machine learning technique that aggregates partial models trained by a set of clients on their own private data to obtain a global model. This technique is vulnerable to security attacks, such as model poisoning, whereby malicious clients submit bad updates in order to prevent the model from converging or to introduce artificial bias in the classification. Applying anti-poisoning techniques might lead to the discrimination of minority groups whose data are significantly and legitimately different from those of the majority of clients. In this work, we strive to strike a balance between fighting poisoning and accommodating diversity, to help learn fairer and less discriminatory federated learning models. In this way, we forestall the exclusion of diverse clients while still ensuring detection of poisoning attacks. Empirical work on a standard machine learning data set shows that employing our approach to tell legitimate from malicious updates produces models that are more accurate than those obtained with standard poisoning detection techniques.
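The aggregation-with-filtering setting the abstract describes can be sketched as follows. This is a generic illustration only, not the paper's fairness-aware detection method: it averages client updates after dropping those whose distance to the coordinate-wise median exceeds a robust threshold. The function name `aggregate_with_filtering` and the threshold parameter `tau` are assumptions for this sketch; note that exactly this kind of distance-based filter is what can wrongly exclude minority clients whose legitimate data differ from the majority.

```python
import numpy as np

def aggregate_with_filtering(updates, tau=2.0):
    """Average client updates after discarding outliers.

    Hypothetical distance-to-median filter for illustration; the
    paper's fairness-aware detection mechanism is not reproduced here.
    """
    updates = np.asarray(updates, dtype=float)
    median = np.median(updates, axis=0)            # coordinate-wise median update
    dists = np.linalg.norm(updates - median, axis=1)
    cutoff = tau * np.median(dists)                # robust distance threshold
    kept = updates[dists <= cutoff]                # drop suspected poisoned updates
    return kept.mean(axis=0), int(len(kept))

# Example: four honest updates near [1, 1] and one poisoned update far away.
honest = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 1.0]]
poisoned = [[10.0, -10.0]]
agg, n_kept = aggregate_with_filtering(honest + poisoned)
# The poisoned update is filtered out; only the four honest updates are averaged.
```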
Pages: 224 - 229
Page count: 6
Related Papers
50 records total
  • [41] Collusion-Based Poisoning Attacks Against Blockchained Federated Learning
    Zhang, Xiaohui
    Shen, Tao
    Bai, Fenhua
    Zhang, Chi
    IEEE NETWORK, 2023, 37 (06): : 50 - 57
  • [42] Defending against Poisoning Backdoor Attacks on Federated Meta-learning
    Chen, Chien-Lun
    Babakniya, Sara
    Paolieri, Marco
    Golubchik, Leana
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (05)
  • [43] DeMAC: Towards detecting model poisoning attacks in federated learning system
    Yang, Han
    Gu, Dongbing
    He, Jianhua
    INTERNET OF THINGS, 2023, 23
  • [44] Evaluating Security and Robustness for Split Federated Learning Against Poisoning Attacks
    Wu, Xiaodong
    Yuan, Henry
    Li, Xiangman
    Ni, Jianbing
    Lu, Rongxing
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 175 - 190
  • [45] FedClean: A Defense Mechanism Against Parameter Poisoning Attacks in Federated Learning
    Kumar, Abhishek
    Khimani, Vivek
    Chatzopoulos, Dimitris
    Hui, Pan
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4333 - 4337
  • [46] Poisoning Attacks on Federated Learning-based Wireless Traffic Prediction
    Zhang, Zifan
    Fang, Minghong
    Huang, Jiayuan
    Liu, Yuchen
    2024 23RD IFIP NETWORKING CONFERENCE, IFIP NETWORKING 2024, 2024, : 423 - 431
  • [47] Defense Strategies Toward Model Poisoning Attacks in Federated Learning: A Survey
    Wang, Zhilin
    Kang, Qiao
    Zhang, Xinyi
    Hu, Qin
    2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022, : 548 - 553
  • [48] Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning
    Shejwalkar, Virat
    Houmansadr, Amir
    28TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2021), 2021,
  • [49] Precision Guided Approach to Mitigate Data Poisoning Attacks in Federated Learning
    Kumar, K. Naveen
    Mohan, C. Krishna
    Machiry, Aravind
    PROCEEDINGS OF THE FOURTEENTH ACM CONFERENCE ON DATA AND APPLICATION SECURITY AND PRIVACY, CODASPY 2024, 2024, : 233 - 244
  • [50] DUPS: Data poisoning attacks with uncertain sample selection for federated learning
    Zhang, Heng-Ru
    Wang, Ke-Xiong
    Liang, Xiang-Yu
    Yu, Yi-Fan
    COMPUTER NETWORKS, 2025, 256