Fair Detection of Poisoning Attacks in Federated Learning

Cited by: 11
Authors
Singh, Ashneet Khandpur [1 ]
Blanco-Justicia, Alberto [1 ]
Domingo-Ferrer, Josep [1 ]
Sanchez, David [1 ]
Rebollo-Monedero, David [1 ]
Affiliations
[1] Univ Rovira & Virgili, Dept Comp Engn & Math, CYBERCAT Ctr Cybersecur Res Catalonia, UNESCO Chair Data Privacy, Av Paisos Catalans 26, E-43007 Tarragona, Catalonia, Spain
Funding
European Union Horizon 2020;
Keywords
Federated learning; Security; Privacy; Fairness;
DOI
10.1109/ICTAI50040.2020.00044
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Federated learning is a decentralized machine learning technique that aggregates partial models trained by a set of clients on their own private data to obtain a global model. This technique is vulnerable to security attacks, such as model poisoning, whereby malicious clients submit bad updates in order to prevent the model from converging or to introduce artificial bias in the classification. Applying anti-poisoning techniques may lead to the discrimination of minority groups whose data are significantly and legitimately different from those of the majority of clients. In this work, we strive to strike a balance between fighting poisoning and accommodating diversity, to help learn fairer and less discriminatory federated learning models. In this way, we forestall the exclusion of diverse clients while still ensuring detection of poisoning attacks. Empirical work on a standard machine learning data set shows that employing our approach to distinguish legitimate from malicious updates produces models that are more accurate than those obtained with standard poisoning detection techniques.
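To make the tension described in the abstract concrete, the sketch below shows a naive baseline of the kind the paper argues against refining without care: plain federated averaging combined with a distance-based outlier filter on client updates. This is not the authors' method; the function name filter_and_aggregate, the z-score threshold, and the toy data are assumptions made purely for illustration.

```python
import numpy as np

def filter_and_aggregate(client_updates, z_thresh=2.0):
    """Naive poisoning filter (illustrative only, not the paper's method):
    discard client updates whose Euclidean distance to the coordinate-wise
    median update is an outlier, then average the remaining updates."""
    updates = np.stack(client_updates)            # shape: (n_clients, n_params)
    median_update = np.median(updates, axis=0)    # robust reference point
    dists = np.linalg.norm(updates - median_update, axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    accepted = z < z_thresh                       # keep non-outlying clients
    return updates[accepted].mean(axis=0), accepted

# Toy example: 9 benign clients plus 1 client submitting a scaled-up update.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=100) for _ in range(9)]
poisoned = [rng.normal(0.0, 0.1, size=100) * 50]
global_update, accepted = filter_and_aggregate(benign + poisoned)
print("accepted clients:", accepted)              # the scaled-up update is rejected
```

Note that such a filter rejects any update that is far from the majority, including updates from clients whose data are legitimately different; this is precisely the fairness problem the paper addresses by balancing poisoning detection against client diversity.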
Pages: 224-229
Page count: 6
Related papers
50 records in total
  • [1] Fair detection of poisoning attacks in federated learning on non-i.i.d. data
    Singh, Ashneet Khandpur
    Blanco-Justicia, Alberto
    Domingo-Ferrer, Josep
    DATA MINING AND KNOWLEDGE DISCOVERY, 2023, 37 (05) : 1998 - 2023
  • [2] Privacy-Preserving Detection of Poisoning Attacks in Federated Learning
    Muhr, Trent
    Zhang, Wensheng
    2022 19TH ANNUAL INTERNATIONAL CONFERENCE ON PRIVACY, SECURITY & TRUST (PST), 2022,
  • [3] Detection and Mitigation of Targeted Data Poisoning Attacks in Federated Learning
    Erbil, Pinar
    Gursoy, M. Emre
    2022 IEEE INTL CONF ON DEPENDABLE, AUTONOMIC AND SECURE COMPUTING, INTL CONF ON PERVASIVE INTELLIGENCE AND COMPUTING, INTL CONF ON CLOUD AND BIG DATA COMPUTING, INTL CONF ON CYBER SCIENCE AND TECHNOLOGY CONGRESS (DASC/PICOM/CBDCOM/CYBERSCITECH), 2022, : 271 - 278
  • [4] Perception Poisoning Attacks in Federated Learning
    Chow, Ka-Ho
    Liu, Ling
    2021 THIRD IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2021), 2021, : 146 - 155
  • [5] Poisoning Attacks in Federated Learning: A Survey
    Xia, Geming
    Chen, Jian
    Yu, Chaodong
    Ma, Jun
    IEEE ACCESS, 2023, 11 : 10708 - 10722
  • [6] Mitigating Poisoning Attacks in Federated Learning
    Ganjoo, Romit
    Ganjoo, Mehak
    Patil, Madhura
    INNOVATIVE DATA COMMUNICATION TECHNOLOGIES AND APPLICATION, ICIDCA 2021, 2022, 96 : 687 - 699
  • [7] Poisoning Attacks on Fair Machine Learning
    Minh-Hao Van
    Du, Wei
    Wu, Xintao
    Lu, Aidong
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2022, PT I, 2022, : 370 - 386
  • [8] Dependable federated learning for IoT intrusion detection against poisoning attacks
    Yang, Run
    He, Hui
    Wang, Yulong
    Qu, Yue
    Zhang, Weizhe
    COMPUTERS & SECURITY, 2023, 132
  • [9] Parameterizing poisoning attacks in federated learning-based intrusion detection
    Merzouk, Mohamed Amine
    Cuppens, Frederic
    Boulahia-Cuppens, Nora
    Yaich, Reda
    18TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY & SECURITY, ARES 2023, 2023,