Fair Detection of Poisoning Attacks in Federated Learning

Cited by: 11
Authors
Singh, Ashneet Khandpur [1 ]
Blanco-Justicia, Alberto [1 ]
Domingo-Ferrer, Josep [1 ]
Sanchez, David [1 ]
Rebollo-Monedero, David [1 ]
Affiliation
[1] Univ Rovira & Virgili, Dept Comp Engn & Math, CYBERCAT Ctr Cybersecur Res Catalonia, UNESCO Chair Data Privacy, Av Paisos Catalans 26, E-43007 Tarragona, Catalonia, Spain
Funding
European Union Horizon 2020;
Keywords
Federated learning; Security; Privacy; Fairness;
DOI
10.1109/ICTAI50040.2020.00044
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning is a decentralized machine learning technique that aggregates partial models trained by a set of clients on their own private data to obtain a global model. This technique is vulnerable to security attacks, such as model poisoning, whereby malicious clients submit bad updates in order to prevent the model from converging or to introduce artificial bias in the classification. Applying anti-poisoning techniques might lead to the discrimination of minority groups whose data are significantly and legitimately different from those of the majority of clients. In this work, we strive to strike a balance between fighting poisoning and accommodating diversity to help learn fairer and less discriminatory federated learning models. In this way, we forestall the exclusion of diverse clients while still ensuring detection of poisoning attacks. Empirical work on a standard machine learning data set shows that employing our approach to tell legitimate from malicious updates produces models that are more accurate than those obtained with standard poisoning detection techniques.
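The tension the abstract describes can be illustrated with a toy aggregation rule. The sketch below is an illustrative assumption, not the method proposed in the paper; the function name and threshold parameter are hypothetical. It shows a server-side federated-averaging step that rejects client updates lying far from the coordinate-wise median. Such naive filtering blocks obvious poisoned updates, but it can also exclude honest clients whose data are legitimately different from the majority, which is the fairness problem the paper targets.

# Illustrative sketch only, not the authors' method: a toy server-side
# aggregation step that filters client updates by their distance to the
# coordinate-wise median before averaging. All names are hypothetical.
import numpy as np

def aggregate_with_filtering(client_updates, threshold=2.0):
    """Average flattened client updates, dropping apparent outliers.

    client_updates: list of 1-D numpy arrays, one flattened model update per client.
    threshold: multiple of the median distance beyond which an update is rejected.
    """
    updates = np.stack(client_updates)               # shape: (n_clients, n_params)
    center = np.median(updates, axis=0)              # robust estimate of the "typical" update
    dists = np.linalg.norm(updates - center, axis=1)
    cutoff = threshold * np.median(dists)            # distance-based acceptance cutoff
    accepted = updates[dists <= cutoff]
    # Note: this blunt rule may also reject honest clients whose updates are
    # legitimately different from the majority, which is the fairness issue at stake.
    return accepted.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(0.0, 0.1, size=10) for _ in range(8)]
    poisoned = [rng.normal(5.0, 0.1, size=10) for _ in range(2)]  # simulated malicious updates
    print(aggregate_with_filtering(honest + poisoned))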
Pages: 224-229
Number of pages: 6
Related Papers
50 records in total
  • [21] Data Poisoning Detection in Federated Learning
    Khuu, Denise-Phi
    Sober, Michael
    Kaaser, Dominik
    Fischer, Mathias
    Schulte, Stefan
    39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 1549 - 1558
  • [22] Federated Learning: A Comparative Study of Defenses Against Poisoning Attacks
    Carvalho, Ines
    Huff, Kenton
    Gruenwald, Le
    Bernardino, Jorge
    APPLIED SCIENCES-BASEL, 2024, 14 (22):
  • [23] On the Performance Impact of Poisoning Attacks on Load Forecasting in Federated Learning
    Qureshi, Naik Bakht Sania
    Kim, Dong-Hoon
    Lee, Jiwoo
    Lee, Eun-Kyu
    UBICOMP/ISWC '21 ADJUNCT: PROCEEDINGS OF THE 2021 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2021 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS, 2021, : 64 - 66
  • [24] Dynamic defense against byzantine poisoning attacks in federated learning
    Rodriguez-Barroso, Nuria
    Martinez-Camara, Eugenio
    Victoria Luzon, M.
    Herrera, Francisco
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 133 : 1 - 9
  • [25] FedEqual: Defending Model Poisoning Attacks in Heterogeneous Federated Learning
    Chen, Ling-Yuan
    Chiu, Te-Chuan
    Pang, Ai-Chun
    Cheng, Li-Chen
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [26] FLCert: Provably Secure Federated Learning Against Poisoning Attacks
    Cao, Xiaoyu
    Zhang, Zaixi
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 3691 - 3705
  • [27] Clean-label poisoning attacks on federated learning for IoT
    Yang, Jie
    Zheng, Jun
    Baker, Thar
    Tang, Shuai
    Tan, Yu-an
    Zhang, Quanxin
    EXPERT SYSTEMS, 2023, 40 (05)
  • [28] SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
    Panda, Ashwinee
    Mahloujifar, Saeed
    Bhagoji, Arjun N.
    Chakraborty, Supriyo
    Mittal, Prateek
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [29] DPFLA: Defending Private Federated Learning Against Poisoning Attacks
    Feng, Xia
    Cheng, Wenhao
    Cao, Chunjie
    Wang, Liangmin
    Sheng, Victor S.
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (04) : 1480 - 1491
  • [30] Secure and verifiable federated learning against poisoning attacks in IoMT
    Niu, Shufen
    Zhou, Xusheng
    Wang, Ning
    Kong, Weiying
    Chen, Lihua
    COMPUTERS & ELECTRICAL ENGINEERING, 2025, 122