Mitigate Data Poisoning Attack by Partially Federated Learning

Cited: 0
|
Authors
Dam, Khanh Huu The [1 ]
Legay, Axel [1 ]
Affiliations
[1] UCLouvain, Louvain, Belgium
Keywords
Data poisoning attack; Federated Learning; Data Privacy; Malware detection;
DOI
10.1145/3600160.3605032
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
An efficient machine learning model for malware detection requires a large dataset to train. Yet it is not easy to collect such a large dataset without violating, or leaving vulnerable to potential violation, various aspects of data privacy. Our work proposes a federated learning framework that permits multiple parties to collaborate on learning behavioral graphs for malware detection. Our proposed graph classification framework allows the participating parties to freely choose their preferred classifier model without disclosing their preferences to the other parties involved. This mitigates the chance of data poisoning attacks. In our experiments, our classification model using partially federated learning achieved an F1-score of 0.97, close to the performance of centralized training models. Moreover, the impact of a label-flipping attack against our model is less than 0.02.
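The abstract reports that a label-flipping attack degrades the model's F1-score by less than 0.02. The sketch below is not the paper's framework; it is an illustrative stand-in (synthetic 1-D features, a trivial per-class-centroid classifier, and a 20% flip rate are all assumptions) showing the general shape of such an experiment: train on clean data and on data where a poisoning party flipped some labels, then compare test F1.

```python
import random

random.seed(0)

def make_data(n, mean, label):
    # Synthetic 1-D features standing in for behavioral-graph embeddings.
    return [(random.gauss(mean, 1.0), label) for _ in range(n)]

def train_centroid(data):
    # Per-class feature mean; a stand-in for any participant's classifier.
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    # Assign the class whose centroid is nearest to x.
    return min(model, key=lambda y: abs(x - model[y]))

def f1_score(data, model, positive=1):
    tp = fp = fn = 0
    for x, y in data:
        p = predict(model, x)
        if p == positive and y == positive:
            tp += 1
        elif p == positive:
            fp += 1
        elif y == positive:
            fn += 1
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Benign (label 0) and malicious (label 1) samples from two separated clusters.
train = make_data(500, -2.0, 0) + make_data(500, 2.0, 1)
test = make_data(200, -2.0, 0) + make_data(200, 2.0, 1)

# Label-flipping attack: a poisoning party flips 20% of the training labels.
poisoned = [(x, 1 - y) if random.random() < 0.2 else (x, y) for x, y in train]

clean_f1 = f1_score(test, train_centroid(train))
poisoned_f1 = f1_score(test, train_centroid(poisoned))
print(f"clean F1 = {clean_f1:.3f}, poisoned F1 = {poisoned_f1:.3f}")
```

With well-separated clusters, a symmetric flip moves both centroids only slightly and the decision boundary barely shifts, so F1 stays close to its clean value; this mirrors the small attack impact the abstract reports, though the paper measures it on graph classifiers rather than this toy model.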
Pages: 19
Related Papers
50 records in total
  • [21] LoMar: A Local Defense Against Poisoning Attack on Federated Learning
    Li, Xingyu
    Qu, Zhe
    Zhao, Shangqing
    Tang, Bo
    Lu, Zhuo
    Liu, Yao
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (01) : 437 - 450
  • [22] Bandit-based data poisoning attack against federated learning for autonomous driving models
    Wang, Shuo
    Li, Qianmu
    Cui, Zhiyong
    Hou, Jun
    Huang, Chanying
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 227
  • [23] Federated Learning Under Attack: Exposing Vulnerabilities Through Data Poisoning Attacks in Computer Networks
    Nowroozi, Ehsan
    Haider, Imran
    Taheri, Rahim
    Conti, Mauro
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2025, 22 (01): : 822 - 831
  • [24] Data Poisoning in Sequential and Parallel Federated Learning*
    Nuding, Florian
    Mayer, Rudolf
    PROCEEDINGS OF THE 2022 ACM INTERNATIONAL WORKSHOP ON SECURITY AND PRIVACY ANALYTICS (IWSPA '22), 2022, : 24 - 34
  • [25] Data Poisoning Attacks on Federated Machine Learning
    Sun, Gan
    Cong, Yang
    Dong, Jiahua
    Wang, Qiang
    Lyu, Lingjuan
    Liu, Ji
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (13) : 11365 - 11375
  • [26] FEDGUARD: Selective Parameter Aggregation for Poisoning Attack Mitigation in Federated Learning
    Chelli, Melvin
    Prigent, Cedric
    Schubotz, Rene
    Costan, Alexandru
    Antoniu, Gabriel
    Cudennec, Loic
    Slusallek, Philipp
    2023 IEEE INTERNATIONAL CONFERENCE ON CLUSTER COMPUTING, CLUSTER, 2023, : 72 - 81
  • [27] Mitigation of a poisoning attack in federated learning by using historical distance detection
    Shi, Zhaosen
    Ding, Xuyang
    Li, Fagen
    Chen, Yingni
    Li, Canran
    ANNALS OF TELECOMMUNICATIONS, 2023, 78 : 135 - 147
  • [28] Poisoning-Assisted Property Inference Attack Against Federated Learning
    Wang, Zhibo
    Huang, Yuting
    Song, Mengkai
    Wu, Libing
    Xue, Feng
    Ren, Kui
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (04) : 3328 - 3340
  • [29] Mitigation of a poisoning attack in federated learning by using historical distance detection
    Shi, Zhaosen
    Ding, Xuyang
    Li, Fagen
    Chen, Yingni
    Li, Canran
    ANNALS OF TELECOMMUNICATIONS, 2023, 78 (3-4) : 135 - 147
  • [30] Efficiently Achieving Privacy Preservation and Poisoning Attack Resistance in Federated Learning
    Li, Xueyang
    Yang, Xue
    Zhou, Zhengchun
    Lu, Rongxing
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 4358 - 4373