Mitigate Data Poisoning Attack by Partially Federated Learning

Cited by: 0
Authors
Dam, Khanh Huu The [1 ]
Legay, Axel [1 ]
Affiliations
[1] UCLouvain, Louvain, Belgium
Keywords
Data poisoning attack; Federated Learning; Data Privacy; Malware detection;
DOI
10.1145/3600160.3605032
CLC classification number
TP [Automation technology, computer technology];
Subject classification number
0812;
Abstract
An efficient machine learning model for malware detection requires a large training dataset, yet collecting such a dataset without violating, or exposing to potential violation, various aspects of data privacy is difficult. Our work proposes a federated learning framework that permits multiple parties to collaborate on learning behavioral graphs for malware detection. Our proposed graph classification framework allows the participating parties to freely choose their preferred classifier model without disclosing that choice to the other participants, which mitigates the risk of data poisoning attacks. In our experiments, our classification model trained with partially federated learning achieved an F1-score of 0.97, close to the performance of models trained on centralized data. Moreover, the impact of a label-flipping attack against our model is less than 0.02.
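The abstract mentions a label-flipping poisoning attack against federated learning. As a minimal illustrative sketch (not the paper's framework), the toy FedAvg loop below trains logistic-regression clients on synthetic data, with one malicious client flipping its labels; the data generator, client count, and hyperparameters are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n=200):
    # Toy 2-D task: label is 1 exactly when x0 + x1 > 0.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, y

def local_sgd(w, X, y, lr=0.1, epochs=5):
    # One client's local logistic-regression update.
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(float) == y).mean())

def train(client_data, rounds=20):
    # Plain FedAvg: average the locally updated weights each round.
    w = np.zeros(2)
    for _ in range(rounds):
        updates = [local_sgd(w, X, y) for X, y in client_data]
        w = np.mean(updates, axis=0)
    return w

clients = [make_client_data() for _ in range(5)]
Xa, ya = make_client_data(1000)            # held-out evaluation set

# Label-flipping attack: client 0 inverts every label it holds.
Xp, yp = clients[0]
clients_poisoned = [(Xp, 1.0 - yp)] + clients[1:]

w_clean = train(clients)
w_attacked = train(clients_poisoned)
print(accuracy(w_clean, Xa, ya), accuracy(w_attacked, Xa, ya))
```

With only one of five clients poisoned, the averaged update still points roughly in the honest direction, so the attacked model degrades rather than collapses; this is the kind of limited impact the abstract reports.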
Pages: 19
Related papers
50 records in total
  • [31] Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation
    Al Mallah, Ranwa
    Lopez, David
    Badu-Marfo, Godwin
    Farooq, Bilal
    IEEE ACCESS, 2023, 11 : 125064 - 125079
  • [32] Model poisoning attack in differential privacy-based federated learning
    Yang, Ming
    Cheng, Hang
    Chen, Fei
    Liu, Ximeng
    Wang, Meiqing
    Li, Xibin
    INFORMATION SCIENCES, 2023, 630 : 158 - 172
  • [33] Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning
    Lyu, Xiaoting
    Han, Yufei
    Wang, Wei
    Liu, Jingkai
    Wang, Bin
    Liu, Jiqiang
    Zhang, Xiangliang
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 7, 2023, : 9020 - 9028
  • [34] Pocket Diagnosis: Secure Federated Learning Against Poisoning Attack in the Cloud
    Ma, Zhuoran
    Ma, Jianfeng
    Miao, Yinbin
    Liu, Ximeng
    Choo, Kim-Kwang Raymond
    Deng, Robert H.
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2022, 15 (06) : 3429 - 3442
  • [35] Logits Poisoning Attack in Federated Distillation
    Tang, Yuhan
    Wu, Zhiyuan
    Gao, Bo
    Wen, Tian
    Wang, Yuwei
    Sun, Sheng
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT III, KSEM 2024, 2024, 14886 : 286 - 298
  • [36] Data Poisoning Attacks Against Federated Learning Systems
    Tolpegin, Vale
    Truex, Stacey
    Gursoy, Mehmet Emre
    Liu, Ling
    COMPUTER SECURITY - ESORICS 2020, PT I, 2020, 12308 : 480 - 501
  • [37] Fabricated Flips: Poisoning Federated Learning without Data
    Huang, Jiyue
    Zhao, Zilong
    Chen, Lydia Y.
    Roos, Stefanie
    2023 53RD ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS, DSN, 2023, : 274 - 287
  • [38] Data Reconstruction Attack with Label Guessing for Federated Learning
    Jang, Jinhyeok
    Oh, Yoonju
    Ryu, Gwonsang
    Choi, Daeseon
    JOURNAL OF INTERNET TECHNOLOGY, 2023, 24 (04): : 893 - 903
  • [39] A Meta-Reinforcement Learning-Based Poisoning Attack Framework Against Federated Learning
    Zhou, Wei
    Zhang, Donglai
    Wang, Hongjie
    Li, Jinliang
    Jiang, Mingjian
    IEEE ACCESS, 2025, 13 : 28628 - 28644
  • [40] FedRecAttack: Model Poisoning Attack to Federated Recommendation
    Rong, Dazhong
    Ye, Shuai
    Zhao, Ruoyan
    Yuen, Hon Ning
    Chen, Jianhai
    He, Qinming
    2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022), 2022, : 2643 - 2655