Two-phase Defense Against Poisoning Attacks on Federated Learning-based Intrusion Detection

Cited: 20
Authors
Lai, Yuan-Cheng [1 ]
Lin, Jheng-Yan [1 ]
Lin, Ying-Dar [2 ]
Hwang, Ren-Hung [3 ]
Lin, Po-Chin [4 ]
Wu, Hsiao-Kuang [5 ]
Chen, Chung-Kuan [6 ]
Affiliations
[1] Natl Taiwan Univ Sci & Technol, Dept Informat Management, Taipei, Taiwan
[2] Natl Yang Ming Chiao Tung Univ, Dept Comp Sci, Hsinchu, Taiwan
[3] Natl Yang Ming Chiao Tung Univ, Coll Artificial Intelligence, Tainan, Taiwan
[4] Natl Chung Cheng Univ, Dept Comp Sci & Informat Engn, Chiayi, Taiwan
[5] Natl Cent Univ, Dept Comp Sci & Informat Engn, Taoyuan, Taiwan
[6] Cycraft Technol, Taipei, Taiwan
Keywords
Federated Learning; Intrusion Detection; Poisoning Attack; Backdoor Attack; Local Outlier Factor;
DOI
10.1016/j.cose.2023.103205
CLC Number
TP [Automation technology, computer technology];
Subject Classification Code
0812;
Abstract
The Machine Learning-based Intrusion Detection System (ML-IDS) has become popular because it does not require manual rule updates and recognizes attack variants better. However, due to data privacy concerns in ML-IDS, the Federated Learning-based IDS (FL-IDS) was proposed. In each round of federated learning, each participant first trains its local model and sends the model's weights to the global server, which then aggregates the received weights and distributes the aggregated global model back to the participants. An attacker can use poisoning attacks, including label-flipping attacks and backdoor attacks, to directly generate a malicious local model and thereby indirectly pollute the global model. A few existing studies defend against poisoning attacks, but they discuss only label-flipping attacks in the image domain. Therefore, we propose a two-phase defense mechanism, called Defending Poisoning Attacks in Federated Learning (DPA-FL), applied to intrusion detection. The first phase uses relative differences to quickly compare weights between participants, because the local models of attackers and benign participants differ substantially. The second phase tests the aggregated model on a dataset and, when its accuracy is low, tries to identify the attackers. Experiment results show that DPA-FL reaches 96.5% accuracy in defending against poisoning attacks. Compared with other defense mechanisms, DPA-FL improves the F1-score by 20~64% under backdoor attacks. DPA-FL can also exclude the attackers within twelve rounds when the attackers are few.
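The abstract's first phase, comparing participants' local-model weights to flag outliers before aggregation, can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's algorithm: it scores each participant by its mean distance to the other local models and excludes those whose score far exceeds the median (a stand-in for the paper's relative differences / Local Outlier Factor); the function and variable names are assumptions.

```python
def aggregate(weight_sets):
    """FedAvg-style element-wise mean of the kept participants' weight vectors."""
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n for i in range(len(weight_sets[0]))]

def phase1_filter(local_weights, threshold=2.0):
    """Phase-1 sketch: keep participants whose weights resemble the majority's.

    Each participant's outlier score is its mean Euclidean distance to every
    other local model; a participant is excluded when its score exceeds
    `threshold` times the median score across participants."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    n = len(local_weights)
    scores = []
    for i in range(n):
        others = [dist(local_weights[i], local_weights[j])
                  for j in range(n) if j != i]
        scores.append(sum(others) / len(others))
    median = sorted(scores)[n // 2]
    return [i for i in range(n) if scores[i] <= threshold * median]

# Toy round: four benign local models near [1, 1] and one poisoned model far away.
local_weights = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 1.05], [9.0, -8.0]]
kept = phase1_filter(local_weights)          # the poisoned model (index 4) is excluded
global_model = aggregate([local_weights[i] for i in kept])
```

The second phase would then evaluate `global_model` on a held-out test set and, if accuracy drops, search among the kept participants for remaining attackers; that feedback loop is omitted here for brevity.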
Pages: 14
Related Papers
50 total
  • [21] FederatedReverse: A Detection and Defense Method Against Backdoor Attacks in Federated Learning
    Zhao, Chen
    Wen, Yu
    Li, Shuailou
    Liu, Fucheng
    Meng, Dan
    PROCEEDINGS OF THE 2021 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2021, 2021, : 51 - 62
  • [22] FLRAM: Robust Aggregation Technique for Defense against Byzantine Poisoning Attacks in Federated Learning
    Chen, Haitian
    Chen, Xuebin
    Peng, Lulu
    Ma, Ruikui
    ELECTRONICS, 2023, 12 (21)
  • [23] Defense against local model poisoning attacks to byzantine-robust federated learning
    Lu, Shiwei
    Li, Ruihu
    Chen, Xuan
    Ma, Yuena
    FRONTIERS OF COMPUTER SCIENCE, 2022, 16 (06)
  • [26] Fair Detection of Poisoning Attacks in Federated Learning
    Singh, Ashneet Khandpur
    Blanco-Justicia, Alberto
    Domingo-Ferrer, Josep
    Sanchez, David
    Rebollo-Monedero, David
    2020 IEEE 32ND INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI), 2020, : 224 - 229
  • [27] Federated Learning-Based Intrusion Detection Method for Smart Grid
    Bin Dongmei
    Li Xin
    Yang Chunyan
    Han Songming
    Ling Ying
    2023 2ND ASIA CONFERENCE ON ALGORITHMS, COMPUTING AND MACHINE LEARNING, CACML 2023, 2023, : 316 - 322
  • [28] An optimal federated learning-based intrusion detection for IoT environment
    Karunamurthy, A.
    Vijayan, K.
    Kshirsagar, Pravin R.
    Tan, Kuan Tak
    SCIENTIFIC REPORTS, 2025, 15 (01)
  • [29] The Evolution of Federated Learning-Based Intrusion Detection and Mitigation: A Survey
    Lavaur, Leo
    Pahl, Marc-Oliver
    Busnel, Yann
    Autrel, Fabien
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2022, 19 (03): 2309 - 2332
  • [30] Federated learning-based intrusion detection system for Internet of Things
    Najet Hamdi
    International Journal of Information Security, 2023, 22 : 1937 - 1948