Always be Pre-Training: Representation Learning for Network Intrusion Detection with GNNs

Cited: 1
Authors
Gu, Zhengyao [1 ]
Lopez, Diego Troy [2 ]
Alrahis, Lilas [3 ]
Sinanoglu, Ozgur [3 ]
Affiliations
[1] NYU, Ctr Data Sci, New York, NY 10012 USA
[2] NYU, Res Technol Serv, New York, NY USA
[3] New York Univ Abu Dhabi, Abu Dhabi, U Arab Emirates
Keywords
Intrusion detection; machine learning; graph neural network; NIDS; few-shot learning; self-supervised learning; INTERNET; THINGS; ATTACK; IOT;
DOI
10.1109/ISQED60706.2024.10528371
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Graph neural network-based network intrusion detection systems have recently demonstrated state-of-the-art performance on benchmark datasets. Nevertheless, these methods rely on target encoding for data pre-processing, which limits widespread adoption because it requires annotated labels, a cost-prohibitive requirement. In this work, we propose a solution combining in-context pre-training with dense representations for categorical features to jointly overcome the label-dependency limitation. Our approach exhibits remarkable data efficiency, achieving over 98% of the performance of the supervised state-of-the-art with less than 4% labeled data on the NF-UQ-NIDS-V2 dataset.
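The key substitution the abstract describes is to replace label-dependent target encoding of categorical flow features with learned dense embeddings. A minimal sketch of that idea follows; the vocabulary, feature names, and table dimensions are illustrative assumptions, not details from the paper, and in practice the table would be refined during self-supervised pre-training rather than left random.

```python
import numpy as np

# Hypothetical illustration: encode a categorical NetFlow feature
# (here, the L4 protocol) as a dense vector via table lookup,
# instead of target-encoding it with label statistics.

rng = np.random.default_rng(0)

# Vocabulary built from unlabeled flows; no attack labels needed.
vocab = {"tcp": 0, "udp": 1, "icmp": 2}
embed_dim = 4  # assumed embedding width for illustration

# Dense embedding table, randomly initialized; a pre-training
# objective (not shown) would update these vectors.
embedding_table = rng.normal(size=(len(vocab), embed_dim))

def embed(values):
    """Map raw categorical values to dense vectors by index lookup."""
    idx = np.array([vocab[v] for v in values])
    return embedding_table[idx]

flows = ["tcp", "udp", "tcp"]
vecs = embed(flows)  # shape (3, 4): one dense vector per flow
```

Because the lookup never consults labels, the same encoding pipeline works for both the unlabeled pre-training corpus and the small labeled fine-tuning set.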
Pages: 8
Related Papers
50 records in total
  • [31] Learning to Sample Replacements for ELECTRA Pre-Training
    Hao, Yaru
    Dong, Li
    Bao, Hangbo
    Xu, Ke
    Wei, Furu
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 4495 - 4506
  • [32] Meta-Learning to Improve Pre-Training
    Raghu, Aniruddh
    Lorraine, Jonathan
    Kornblith, Simon
    McDermott, Matthew
    Duvenaud, David
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [33] Robust Pre-Training by Adversarial Contrastive Learning
    Jiang, Ziyu
    Chen, Tianlong
    Chen, Ting
    Wang, Zhangyang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [34] Graph Representation Learning for Context-Aware Network Intrusion Detection
    Premkumar, Augustine
    Schneider, Madeleine
    Spivey, Carlton
    Pavlik, John A.
    Bastian, Nathaniel D.
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V, 2023, 12538
  • [35] Multilingual Pre-training with Universal Dependency Learning
    Sun, Kailai
    Li, Zuchao
    Zhao, Hai
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [36] Learning Chemical Rules of Retrosynthesis with Pre-training
    Jiang, Yinjie
    Wei, Ying
    Wu, Fei
    Huang, Zhengxing
    Kuang, Kun
    Wang, Zhihua
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 4, 2023, : 5113 - 5121
  • [37] Efficient Conditional Pre-training for Transfer Learning
    Chakraborty, Shuvam
    Uzkent, Burak
    Ayush, Kumar
    Tanmay, Kumar
    Sheehan, Evan
    Ermon, Stefano
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 4240 - 4249
  • [38] AN ADAPTER BASED PRE-TRAINING FOR EFFICIENT AND SCALABLE SELF-SUPERVISED SPEECH REPRESENTATION LEARNING
    Kessler, Samuel
    Thomas, Bethan
    Karout, Salah
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3179 - 3183
  • [39] Unified route representation learning for multi-modal transportation recommendation with spatiotemporal pre-training
    Hao Liu
    Jindong Han
    Yanjie Fu
    Yanyan Li
    Kai Chen
    Hui Xiong
    The VLDB Journal, 2023, 32 : 325 - 342
  • [40] Unified route representation learning for multi-modal transportation recommendation with spatiotemporal pre-training
    Liu, Hao
    Han, Jindong
    Fu, Yanjie
    Li, Yanyan
    Chen, Kai
    Xiong, Hui
    VLDB JOURNAL, 2023, 32 (02): : 325 - 342