Scalable and Autonomous Network Defense Using Reinforcement Learning

Cited by: 0
Authors
Campbell, Robert G. [1 ]
Eirinaki, Magdalini [1 ]
Park, Younghee [1 ]
Affiliations
[1] San Jose State Univ, Dept Comp Engn, San Jose, CA 95192 USA
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Training; Games; Reinforcement learning; Topology; Network topology; Game theory; Optimization; Markov processes; Graph neural networks; Convolutional neural networks; network defense; Markov games; deep learning; graph convolutional networks
DOI
10.1109/ACCESS.2024.3418931
CLC Number
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Autonomous network defense under attack is critical to protecting network infrastructure from damage in real time. Despite various network intrusion detection techniques, our networks remain unsafe due to the increasing exploitation of software vulnerabilities. Timely response and defense during a network intrusion are therefore essential, given the large scope of cyberattacks in recent years. In this paper, we design a scalable and autonomous network defense method by modeling the interaction between an attacker agent and a defender agent as a zero-sum Markov game. To scale the proposed defense model, we use a graph convolutional network (GCN) along with frame stacking to address the partial observability of the environment. The agents are trained with Proximal Policy Optimization (PPO), which converges well in a reasonable timeframe. In experiments, we evaluate the proposed model on large network sizes while simulating network dynamics, including link failures and other network events. The experimental results demonstrate that the proposed method scales well to larger networks and achieves state-of-the-art results across various threat scenarios.
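The two core ingredients the abstract names — graph-convolutional feature extraction over the network topology, and frame stacking to mitigate partial observability — can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names, the symmetric normalization, and the padding strategy for short histories are all assumptions:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: ReLU(D^-1/2 (A + I) D^-1/2 H W).

    adj:    (n, n) adjacency matrix of the network topology
    feats:  (n, f) per-node observation features
    weight: (f, h) learned projection (random/fixed here for illustration)
    """
    a_hat = adj + np.eye(adj.shape[0])                    # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)

def stack_frames(history, k):
    """Frame stacking: concatenate the last k per-node observations so the
    policy sees a short history instead of a single partial observation."""
    frames = history[-k:]
    if len(frames) < k:                                   # pad with oldest frame
        frames = [frames[0]] * (k - len(frames)) + frames
    return np.concatenate(frames, axis=1)

# Toy 4-node line topology with uniform features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
obs = np.ones((4, 3))
h = gcn_layer(adj, obs, np.ones((3, 2)))                  # (4, 2) node embeddings
stacked = stack_frames([obs, np.zeros((4, 3))], k=3)      # (4, 9) stacked input
```

The stacked, graph-convolved features would then feed the PPO actor-critic heads for both the attacker and defender agents; PPO itself is standard and omitted here.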
Pages: 92919 - 92930 (12 pages)
Related Papers (50 total)
  • [41] Scalable Power Management Using Multilevel Reinforcement Learning for Multiprocessors
    Pan, Gung-Yu
    Jou, Jing-Yang
    Lai, Bo-Cheng
    ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS, 2014, 19 (04)
  • [42] Deep Reinforcement Learning on Autonomous Driving Policy With Auxiliary Critic Network
    Wu, Yuanqing
    Liao, Siqin
    Liu, Xiang
    Li, Zhihang
    Lu, Renquan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (07) : 3680 - 3690
  • [43] Autonomous network cyber offence strategy through deep reinforcement learning
    Sultana, Madeena
    Taylor, Adrian
    Li, Li
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS III, 2021, 11746
  • [44] Reinforcement learning for hierarchical and modular neural network in autonomous robot navigation
    Calvo, R
    Figueiredo, M
    PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS 2003, VOLS 1-4, 2003, : 1340 - 1345
  • [45] Safe reinforcement learning with mixture density network, with application to autonomous driving
    Baheri, Ali
    RESULTS IN CONTROL AND OPTIMIZATION, 2022, 6
  • [46] Reinforcement Learning in Tower Defense
    Dias, Augusto
    Foleiss, Juliano
    Lopes, Rui Pedro
    VIDEOGAME SCIENCES AND ARTS, VJ 2020, 2022, 1531 : 127 - 139
  • [47] An Intelligent Path Planning Scheme of Autonomous Vehicles Platoon Using Deep Reinforcement Learning on Network Edge
    Chen, Chen
    Jiang, Jiange
    Lv, Ning
    Li, Siyu
    IEEE ACCESS, 2020, 8 : 99059 - 99069
  • [49] JANUS: A Simple and Efficient Speculative Defense using Reinforcement Learning
    Aimoniotis, Pavlos
    Kaxiras, Stefanos
    PROCEEDINGS - SYMPOSIUM ON COMPUTER ARCHITECTURE AND HIGH PERFORMANCE COMPUTING, 2024, : 25 - 36
  • [50] SINET: Enabling Scalable Network Routing with Deep Reinforcement Learning on Partial Nodes
    Sun, Penghao
    Li, Junfei
    Guo, Zehua
    Xu, Yang
    Lan, Julong
    Hu, Yuxiang
    PROCEEDINGS OF THE 2019 ACM SIGCOMM CONFERENCE POSTERS AND DEMOS (SIGCOMM '19), 2019, : 88 - 89