Learning Games for Defending Advanced Persistent Threats in Cyber Systems

Cited by: 9
Authors
Zhu, Tianqing [1 ]
Ye, Dayong [2 ,3 ]
Cheng, Zishuo [2 ,3 ]
Zhou, Wanlei [4 ]
Yu, Philip S. [5 ]
Affiliations
[1] China Univ Geosci, Sch Comp Sci, Wuhan 430074, Peoples R China
[2] Univ Technol Sydney, Ctr Cyber Secur & Privacy, Ultimo, NSW 2007, Australia
[3] Univ Technol Sydney, Sch Comp Sci, Ultimo, NSW 2007, Australia
[4] City Univ Macau, Inst Data Sci, Macau, Peoples R China
[5] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
Funding
US National Science Foundation
Keywords
Advanced persistent threats (APTs); cyber system security; deep reinforcement learning; game theory; security
DOI
10.1109/TSMC.2022.3211866
CLC classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
A cyber system may face multiple attackers from diverse adversaries, who usually employ sophisticated techniques both to continuously steal sensitive data and to avoid detection by defense strategies. This continuous process typically characterizes an advanced persistent threat (APT). Because game theory is an ideal mathematical model for investigating the continuous decision making of competing players, it is widely used to study the interaction between defenders and APT attackers. Although many researchers now use game theory to defend against APT attacks, most existing solutions are limited to single-defender, single-attacker scenarios. In the real world, threats from multiple attackers are not uncommon, and multiple defenders can be put in place. Therefore, to overcome the limitations of existing solutions, we develop a multiagent deep reinforcement learning (MADRL) method with a novel sampling approach. The MADRL method allows defenders to create strategies on the fly and to share their experience with other defenders. To develop this method, we create a multidefender, multiattacker game model and analyze the equilibrium of this model. The results of a series of experiments demonstrate that, with MADRL, defenders can quickly learn efficient strategies against attackers.
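The abstract only sketches the approach; the following minimal Python sketch illustrates the core idea of multiple defenders learning protection strategies by reinforcement learning while sharing experience with one another. It is not the authors' MADRL implementation: the toy game, the bandit-style Q-learners, and all names (Defender, shared_buffer, NUM_NODES, etc.) are assumptions introduced purely for illustration.

# Toy multi-defender, multi-attacker protection game (illustrative only).
# Each defender repeatedly chooses one node to protect; attackers pick targets.
# Defenders learn independently but replay experience collected by their peers.
import random
from collections import deque

NUM_NODES = 4        # resources an APT attacker may target
NUM_DEFENDERS = 2
NUM_ATTACKERS = 2
EPISODES = 5000

class Defender:
    """Independent epsilon-greedy Q-learner over 'which node to protect'."""
    def __init__(self, lr=0.1, eps=0.1):
        self.q = [0.0] * NUM_NODES
        self.lr, self.eps = lr, eps

    def act(self):
        if random.random() < self.eps:
            return random.randrange(NUM_NODES)
        return max(range(NUM_NODES), key=lambda a: self.q[a])

    def learn(self, action, reward):
        # One-step update; this stateless toy game has no next state.
        self.q[action] += self.lr * (reward - self.q[action])

defenders = [Defender() for _ in range(NUM_DEFENDERS)]
shared_buffer = deque(maxlen=1000)   # experience pooled across all defenders

for _ in range(EPISODES):
    # Attackers pick targets; they are biased toward node 0 so that
    # there is a pattern worth learning.
    attacks = [0 if random.random() < 0.6 else random.randrange(NUM_NODES)
               for _ in range(NUM_ATTACKERS)]
    for d in defenders:
        a = d.act()
        reward = 1.0 if a in attacks else -0.1   # +1 for blocking an attack
        d.learn(a, reward)
        shared_buffer.append((a, reward))
    # Experience sharing: every defender also replays transitions
    # collected by its peers.
    for d in defenders:
        batch = random.sample(list(shared_buffer), k=min(8, len(shared_buffer)))
        for a, r in batch:
            d.learn(a, r)

print([round(q, 2) for q in defenders[0].q])  # node 0 should score highest

In the paper the defenders are deep reinforcement-learning agents interacting with APT attackers over time; the sketch replaces that with a stateless toy game only to show the experience-sharing pattern among defenders.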
Pages: 2410-2422
Number of pages: 13
Related papers
50 records in total
  • [1] A Game-Theoretic Method for Defending Against Advanced Persistent Threats in Cyber Systems
    Zhang, Lefeng
    Zhu, Tianqing
    Hussain, Farookh Khadeer
    Ye, Dayong
    Zhou, Wanlei
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 1349 - 1364
  • [2] Robust Federated Learning for Mitigating Advanced Persistent Threats in Cyber-Physical Systems
    Hallaji, Ehsan
    Razavi-Far, Roozbeh
    Saif, Mehrdad
    APPLIED SCIENCES-BASEL, 2024, 14 (19):
  • [3] A dynamic games approach to proactive defense strategies against Advanced Persistent Threats in cyber-physical systems
    Huang, Linan
    Zhu, Quanyan
    COMPUTERS & SECURITY, 2020, 89
  • [4] Flip the Cloud: Cyber-Physical Signaling Games in the Presence of Advanced Persistent Threats
    Pawlick, Jeffrey
    Farhang, Sadegh
    Zhu, Quanyan
    DECISION AND GAME THEORY FOR SECURITY, GAMESEC 2015, 2015, 9406 : 289 - 308
  • [5] Defending against cyber threats
    Canan, James W.
    AEROSPACE AMERICA, 2011, 49 (09) : 22 - +
  • [6] Defending Against Advanced Persistent Threats Using Game-Theory
    Rass, Stefan
    Koenig, Sandra
    Schauer, Stefan
    PLOS ONE, 2017, 12 (01):
  • [7] Security Evaluation of the Cyber Networks Under Advanced Persistent Threats
    Yang, Lu-Xing
    Li, Pengdeng
    Yang, Xiaofan
    Tang, Yuan Yan
    IEEE ACCESS, 2017, 5 : 20111 - 20123
  • [8] A Cyber Kill Chain Approach for Detecting Advanced Persistent Threats
    Ahmed, Yussuf
    Asyhari, A. Taufiq
    Rahman, Md Arafatur
    CMC-COMPUTERS MATERIALS & CONTINUA, 2021, 67 (02): 2497 - 2513
  • [9] Machine Learning for Human-Machine Systems With Advanced Persistent Threats
    Chen, Long
    Zhang, Wei
    Song, Yanqing
    Chen, Jianguo
    IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, 2024, 54 (06) : 753 - 761
  • [10] DIFT Games: Dynamic Information Flow Tracking Games for Advanced Persistent Threats
    Sahabandu, Dinuka
    Xiao, Baicen
    Clark, Andrew
    Lee, Sangho
    Lee, Wenke
    Poovendran, Radha
    2018 IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2018, : 1136 - 1143