Eavesdropping Game Based on Multi-Agent Deep Reinforcement Learning

Citations: 0
Authors
Guo, Delin [1 ]
Tang, Lan [1 ]
Yang, Lvxi [2 ]
Liang, Ying-Chang [2 ]
Affiliations
[1] Nanjing Univ, Nanjing, Peoples R China
[2] Southeast Univ, Nanjing, Peoples R China
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
Physical layer security; proactive eavesdropping; stochastic game; multi-agent reinforcement learning; WIRETAP CHANNEL;
DOI
10.1109/SPAWC51304.2022.9833927
CLC Number
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
This paper considers an adversarial scenario between a legitimate eavesdropper and a suspicious communication pair. All three nodes are equipped with multiple antennas. The eavesdropper, which operates in full-duplex mode, aims to wiretap the suspicious communication pair via proactive jamming. On the other hand, the suspicious transmitter, which can send artificial noise (AN) to disturb the wiretap channel, aims to guarantee secrecy. More specifically, the eavesdropper adjusts its jamming power to enhance the wiretap rate, while the suspicious transmitter jointly adapts its transmit power and noise power against the eavesdropping. Considering the partial observations and the complicated interactions between the eavesdropper and the suspicious pair under unknown system dynamics, we model the problem as an imperfect-information stochastic game. To approach the Nash equilibrium of the eavesdropping game, we develop a multi-agent reinforcement learning (MARL) algorithm, termed neural fictitious self-play with soft actor-critic (NFSP-SAC), by combining fictitious self-play (FSP) with a deep reinforcement learning algorithm, SAC. The introduction of SAC enables FSP to handle problems with continuous, high-dimensional observation and action spaces. Simulation results demonstrate that the power allocation policies learned by our method empirically converge to a Nash equilibrium, whereas the baseline reinforcement learning algorithms suffer from severe fluctuations during learning.
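The NFSP control flow described in the abstract (each agent mixes a best-response policy with an average policy, feeding two separate buffers) can be sketched as below. This is a minimal toy illustration, not the authors' implementation: the SAC best-response learner and the average-policy network are replaced by placeholder linear-tanh maps, and all class, method, and parameter names (e.g. `NFSPAgent`, `eta`) are hypothetical.

```python
import random
import numpy as np

class NFSPAgent:
    """Toy sketch of neural fictitious self-play (NFSP) bookkeeping.

    With probability ``eta`` the agent plays its best response (SAC in
    the paper; a fixed linear-tanh map here) and logs the action to a
    supervised buffer; otherwise it follows the average policy, which
    is trained by behavioural cloning on that buffer.
    """

    def __init__(self, obs_dim, act_dim, eta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.eta = eta                                   # anticipatory parameter
        self.W_br = rng.normal(size=(act_dim, obs_dim))  # stand-in best-response policy
        self.W_avg = np.zeros((act_dim, obs_dim))        # average policy (cloned)
        self.rl_buffer = []                              # transitions for the RL learner
        self.sl_buffer = []                              # (obs, action) pairs for cloning

    def act(self, obs):
        if random.random() < self.eta:
            action = np.tanh(self.W_br @ obs)            # SAC would sample here
            self.sl_buffer.append((obs, action))         # supervise the average policy
        else:
            action = np.tanh(self.W_avg @ obs)
        return action

    def observe(self, obs, action, reward, next_obs, done):
        # Every transition trains the best-response learner.
        self.rl_buffer.append((obs, action, reward, next_obs, done))

    def train_average_policy(self, lr=0.1):
        # One behavioural-cloning pass over the supervised buffer.
        for obs, action in self.sl_buffer:
            pred = np.tanh(self.W_avg @ obs)
            grad = (pred - action) * (1.0 - pred ** 2)   # tanh derivative
            self.W_avg -= lr * np.outer(grad, obs)
```

Because SAC natively outputs continuous actions, this mixing scheme extends FSP to the continuous power-allocation actions used in the eavesdropping game; the placeholder maps above only exist to keep the control flow runnable.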
Pages: 5
Related Papers
50 records
  • [41] Multi-Agent Deep Reinforcement Learning for Walker Systems
    Park, Inhee
    Moh, Teng-Sheng
    20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021, : 490 - 495
  • [42] Action Markets in Deep Multi-Agent Reinforcement Learning
    Schmid, Kyrill
    Belzner, Lenz
    Gabor, Thomas
    Phan, Thomy
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2018, PT II, 2018, 11140 : 240 - 249
  • [43] Strategic Interaction Multi-Agent Deep Reinforcement Learning
    Zhou, Wenhong
    Li, Jie
    Chen, Yiting
    Shen, Lin-Cheng
    IEEE Access, 2020, 8 : 119000 - 119009
  • [44] Multi-Agent Deep Reinforcement Learning in Vehicular OCC
    Islam, Amirul
    Musavian, Leila
    Thomos, Nikolaos
    2022 IEEE 95TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-SPRING), 2022,
  • [45] Teaching on a Budget in Multi-Agent Deep Reinforcement Learning
    Ilhan, Ercument
    Gow, Jeremy
    Perez-Liebana, Diego
    2019 IEEE CONFERENCE ON GAMES (COG), 2019,
  • [46] Research Progress of Multi-Agent Deep Reinforcement Learning
    Ding, Shi-Fei
    Du, Wei
    Zhang, Jian
    Guo, Li-Li
    Ding, Ding
    Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47 (07): : 1547 - 1567
  • [47] Hierarchical Architecture for Multi-Agent Reinforcement Learning in Intelligent Game
    Li, Bin
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [48] Offline Multi-Agent Reinforcement Learning in Custom Game Scenario
    Shukla, Indu
    Wilson, William R.
    Henslee, Althea C.
    Dozier, Haley R.
    2023 INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND COMPUTATIONAL INTELLIGENCE, CSCI 2023, 2023, : 329 - 331
  • [49] Meta-game equilibrium for multi-agent reinforcement learning
    Gao, Y
    Huang, JZ
    Rong, HQ
    Zhou, ZH
    AI 2004: ADVANCES IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2004, 3339 : 930 - 936
  • [50] Reinforcement learning based on multi-agent in RoboCup
    Zhang, W
    Li, JG
    Ruan, XG
    ADVANCES IN INTELLIGENT COMPUTING, PT 1, PROCEEDINGS, 2005, 3644 : 967 - 975