Eavesdropping Game Based on Multi-Agent Deep Reinforcement Learning

Cited by: 0
Authors
Guo, Delin [1 ]
Tang, Lan [1 ]
Yang, Lvxi [2 ]
Liang, Ying-Chang [2 ]
Affiliations
[1] Nanjing Univ, Nanjing, Peoples R China
[2] Southeast Univ, Nanjing, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Physical layer security; proactive eavesdropping; stochastic game; multi-agent reinforcement learning; WIRETAP CHANNEL;
DOI
10.1109/SPAWC51304.2022.9833927
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
This paper considers an adversarial scenario between a legitimate eavesdropper and a suspicious communication pair. All three nodes are equipped with multiple antennas. The eavesdropper, which operates in full-duplex mode, aims to wiretap the suspicious communication pair via proactive jamming. The suspicious transmitter, in turn, can send artificial noise (AN) to disturb the wiretap channel and thereby guarantee secrecy. More specifically, the eavesdropper adjusts its jamming power to enhance the wiretap rate, while the suspicious transmitter jointly adapts its transmit power and noise power to counter the eavesdropping. Considering the partial observations and the complicated interactions between the eavesdropper and the suspicious pair under unknown system dynamics, we model the problem as an imperfect-information stochastic game. To approach the Nash equilibrium of the eavesdropping game, we develop a multi-agent reinforcement learning (MARL) algorithm, termed neural fictitious self-play with soft actor-critic (NFSP-SAC), which combines fictitious self-play (FSP) with the deep reinforcement learning algorithm SAC. Introducing SAC enables FSP to handle problems with continuous, high-dimensional observation and action spaces. Simulation results demonstrate that the power allocation policies learned by our method empirically converge to a Nash equilibrium, while the compared reinforcement learning algorithms suffer from severe fluctuations during learning.
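The NFSP control flow described in the abstract can be sketched schematically: each agent keeps a best-response policy (trained by RL; SAC in the paper) and an average policy fitted to its own past best-response actions, and mixes them with an anticipatory parameter. The sketch below is a minimal illustration of that flow only, assuming a toy one-dimensional power action in [0, 1]; the class name, the heuristic stand-ins for the SAC actor and the supervised average policy, and all parameters are hypothetical, not the paper's implementation.

```python
import random


class NFSPAgentSketch:
    """Schematic NFSP agent: mixes a best-response policy with an
    average policy learned from past best-response actions.
    Both policies here are toy placeholders, not SAC networks."""

    def __init__(self, eta=0.1, buffer_size=100):
        self.eta = eta              # anticipatory parameter: prob. of playing best response
        self.buffer_size = buffer_size
        self.reservoir = []         # buffer for supervised learning of the average policy
        self.seen = 0               # count of best-response actions observed so far

    def best_response_action(self, obs):
        # Placeholder for the SAC actor: a noisy power level clamped to [0, 1].
        return min(1.0, max(0.0, obs + random.gauss(0.0, 0.05)))

    def average_policy_action(self, obs):
        # Placeholder average policy: mean of stored best-response actions.
        if not self.reservoir:
            return 0.5
        return sum(self.reservoir) / len(self.reservoir)

    def act(self, obs):
        # With probability eta play (and log) the best response;
        # otherwise play the average policy, as in fictitious self-play.
        if random.random() < self.eta:
            action = self.best_response_action(obs)
            self._reservoir_add(action)
            return action
        return self.average_policy_action(obs)

    def _reservoir_add(self, action):
        # Reservoir sampling keeps an unbiased sample of best-response play.
        self.seen += 1
        if len(self.reservoir) < self.buffer_size:
            self.reservoir.append(action)
        else:
            j = random.randrange(self.seen)
            if j < self.buffer_size:
                self.reservoir[j] = action
```

In the game of the paper, one such agent would control the eavesdropper's jamming power and another the suspicious transmitter's transmit/AN powers, each updating against the other's mixed policy.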
Pages: 5
Related Papers
50 records
  • [31] A Transfer Learning Framework for Deep Multi-Agent Reinforcement Learning
    Liu, Yi
    Wu, Xiang
    Bo, Yuming
    Wang, Jiacun
    Ma, Lifeng
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2024, 11 (11) : 2346 - 2348
  • [32] A review of cooperative multi-agent deep reinforcement learning
    Afshin Oroojlooy
    Davood Hajinezhad
    Applied Intelligence, 2023, 53 : 13677 - 13722
  • [33] Experience Selection in Multi-Agent Deep Reinforcement Learning
    Wang, Yishen
    Zhang, Zongzhang
    2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), 2019, : 864 - 870
  • [34] Multi-Agent Deep Reinforcement Learning with Emergent Communication
    Simoes, David
    Lau, Nuno
    Reis, Luis Paulo
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [35] Sparse communication in multi-agent deep reinforcement learning
    Han, Shuai
    Dastani, Mehdi
    Wang, Shihan
    NEUROCOMPUTING, 2025, 625
  • [36] Multi-Agent Deep Reinforcement Learning with Human Strategies
    Thanh Nguyen
    Ngoc Duy Nguyen
    Nahavandi, Saeid
    2019 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL TECHNOLOGY (ICIT), 2019, : 1357 - 1362
  • [37] Cooperative Exploration for Multi-Agent Deep Reinforcement Learning
    Liu, Iou-Jen
    Jain, Unnat
    Yeh, Raymond A.
    Schwing, Alexander G.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [38] Competitive Evolution Multi-Agent Deep Reinforcement Learning
    Zhou, Wenhong
    Chen, Yiting
    Li, Jie
    PROCEEDINGS OF THE THIRD INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND APPLICATION ENGINEERING (CSAE2019), 2019,
  • [39] Strategic Interaction Multi-Agent Deep Reinforcement Learning
    Zhou, Wenhong
    Li, Jie
    Chen, Yiting
    Shen, Lin-Cheng
    IEEE ACCESS, 2020, 8 : 119000 - 119009