Multi-Agent Transfer Reinforcement Learning for Resource Management in Underwater Acoustic Communication Networks

Times Cited: 0
Authors
Wang, Hui [1 ,2 ]
Wu, Hongrun [1 ,2 ]
Chen, Yingpin [1 ,2 ]
Ma, Biyang [3 ]
Affiliations
[1] Minnan Normal Univ, Sch Phys & Informat Engn, Zhangzhou 363000, Peoples R China
[2] Minnan Normal Univ, Key Lab Light Field Manipulat & Syst Integrat Appl, Zhangzhou 363000, Peoples R China
[3] Minnan Normal Univ, Sch Comp Sci, Zhangzhou 363000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Underwater acoustic communication networks (UACNs); transfer Dyna-Q; multi-agent; resource management; user service quality; DEEP NEURAL-NETWORKS; POWER ALLOCATION; PROTOCOL; INTERNET; DESIGN;
DOI
10.1109/TNSE.2023.3335973
CLC Classification Number
T [Industrial Technology];
Subject Classification Code
08;
Abstract
This paper investigates the application of self-organizing networks to the interference problem in underwater acoustic communication networks (UACNs) in which multiple nodes coexist. In this network, each node autonomously adjusts its transmit power based on locally observed information, without intervention from a central controller. Considering the non-convexity of the optimization problem under quality-of-service constraints and the dynamic nature of the underwater environment, we propose a reinforcement learning (RL)-based approach coupled with a distributed coordination mechanism, namely the multi-agent-based transfer Dyna-Q algorithm (MA-TDQ). This algorithm combines Q-learning with the Dyna architecture and transfer learning, and can quickly obtain optimal intelligent resource management strategies. Furthermore, we rigorously prove that the MA-TDQ algorithm converges to a Nash equilibrium. Simulation results indicate that the proposed distributed coordination learning algorithm outperforms existing learning algorithms in terms of learning efficiency, network transmission rate, and communication service quality.
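To make the idea behind the abstract concrete, the following is a minimal single-agent sketch of a Dyna-Q learner warm-started by transfer learning for discrete power-level selection. It is an illustrative assumption, not the paper's MA-TDQ implementation: the class name `TransferDynaQAgent`, the state and reward choices, and all hyperparameters are hypothetical.

```python
import random
from collections import defaultdict

class TransferDynaQAgent:
    """Hypothetical sketch: Dyna-Q with a Q-table warm-started from a source task,
    choosing among discrete transmit power levels from locally observed state."""

    def __init__(self, power_levels, alpha=0.1, gamma=0.9, epsilon=0.1,
                 planning_steps=10, source_q=None):
        self.actions = power_levels                  # discrete transmit power levels
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.planning_steps = planning_steps         # Dyna model-based updates per real step
        # Transfer learning: initialize the Q-table from a source-task policy if provided.
        self.q = defaultdict(float, source_q or {})
        self.model = {}                              # (state, action) -> (reward, next_state)

    def act(self, state):
        # Epsilon-greedy over the local observation (e.g., quantized SINR or interference level).
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Direct RL (Q-learning) update from the real experience.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)])
        # Learn a deterministic environment model, then run Dyna planning sweeps on it.
        self.model[(state, action)] = (reward, next_state)
        for _ in range(self.planning_steps):
            (s, a), (r, s2) = random.choice(list(self.model.items()))
            best = max(self.q[(s2, b)] for b in self.actions)
            self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])
```

In the multi-agent setting the abstract describes, each node would run one such agent on its own locally observed state, with the reward encoding transmission rate and quality-of-service constraints; the distributed coordination mechanism and the convergence-to-Nash-equilibrium analysis are specific to the paper and not reproduced in this sketch.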
Pages: 2012-2023
Number of Pages: 12