Design of a Feasible Wireless MAC Communication Protocol via Multi-Agent Reinforcement Learning

Cited: 0
Authors
Miuccio, Luciano [1 ]
Riolo, Salvatore [1 ]
Bennis, Mehdi [2 ]
Panno, Daniela [1 ]
Affiliations
[1] Univ Catania, Dept Elect Elect & Comp Engn, Catania, Italy
[2] Univ Oulu, Ctr Wireless Commun, Oulu, Finland
Source
2024 IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING FOR COMMUNICATION AND NETWORKING, ICMLCN 2024 | 2024
Keywords
6G; multi-agent reinforcement learning; feasibility; protocol learning; MAC; industrial networks;
DOI
10.1109/ICMLCN59089.2024.10624759
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In future beyond-5G (B5G) and 6G wireless networks, automatically learning a medium access control (MAC) communication protocol via the multi-agent reinforcement learning (MARL) paradigm has been receiving much attention. The proposals available in the literature show promising simulation results. However, they have been designed to run in computer simulations, where an environment provides observations and rewards to the agents while neglecting the communication overhead. As a result, these solutions either cannot be implemented in real-world scenarios as they are, or they require substantial additional costs. In this paper, we focus on this feasibility problem. First, we provide a new description of the main learning schemes available in the literature from the perspective of feasibility in practical scenarios. Then, we propose a new feasible MARL-based learning framework that goes beyond the concept of an omniscient environment. We properly model a feasible Markov decision process (MDP), identifying which physical entity computes the reward and how the reward is delivered to the learning agents. The proposed learning framework is designed to reduce the impact on communication resources while better exploiting the available information to learn efficient MAC protocols. Finally, we compare the proposed feasible framework against other solutions in terms of training convergence and of the communication performance achieved by the learned MAC protocols. The simulation results show that our feasible system exhibits performance in line with that of the unfeasible solutions.
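To make the feasibility idea in the abstract concrete, the following toy sketch (not taken from the paper; the entity names BaseStation and UEAgent, the reward values, and all hyperparameters are illustrative assumptions) shows a multi-agent MAC setup in which a base station, rather than an omniscient simulation environment, computes a single scalar reward from what it actually decodes in each slot and feeds that reward back to tabular Q-learning agents, so the only training signal crossing the air interface is one value per slot.

# Illustrative sketch only, not the authors' implementation.
import random
from collections import defaultdict

N_AGENTS = 4          # learning UEs contending for one shared channel
N_SLOTS = 20_000      # training slots
EPSILON, ALPHA, GAMMA = 0.1, 0.1, 0.9
ACTIONS = (0, 1)      # 0 = stay silent, 1 = transmit in this slot


class UEAgent:
    """Tabular Q-learning agent; its only state is whether it holds a packet."""
    def __init__(self):
        self.q = defaultdict(lambda: [0.0, 0.0])
        self.has_packet = True

    def act(self, state):
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[state][a])

    def learn(self, state, action, reward, next_state):
        td_target = reward + GAMMA * max(self.q[next_state])
        self.q[state][action] += ALPHA * (td_target - self.q[state][action])


class BaseStation:
    """Computes the reward from its own decoding outcome (no omniscient env)."""
    def reward_from_slot(self, n_transmitters):
        if n_transmitters == 1:
            return 1.0          # exactly one transmission: decoded successfully
        if n_transmitters == 0:
            return 0.0          # idle slot
        return -1.0             # collision: nothing decoded


def train():
    agents = [UEAgent() for _ in range(N_AGENTS)]
    bs = BaseStation()
    for _ in range(N_SLOTS):
        # New traffic arrives at idle UEs with a fixed (assumed) probability.
        for a in agents:
            if not a.has_packet and random.random() < 0.2:
                a.has_packet = True
        states = [int(a.has_packet) for a in agents]
        actions = [a.act(s) if a.has_packet else 0 for a, s in zip(agents, states)]
        # The base station derives the reward from its decoding outcome and
        # broadcasts it on the downlink: one scalar per slot is the only
        # training signal the agents receive over the air.
        reward = bs.reward_from_slot(sum(actions))
        for agent, s, act in zip(agents, states, actions):
            if act == 1 and reward > 0:
                agent.has_packet = False    # packet delivered successfully
            agent.learn(s, act, reward, int(agent.has_packet))


if __name__ == "__main__":
    train()

Broadcasting one shared scalar per slot keeps the training overhead bounded, which is the kind of trade-off the abstract refers to when it mentions reducing the impact on communication resources; the actual framework in the paper additionally addresses how observations are obtained and how the available information is exploited.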
Pages: 94-100
Page count: 7
Related Papers (50 total)
  • [41] Battlefield Environment Design for Multi-agent Reinforcement Learning
    Do, Seungwon
    Baek, Jaeuk
    Jun, Sungwoo
    Lee, Changeun
    2022 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (IEEE BIGCOMP 2022), 2022, : 318 - 319
  • [42] Learning multi-agent communication with double attentional deep reinforcement learning
    Hangyu Mao
    Zhengchao Zhang
    Zhen Xiao
    Zhibo Gong
    Yan Ni
    Autonomous Agents and Multi-Agent Systems, 2020, 34
  • [43] Multi-Agent Reinforcement Learning
    Stankovic, Milos
    2016 13TH SYMPOSIUM ON NEURAL NETWORKS AND APPLICATIONS (NEUREL), 2016, : 43 - 43
  • [44] Cooperative Behavior by Multi-agent Reinforcement Learning with Abstractive Communication
    Tanda, Jin
    Moustafa, Ahmed
    Ito, Takayuki
    2019 IEEE INTERNATIONAL CONFERENCE ON AGENTS (ICA), 2019, : 8 - 13
  • [45] Diffusion-based Multi-agent Reinforcement Learning with Communication
    Qi, Xinyue
    Tang, Jianhang
    Jin, Jiangming
    Zhang, Yang
    2024 IEEE VTS ASIA PACIFIC WIRELESS COMMUNICATIONS SYMPOSIUM, APWCS 2024, 2024
  • [46] Semantic Communication for Partial Observation Multi-agent Reinforcement Learning
    Do, Hoang Khoi
    Dinh, Thi Quynh
    Nguyen, Minh Duong
    Nguyen, Tien Hoa
    2023 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP, SSP, 2023, : 319 - 323
  • [47] Multi-agent Pathfinding with Communication Reinforcement Learning and Deadlock Detection
    Ye, Zhaohui
    Li, Yanjie
    Guo, Ronghao
    Gao, Jianqi
    Fu, Wen
    INTELLIGENT ROBOTICS AND APPLICATIONS (ICIRA 2022), PT I, 2022, 13455 : 493 - 504
  • [48] Optimistic sequential multi-agent reinforcement learning with motivational communication
    Huang, Anqi
    Wang, Yongli
    Zhou, Xiaoliang
    Zou, Haochen
    Dong, Xu
    Che, Xun
    NEURAL NETWORKS, 2024, 179
  • [49] Cooperative Multi-agent Reinforcement Learning with Hierarchical Communication Architecture
    Liu, Shifan
    Yuan, Quan
    Chen, Bo
    Luo, Guiyang
    Li, Jinglin
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT II, 2022, 13530 : 14 - 25
  • [50] Deep Hierarchical Communication Graph in Multi-Agent Reinforcement Learning
    Liu, Zeyang
    Wan, Lipeng
    Sui, Xue
    Chen, Zhuoran
    Sun, Kewu
    Lan, Xuguang
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 208 - 216