Design of a Feasible Wireless MAC Communication Protocol via Multi-Agent Reinforcement Learning

Cited by: 0
Authors
Miuccio, Luciano [1 ]
Riolo, Salvatore [1 ]
Bennis, Mehdi [2 ]
Panno, Daniela [1 ]
Affiliations
[1] Univ Catania, Dept Elect Elect & Comp Engn, Catania, Italy
[2] Univ Oulu, Ctr Wireless Commun, Oulu, Finland
Source
2024 IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING FOR COMMUNICATION AND NETWORKING, ICMLCN 2024 | 2024
Keywords
6G; multi-agent reinforcement learning; feasibility; protocol learning; MAC; industrial networks
DOI
10.1109/ICMLCN59089.2024.10624759
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In future beyond-5G (B5G) and 6G wireless networks, automatically learning a medium access control (MAC) communication protocol via the multi-agent reinforcement learning (MARL) paradigm has been receiving much attention. The proposals available in the literature show promising simulation results. However, they were designed to run in computer simulations, where an environment provides observations and rewards to the agents while neglecting the communication overhead. As a result, these solutions either cannot be implemented in real-world scenarios as they are, or would require huge additional costs. In this paper, we focus on this feasibility problem. First, we provide a new description of the main learning schemes available in the literature from the perspective of feasibility in practical scenarios. Then, we propose a new feasible MARL-based learning framework that goes beyond the concept of an omniscient environment. We properly model a feasible Markov decision process (MDP), identifying which physical entity calculates the reward and how the reward is provided to the learning agents. The proposed learning framework is designed to reduce the impact on communication resources while better exploiting the available information to learn efficient MAC protocols. Finally, we compare the proposed feasible framework against other solutions in terms of training convergence and the communication performance achieved by the learned MAC protocols. Simulation results show that our feasible system achieves performance in line with that of the unfeasible solutions.
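To make the abstract's central idea concrete — the reward being computed by a real physical entity (e.g. the base station) and delivered to the agents over the air, rather than handed out by an omniscient simulator — the following is a minimal, purely illustrative Python sketch. It is not the authors' code: the agent policy, the channel model, and the reward shaping (+1 per collision-free transmission, -1 per collision) are all hypothetical assumptions chosen for clarity.

```python
# Hypothetical sketch (not the paper's implementation): a MARL MAC step in
# which the base station (BS), a physical entity, computes one shared reward
# from the observed channel outcome and broadcasts it on the downlink.
import random

NUM_AGENTS = 4      # assumed number of user equipments (UEs)
NUM_CHANNELS = 2    # assumed number of shared channels

def agent_policy(obs):
    """Placeholder policy: pick a channel to transmit on, or -1 to stay idle."""
    return random.choice([-1] + list(range(NUM_CHANNELS)))

def base_station_reward(actions):
    """BS-side reward: +1 per collision-free transmission, -1 per collision.

    The BS can compute this from what it physically observes on each channel,
    so no omniscient simulator is needed to produce the learning signal."""
    reward = 0
    for ch in range(NUM_CHANNELS):
        transmitters = [a for a in actions if a == ch]
        if len(transmitters) == 1:
            reward += 1        # successful transmission
        elif len(transmitters) > 1:
            reward -= 1        # collision
    return reward

def step(observations):
    """One interaction step: agents act, BS computes and broadcasts reward."""
    actions = [agent_policy(o) for o in observations]
    reward = base_station_reward(actions)     # computed at the BS...
    feedback = [reward] * NUM_AGENTS          # ...delivered on the downlink
    return actions, feedback

actions, feedback = step([None] * NUM_AGENTS)
print(feedback)   # every agent receives the same BS-computed reward
```

The design choice this illustrates is the one the abstract argues for: because the reward originates at an entity that already exchanges control signaling with the agents, the learning signal can be piggybacked on existing downlink messages, reducing the impact on communication resources.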
Pages: 94-100 (7 pages)