A Decentralized Communication Framework Based on Dual-Level Recurrence for Multiagent Reinforcement Learning

Cited by: 3
Authors
Li, Xuesi [1 ]
Li, Jingchen [1 ]
Shi, Haobin [1 ]
Hwang, Kao-Shing [2 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci & Engn, Xian 710129, Shaanxi, Peoples R China
[2] Natl Sun Yat Sen Univ, Dept Elect Engn, Kaohsiung 804, Taiwan
Funding
National Natural Science Foundation of China
Keywords
Reinforcement learning; Logic gates; Training; Adaptation models; Multi-agent systems; Task analysis; Decision making; Gated recurrent network; multiagent reinforcement learning; multiagent system;
DOI
10.1109/TCDS.2023.3281878
Chinese Library Classification (CLC) number
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Designing communication channels among multiple agents is a feasible way to conduct decentralized learning, especially in partially observable environments or large-scale multiagent systems. In this work, a communication model with dual-level recurrence is developed to provide a more efficient communication mechanism for multiagent reinforcement learning. Communication is carried out by a gated-attention-based recurrent network, in which historical states are taken into account and regarded as the second-level recurrence. We separate communication messages from memories in the recurrent model so that the proposed communication flow can adapt to changing communication targets under limited communication, and the communication results are fair to every agent. We discuss our method in detail in both partially observable and fully observable environments. The results of several experiments suggest that our method outperforms existing decentralized communication frameworks and the corresponding centralized training method.
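The mechanism described above (a first-level recurrence over each agent's own memory, a second-level recurrence over messages aggregated by gated attention, with messages kept separate from memories) can be sketched in a minimal, illustrative form. This is not the authors' implementation: all shapes, gate forms, and names (`CommCell`, `attend`, `step`) are assumptions made for the sake of a runnable example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CommCell:
    """Hypothetical per-agent communication cell with dual-level recurrence."""

    def __init__(self, dim, rng):
        # Separate parameter sets: one recurrence over the agent's own memory,
        # one recurrence over outgoing messages (the assumed "second level").
        self.Wz = rng.standard_normal((dim, 2 * dim)) * 0.1  # update gate
        self.Wh = rng.standard_normal((dim, 2 * dim)) * 0.1  # candidate memory
        self.Wm = rng.standard_normal((dim, 2 * dim)) * 0.1  # message recurrence

    def attend(self, query, messages):
        # Soft attention over messages received from the other agents;
        # the number of senders may change between steps, so this adapts
        # to a variable set of communication targets.
        scores = messages @ query                      # (n_messages,)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ messages                      # aggregated message

    def step(self, memory, own_msg, received):
        c = self.attend(memory, received)              # attention-pooled input
        x = np.concatenate([memory, c])
        z = sigmoid(self.Wz @ x)                       # gate, as in a GRU
        h = np.tanh(self.Wh @ x)                       # candidate state
        new_memory = (1 - z) * memory + z * h          # level 1: own memory
        new_msg = np.tanh(self.Wm @ np.concatenate([own_msg, c]))  # level 2
        return new_memory, new_msg

rng = np.random.default_rng(0)
cell = CommCell(dim=4, rng=rng)
memory, msg = np.zeros(4), np.zeros(4)
received = rng.standard_normal((3, 4))   # messages from 3 other agents
memory, msg = cell.step(memory, msg, received)
```

Because the message recurrence uses its own parameters and state, the message stream is decoupled from the agent's memory, which is one plausible reading of "separating communication messages from memories" in the abstract.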
Pages: 640-649
Number of pages: 10
Related papers
50 items in total
  • [21] An Evolutionary Transfer Reinforcement Learning Framework for Multiagent Systems
    Hou, Yaqing
    Ong, Yew-Soon
    Feng, Liang
    Zurada, Jacek M.
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2017, 21 (04) : 601 - 615
  • [22] Dual-Level Framework for OpenBIM-Enabled Design Collaboration
    Jin, Ming
    Li, Baizhan
    BUILDINGS, 2023, 13 (12)
  • [24] Decentralized network level adaptive signal control by multiagent deep reinforcement learning (vol 1, 100020, 2019)
    Gong, Yaobang
    Abdel-Aty, Mohamed
    Cai, Qing
    Rahman, Md Sharikur
    TRANSPORTATION RESEARCH INTERDISCIPLINARY PERSPECTIVES, 2019, 1
  • [25] A Study on Cooperative Action Selection Considering Unfairness in Decentralized Multiagent Reinforcement Learning
    Matsui, Toshihiro
    Matsuo, Hiroshi
    ICAART: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 1, 2017, : 88 - 95
  • [26] Cooperative Partial Task Offloading and Resource Allocation for IIoT Based on Decentralized Multiagent Deep Reinforcement Learning
    Zhang, Fan
    Han, Guangjie
    Liu, Li
    Zhang, Yu
    Peng, Yan
    Li, Chao
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (03) : 5526 - 5544
  • [27] Learning a deep dual-level network for robust DeepFake detection
    Pu, Wenbo
    Hu, Jing
    Wang, Xin
    Li, Yuezun
    Hu, Shu
    Zhu, Bin
    Song, Rui
    Song, Qi
    Wu, Xi
    Lyu, Siwei
    PATTERN RECOGNITION, 2022, 130
  • [28] Dispatching Policy Optimizing of Cruise Taxi in a Multiagent-Based Deep Reinforcement Learning Framework
    Ma X.
    Wuhan Daxue Xuebao (Xinxi Kexue Ban)/Geomatics and Information Science of Wuhan University, 2023, 48 (12) : 2108
  • [29] Blockchain-Based Distributed Multiagent Reinforcement Learning for Collaborative Multiobject Tracking Framework
    Shen, Jiahao
    Sheng, Hao
    Wang, Shuai
    Cong, Ruixuan
    Yang, Da
    Zhang, Yang
    IEEE TRANSACTIONS ON COMPUTERS, 2024, 73 (03) : 778 - 788
  • [30] Communication Efficient Framework for Decentralized Machine Learning
    Elgabli, Anis
    Park, Jihong
    Bedi, Amrit S.
    Bennis, Mehdi
    Aggarwal, Vaneet
    2020 54TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2020, : 47 - 51