Adaptive Learning: A New Decentralized Reinforcement Learning Approach for Cooperative Multiagent Systems

Cited by: 13
Authors:
Li, Meng-Lin [1 ]
Chen, Shaofei [1 ]
Chen, Jing [1 ]
Affiliation:
[1] Natl Univ Def Technol, Coll Intelligence Sci & Technol, Changsha 410073, Peoples R China
Source:
IEEE ACCESS | 2020, Vol. 8
Funding:
National Natural Science Foundation of China
Keywords:
Learning (artificial intelligence); Training; Multi-agent systems; Heuristic algorithms; Roads; Urban areas; Games; Reinforcement learning; multiagent system; intelligent control;
DOI
10.1109/ACCESS.2020.2997899
Chinese Library Classification (CLC) number:
TP [Automation Technology, Computer Technology]
Discipline classification code:
0812
Abstract:
Multiagent systems (MASs) have received extensive attention in a variety of domains, such as robotics and distributed control. This paper focuses on how independent learners (ILs, the structure used in decentralized reinforcement learning) decide on their individual behaviors so as to achieve coherent joint behavior. To date, reinforcement learning (RL) approaches for ILs have not guaranteed convergence to the optimal joint policy in scenarios in which communication is difficult. In particular, a decentralized algorithm does not distinguish how much of the credit for a joint outcome belongs to a single agent's action, which can lead to miscoordination of joint actions. It is therefore highly significant to study coordination mechanisms between agents in MASs. Most previous coordination mechanisms work by modeling the communication mechanism or the policies of other agents. Such methods are applicable only to a particular system and thus do not generalize, especially when there are dozens or more agents. Therefore, this paper focuses on MASs that contain more than a dozen agents. By incorporating parallel computation, the experimental environment is brought closer to realistic application scenarios. Building on the paradigm of centralized training with decentralized execution (CTDE), a multiagent reinforcement learning algorithm for implicit coordination based on the temporal-difference (TD) error is proposed. The new algorithm dynamically adjusts the learning rate by analyzing the dissonance problem in matrix games and extending that analysis to the multiagent environment. By adjusting the learning rate dynamically across agents, coordination of the agents' strategies can be achieved. Experimental results show that the proposed algorithm effectively improves the coordination ability of a MAS; moreover, its training results exhibit lower variance than those of the hysteretic Q-learning (HQL) algorithm. Hence, miscoordination in a MAS can be avoided to some extent without additional communication. This work provides a new way to address the miscoordination problem for reinforcement learning algorithms at the scale of dozens or more agents. As a new IL-structured algorithm, it should be extended and studied further.
Pages: 99404-99421
Page count: 18
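
The abstract describes adjusting each independent learner's learning rate as a function of the TD error and compares the result against hysteretic Q-learning (HQL). The paper's exact update rule is not reproduced here; as a point of reference, the sketch below implements the standard HQL-style update for a tabular independent learner, which the proposed method generalizes by making the rate dynamic rather than fixed. Class and parameter names, the default rates, and the epsilon-greedy policy are illustrative assumptions, not the authors' implementation.

import numpy as np

class HystereticIndependentLearner:
    """Tabular independent Q-learner with TD-error-dependent learning rates.

    Hysteretic Q-learning (the HQL baseline mentioned in the abstract) uses a
    larger rate `alpha` when the TD error is positive and a smaller rate `beta`
    when it is negative, so an agent does not quickly unlearn good joint
    actions just because its teammates happened to explore badly.
    """

    def __init__(self, n_states, n_actions, alpha=0.1, beta=0.01,
                 gamma=0.95, epsilon=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.alpha = alpha      # optimistic rate, used for positive TD errors
        self.beta = beta        # pessimistic rate, used for negative TD errors
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def act(self, state, rng):
        # Epsilon-greedy action selection over the agent's own Q-table.
        if rng.random() < self.epsilon:
            return int(rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state, done):
        # One-step Q-learning target and TD error.
        target = reward if done else reward + self.gamma * np.max(self.q[next_state])
        td_error = target - self.q[state, action]
        # Hysteretic rule: learn fast from good news, slowly from bad news.
        lr = self.alpha if td_error >= 0 else self.beta
        self.q[state, action] += lr * td_error
        return td_error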