Reinforcement learning for multi-agent formation navigation with scalability

Cited by: 3
Authors
Gong, Yalei [1 ]
Xiong, Hongyun [1 ]
Li, Mengmeng [1 ]
Wang, Haibo [1 ]
Nian, Xiaohong [1 ]
Affiliations
[1] Cent South Univ, Clustered Unmanned Syst Res Inst, Sch Automat, Changsha 410073, Hunan, Peoples R China
Keywords
Deep reinforcement learning; Multi-agent formations; Collision avoidance; Scalability
DOI
10.1007/s10489-023-05007-3
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper addresses the multi-agent formation obstacle avoidance (MAFOA) problem using multi-agent deep reinforcement learning (MADRL). MAFOA control aims to achieve and maintain a desired formation while avoiding collisions among agents and with obstacles. It is a research hotspot in multi-agent cooperation owing to its wide applications and inherent challenges. However, current MADRL methods face two major difficulties in solving this problem: 1) the high complexity and uncertainty of the environment when there are many agents; and 2) the lack of scalability when the number of agents varies. To overcome these difficulties, we propose: 1) a local multi-agent deep deterministic policy gradient algorithm that allows each agent to learn from its local neighbors' strategies during training and to act independently during execution; 2) a reinforcement learning framework based on local information that uses partial observations as input and adapts to different numbers of agents; and 3) a hybrid control method that switches between reinforcement learning and PID control to ensure formation stability. We evaluate our method in the multi-agent particle environment (MPE) and compare it with other algorithms to demonstrate its feasibility and superiority for solving the MAFOA problem.
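The abstract describes a hybrid controller that switches between a learned policy and PID control to keep the formation stable. The sketch below is not the authors' implementation; the class names, gains, and the obstacle-distance switching rule are illustrative assumptions showing one way such a switch could be wired together.

```python
# Minimal sketch (assumed design, not the paper's code): switch between a
# learned RL policy near obstacles and a PID-style formation-keeping law
# otherwise. All names, gains, and the switching threshold are hypothetical.
import numpy as np


class PIDFormationController:
    """PD-style controller driving an agent toward its desired formation slot."""

    def __init__(self, kp=1.0, kd=0.5):
        self.kp, self.kd = kp, kd
        self.prev_error = None

    def act(self, position, desired_position, dt=0.1):
        error = desired_position - position
        d_error = (np.zeros_like(error) if self.prev_error is None
                   else (error - self.prev_error) / dt)
        self.prev_error = error
        return self.kp * error + self.kd * d_error


class HybridController:
    """Use the RL policy when an obstacle is close, PID otherwise (assumed rule)."""

    def __init__(self, rl_policy, pid, switch_radius=1.0):
        self.rl_policy = rl_policy        # callable: local observation -> action
        self.pid = pid
        self.switch_radius = switch_radius

    def act(self, local_obs, position, desired_position, obstacle_distances):
        if obstacle_distances.size and obstacle_distances.min() < self.switch_radius:
            return self.rl_policy(local_obs)              # collision-avoidance mode
        return self.pid.act(position, desired_position)   # formation-keeping mode


if __name__ == "__main__":
    # Placeholder standing in for a trained actor network.
    dummy_policy = lambda obs: np.clip(-obs[:2], -1.0, 1.0)
    ctrl = HybridController(dummy_policy, PIDFormationController())
    pos, goal = np.array([0.0, 0.0]), np.array([2.0, 1.0])
    obs = np.array([0.5, -0.2, 0.0, 0.0])                  # toy local observation
    print(ctrl.act(obs, pos, goal, obstacle_distances=np.array([3.0])))
```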
Pages: 28207-28225
Page count: 19
Related Papers
50 records in total (entries [21]-[30] shown)
  • [21] Multi-Agent Uncertainty Sharing for Cooperative Multi-Agent Reinforcement Learning
    Chen, Hao
    Yang, Guangkai
    Zhang, Junge
    Yin, Qiyue
    Huang, Kaiqi
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [22] Hierarchical multi-agent reinforcement learning
    Mohammad Ghavamzadeh
    Sridhar Mahadevan
    Rajbala Makar
    Autonomous Agents and Multi-Agent Systems, 2006, 13 : 197 - 229
  • [23] Learning to Share in Multi-Agent Reinforcement Learning
    Yi, Yuxuan
    Li, Ge
    Wang, Yaowei
    Lu, Zongqing
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [24] Multi-Agent Reinforcement Learning for Microgrids
    Dimeas, A. L.
    Hatziargyriou, N. D.
    IEEE POWER AND ENERGY SOCIETY GENERAL MEETING 2010, 2010,
  • [25] Hierarchical multi-agent reinforcement learning
    Ghavamzadeh, Mohammad
    Mahadevan, Sridhar
    Makar, Rajbala
    AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, 2006, 13 (02) : 197 - 229
  • [26] Multi-agent Exploration with Reinforcement Learning
    Sygkounas, Alkis
    Tsipianitis, Dimitris
    Nikolakopoulos, George
    Bechlioulis, Charalampos P.
    2022 30TH MEDITERRANEAN CONFERENCE ON CONTROL AND AUTOMATION (MED), 2022, : 630 - 635
  • [27] Partitioning in multi-agent reinforcement learning
    Sun, R
    Peterson, T
    FROM ANIMALS TO ANIMATS 6, 2000, : 325 - 332
  • [28] The Dynamics of Multi-Agent Reinforcement Learning
    Dickens, Luke
    Broda, Krysia
    Russo, Alessandra
    ECAI 2010 - 19TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2010, 215 : 367 - 372
  • [29] Multi-agent reinforcement learning: A survey
    Busoniu, Lucian
    Babuska, Robert
    De Schutter, Bart
    2006 9TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION, VOLS 1- 5, 2006, : 1133 - +
  • [30] FoX: Formation-Aware Exploration in Multi-Agent Reinforcement Learning
    Jo, Yonghyeon
    Lee, Sunwoo
    Yeom, Junghyuk
    Han, Seungyul
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 12, 2024, : 12985 - 12994