Deep Q-Learning-Based Dynamic Management of a Robotic Cluster

Cited by: 4
Authors
Gautier, Paul [1 ]
Laurent, Johann [1 ]
Diguet, Jean-Philippe [2 ]
Affiliations
[1] Univ Bretagne Sud, Lab STICC, UMR6285 CNRS, F-56100 Lorient, France
[2] IRL2010 CNRS, CROSSING, Adelaide, SA 5000, Australia
Keywords
Task analysis; Robots; Drones; Resource management; Computational modeling; Robot kinematics; Servers; MRS; task distribution; robotic cluster; multi-agent systems; reinforcement learning; deep Q-learning; ALLOCATION; SYSTEMS
DOI
10.1109/TASE.2022.3205651
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The ever-increasing demands for autonomy and precision have led to the development of computationally intensive multi-robot systems (MRS). However, many missions preclude the use of a robotic cloud. An alternative is to use a robotic cluster to distribute the computational load locally. This complex distribution requires adaptability to cope with a dynamic and uncertain environment. Classical approaches are too limited to solve this problem, but recent advances in reinforcement learning and deep learning offer new opportunities. In this paper we propose new Deep Q-Network (DQN)-based approaches in which the MRS learns to distribute tasks directly from experience. Since the problem's complexity leads to a curse of dimensionality, we use two specific methods: a new branching architecture called Branching Dueling Q-Network (BDQ), and our own optimized multi-agent solution. We compare them with classical market-based approaches as well as with non-distributed and purely local solutions. Our study shows the relevance of learning-based methods for task mapping and also highlights the capacity of the BDQ architecture to solve high-dimensional state-space problems.

Note to Practitioners: Many industrial applications, such as area exploration and monitoring, can be efficiently delegated to a group of small robots or autonomous vehicles, with advantages in reliability and cost with respect to single-robot solutions. However, autonomy requires increasingly compute-intensive tasks such as computer vision. On the other hand, small robots have energy constraints, limited embedded computing capacities, and usually restricted and/or unreliable communications that limit the use of cloud resources. An alternative solution to this problem consists in sharing the computing resources of the group of robots. Previous work was a proof of concept limited to the parallelisation of a single specific task. In this paper we formalize a general method that allows the group of robots to learn in the field how to efficiently distribute tasks in order to optimize the execution time of a mission under energy constraints. We demonstrate the relevance of our solution over market-based and non-distributed approaches by means of intensive simulations. This successful study is a necessary first step towards the distribution and parallelisation of computation tasks over a robotic cluster. The next steps, not yet tested, will address hardware-in-the-loop simulation and finally a real-life mission with a group of robots.
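To illustrate the branching idea the abstract refers to, the sketch below shows a minimal BDQ-style network in PyTorch: a shared state encoder feeds one common state-value stream and one advantage head per action dimension, so the number of Q-value outputs grows linearly rather than combinatorially with the number of action dimensions. This is a generic illustration of the BDQ architecture, not the authors' implementation; all sizes (state_dim, num_branches, actions_per_branch) and layer widths are assumed for the example.

```python
# Minimal sketch of a Branching Dueling Q-Network (BDQ) in PyTorch.
# Assumption: each action dimension (branch) is a small discrete choice,
# e.g. which robot in the cluster a given task is assigned to.
import torch
import torch.nn as nn


class BDQNetwork(nn.Module):
    def __init__(self, state_dim: int, num_branches: int, actions_per_branch: int):
        super().__init__()
        # Shared trunk: encodes the state once for all action dimensions.
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        # Single state-value stream V(s), shared across branches.
        self.value = nn.Linear(128, 1)
        # One advantage head per action dimension.
        self.advantages = nn.ModuleList(
            nn.Linear(128, actions_per_branch) for _ in range(num_branches)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v = self.value(h)                                  # (batch, 1)
        # Dueling aggregation per branch: Q_d = V + (A_d - mean(A_d)).
        q_branches = []
        for adv in self.advantages:
            a = adv(h)                                     # (batch, actions)
            q_branches.append(v + a - a.mean(dim=-1, keepdim=True))
        return torch.stack(q_branches, dim=1)              # (batch, branches, actions)


# Usage: select one discrete sub-action per branch with an independent argmax.
net = BDQNetwork(state_dim=32, num_branches=4, actions_per_branch=3)
q = net(torch.randn(8, 32))        # batch of 8 states
actions = q.argmax(dim=-1)         # (8, 4): one choice per action dimension
```

The design point this captures is why branching tames the curse of dimensionality: a flat DQN over the same joint action space would need 3^4 = 81 outputs here, while the branched network needs only 4 x 3 = 12 advantage outputs plus one value output.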
Pages: 2503-2515
Number of pages: 13