Deep Reinforcement Learning of Map-Based Obstacle Avoidance for Mobile Robot Navigation

Cited by: 2

Authors
Chen G. [1 ]
Pan L. [1 ]
Chen Y. [1 ]
Xu P. [2 ]
Wang Z. [1 ]
Wu P. [1 ]
Ji J. [1 ]
Chen X. [1 ]
Affiliations
[1] School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui
[2] School of Data Science, University of Science and Technology of China, Hefei, Anhui
Funding
National Natural Science Foundation of China
Keywords
Deep reinforcement learning; Grid map; Obstacle avoidance; Robot navigation;
DOI
10.1007/s42979-021-00817-z
Abstract
Autonomous, collision-free navigation in complex environments is particularly important for mobile robots. In this paper, we propose an end-to-end deep reinforcement learning method for mobile robot navigation with map-based obstacle avoidance. Using experience collected in a simulation environment, a convolutional neural network is trained to predict the proper steering operation of the robot from its egocentric local grid maps, which can accommodate various sensors and fusion algorithms. We use dueling double DQN with prioritized experience replay to update the parameters of the network and integrate curriculum learning techniques to enhance its performance. The trained deep neural network is then transferred to and executed on a real-world mobile robot to guide it around local obstacles during long-range navigation. Qualitative and quantitative evaluations of the new approach were performed in simulation and in real-robot experiments. The results show that the end-to-end map-based obstacle avoidance model is easy to deploy without any fine-tuning, robust to sensor noise, compatible with different sensors, and outperforms other related DRL-based models on many evaluation metrics. © 2021, The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd.
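For readers unfamiliar with the value-estimation techniques named in the abstract, the sketch below illustrates, in minimal form, the dueling aggregation Q(s,a) = V(s) + A(s,a) − mean_a′ A(s,a′) and the one-step double-DQN target, where the next action is selected by the online network but evaluated by the target network. This is an illustrative sketch only, not the authors' implementation; the paper's actual model is a convolutional network over egocentric local grid maps, and all function and variable names here are hypothetical.

```python
def dueling_q(value, advantages):
    """Combine a state value V(s) with per-action advantages A(s, a).

    Subtracting the mean advantage keeps the decomposition identifiable:
    shifting all advantages by a constant leaves Q unchanged otherwise.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]


def double_dqn_target(reward, done, gamma, q_online_next, q_target_next):
    """One-step double-DQN target for a single transition.

    The online network picks the greedy next action; the target network
    supplies its value, which reduces the overestimation bias of plain DQN.
    """
    if done:
        return reward
    best_action = max(range(len(q_online_next)), key=lambda i: q_online_next[i])
    return reward + gamma * q_target_next[best_action]


# Example: V(s) = 1.0 with three action advantages.
q_values = dueling_q(1.0, [0.0, 2.0, 4.0])   # [-1.0, 1.0, 3.0]

# Example: online net prefers action 1; target net values it at 3.0.
target = double_dqn_target(1.0, False, 0.9, [0.1, 0.5], [2.0, 3.0])  # 3.7
```

Prioritized experience replay (also named in the abstract) would change only which transitions are sampled for these updates, weighting them by temporal-difference error rather than uniformly.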
Related Papers
50 records total
  • [31] Navigation method for mobile robot based on hierarchical deep reinforcement learning
    Wang T.
    Li A.
    Song H.-L.
    Liu W.
    Wang M.-H.
    Kongzhi yu Juece/Control and Decision, 2022, 37 (11): 2799 - 2807
  • [32] A novel mobile robot navigation method based on deep reinforcement learning
    Quan, Hao
    Li, Yansheng
    Zhang, Yi
    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2020, 17 (03)
  • [33] Reinforcement Learning for Mobile Robot Obstacle Avoidance Under Dynamic Environments
    Huang, Liwei
    Qu, Hong
    Fu, Mingsheng
    Deng, Wu
    PRICAI 2018: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2018, 11012 : 441 - 453
  • [34] Mobile Robot Navigation Using Deep Reinforcement Learning
    Lee, Min-Fan Ricky
    Yusuf, Sharfiden Hassen
    PROCESSES, 2022, 10 (12)
  • [35] Obstacle Avoidance Planning of Virtual Robot Picking Path Based on Deep Reinforcement Learning
    Xiong J.
    Li Z.
    Chen S.
    Zheng Z.
    Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, 2020, 51 : 1 - 10
  • [36] Towards Dynamic Obstacle Avoidance for Robot Manipulators with Deep Reinforcement Learning
    Zindler, Friedemann
    Lucchi, Matteo
    Wohlhart, Lucas
    Pichler, Horst
    Hofbaur, Michael
    ADVANCES IN SERVICE AND INDUSTRIAL ROBOTICS, RAAD 2022, 2022, 120 : 89 - 96
  • [37] A Redundancy-Based Approach for Obstacle Avoidance in Mobile Robot Navigation
    Cherubini, Andrea
    Chaumette, Francois
    IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010), 2010, : 5700 - 5705
  • [38] MAP-BASED NAVIGATION FOR A MOBILE ROBOT WITH OMNIDIRECTIONAL IMAGE SENSOR COPIS
    YAGI, Y
    NISHIZAWA, Y
    YACHIDA, M
    IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, 1995, 11 (05): 634 - 648
  • [39] Motion Route Planning and Obstacle Avoidance Method for Mobile Robot Based on Deep Learning
    Cui, Jichao
    Nie, Guanghua
    JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING, 2022, 2022
  • [40] Odometry Algorithm with Obstacle Avoidance on Mobile Robot Navigation
    Khoswanto, Handry
    Santoso, Petrus
    Lim, Resmana
    PROCEEDINGS OF SECOND INTERNATIONAL CONFERENCE ON ELECTRICAL SYSTEMS, TECHNOLOGY AND INFORMATION 2015 (ICESTI 2015), 2016, 365 : 155 - 161