Deep Reinforcement Learning-based ROS-Controlled RC Car for Autonomous Path Exploration in the Unknown Environment

Cited by: 0
Authors
Hossain, Sabir [1 ]
Doukhi, Oualid [1 ]
Jo, Yeonho [1 ]
Lee, Deok-Jin [1 ]
Affiliations
[1] Kunsan Natl Univ, Sch Mech & Convergence Syst Engn, 558 Daehak Ro, Gunsan 54150, South Korea
Source
2020 20TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS) | 2020
Funding
National Research Foundation of Singapore;
Keywords
Deep-Q Network; Laser Map; ROS; Gazebo Simulation; Path Exploration;
DOI
10.23919/iccas50221.2020.9268370
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Deep reinforcement learning has become a front-runner for solving problems in robot navigation and obstacle avoidance. This paper presents a LiDAR-equipped RC car trained in the GAZEBO environment using a deep reinforcement learning method. Reshaped LiDAR data serves as the input to the neural architecture of the training network, and the paper presents a method for converting the LiDAR data into a 2D grid map for that input. It reports test results of the trained network in different GAZEBO environments and describes the development of the hardware and software systems of the embedded RC car. The hardware system includes a Jetson AGX Xavier, a Teensyduino, and a Hokuyo LiDAR; the software system includes ROS and Arduino C. Finally, the paper presents real-world test results using the model generated from the training simulation.
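The abstract mentions reshaping a LiDAR scan into a 2D grid map as the network input, but does not spell out the projection. A minimal sketch of one common approach, a robot-centred occupancy grid built from beam endpoints, is shown below; the function name `lidar_to_grid`, the 64x64 grid size, and the 10 m range cap are illustrative assumptions, not details from the paper.

```python
import numpy as np

def lidar_to_grid(ranges, angle_min, angle_increment,
                  grid_size=64, max_range=10.0):
    """Project a planar LiDAR scan into a robot-centred 2D grid.

    Cells containing a beam endpoint are marked occupied (1); all
    other cells stay 0. The robot sits at the grid centre.
    """
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    res = (2.0 * max_range) / grid_size        # metres per cell
    centre = grid_size // 2
    for i, r in enumerate(ranges):
        if not np.isfinite(r) or r <= 0.0 or r > max_range:
            continue                           # skip invalid returns
        theta = angle_min + i * angle_increment
        x, y = r * np.cos(theta), r * np.sin(theta)
        col = int(centre + x / res)
        row = int(centre + y / res)
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row, col] = 1                 # obstacle endpoint
    return grid

# Example: a synthetic 360-beam scan at a constant 3 m range
scan = [3.0] * 360
g = lidar_to_grid(scan, angle_min=-np.pi,
                  angle_increment=2 * np.pi / 360)
```

A fixed-size grid like this can be fed directly to a convolutional Deep-Q Network, in the spirit of the pipeline the abstract describes.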
Pages: 1231-1236
Page count: 6
Related Papers
50 records in total
  • [21] Integral reinforcement learning-based approximate minimum time-energy path planning in an unknown environment
    He, Chenyuan
    Wan, Yan
    Gu, Yixin
    Lewis, Frank L.
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2021, 31 (06) : 1905 - 1922
  • [22] Risk-Sensitive Autonomous Exploration of Unknown Environments: A Deep Reinforcement Learning Perspective
    Sarfi, Mohammad Hossein
    Bisheban, Mahdis
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2025, 111 (01)
  • [23] A Reinforcement Learning-Based Adaptive Path Tracking Approach for Autonomous Driving
    Shan, Yunxiao
    Zheng, Boli
    Chen, Longsheng
    Chen, Long
    Chen, De
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (10) : 10581 - 10595
  • [24] Voronoi-Based Multi-Robot Autonomous Exploration in Unknown Environments via Deep Reinforcement Learning
    Hu, Junyan
    Niu, Hanlin
    Carrasco, Joaquin
    Lennox, Barry
    Arvin, Farshad
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (12) : 14413 - 14423
  • [25] Novel deep reinforcement learning based collision avoidance approach for path planning of robots in unknown environment
    Alharthi, Raed
    Noreen, Iram
    Khan, Amna
    Aljrees, Turki
    Riaz, Zoraiz
    Innab, Nisreen
    PLOS ONE, 2025, 20 (01)
  • [26] A Real-Time USV Path Planning Algorithm in Unknown Environment Based on Deep Reinforcement Learning
    Zhou, Zhi-Guo
    Zheng, Yi-Peng
    Liu, Kai-Yuan
    He, Xu
    Qu, Chong
    Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology, 2019, 39 : 86 - 92
  • [27] Reinforcement Learning-based Car-Following Control for Autonomous Vehicles with OTFS
    Liu, Yulin
    Shi, Yuye
    Zhang, Xiaoqi
    Wu, Jun
    Yang, Songyuan
    2024 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC 2024, 2024
  • [28] Deep Reinforcement Learning-Based Vehicle Energy Efficiency Autonomous Learning System
    Qi, Xuewei
    Luo, Yadan
    Wu, Guoyuan
    Boriboonsomsin, Kanok
    Barth, Matthew J.
    2017 28TH IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV 2017), 2017, : 1228 - 1233
  • [29] Environment Exploration for Mapless Navigation based on Deep Reinforcement Learning
    Toan, Nguyen Duc
    Gon-Woo, Kim
    2021 21ST INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2021), 2021, : 17 - 20
  • [30] ATENA: An Autonomous System for Data Exploration Based on Deep Reinforcement Learning
    Bar El, Ori
    Milo, Tova
    Somech, Amit
    PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19), 2019, : 2873 - 2876