Q-Learning for autonomous vehicle navigation

Cited by: 0
Authors
Gonzalez-Miranda, Oscar [1 ]
Miranda, Luis Antonio Lopez [1 ]
Ibarra-Zannatha, Juan Manuel [1 ]
Affiliations
[1] CINVESTAV, Dept Automat Control, Mexico City, DF, Mexico
Keywords
Autonomous vehicles; lane-keeping; Q-learning; reinforcement learning
DOI
10.1109/COMROB60035.2023.10349747
CLC number
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
In this work, we propose and develop a reinforcement Q-learning method for the lane-keeping and obstacle-avoidance driving maneuvers. We detail how to design a simple car simulator and how to use it for training. For each problem, we define different states, actions, and reward functions to obtain a Q-table, which we then use as a driving-maneuver controller in a different simulation environment. With this method, our car successfully drove on a road different from the one on which it was trained. An important conclusion is the possibility of building more complex controllers, such as ones for passing maneuvers or behavior selectors.
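The scheme the abstract describes (discretized states and actions, a reward function, and a Q-table later reused as a controller) can be sketched in standard tabular Q-learning. Everything below is an illustrative assumption, not the paper's actual design: the state count, the three steering actions, and the toy one-dimensional "lane" environment are placeholders for the simulator described in the paper.

```python
import random

# Hypothetical discretization: each state is a lateral-offset bucket.
N_STATES = 15          # assumed; the paper's state design differs
ACTIONS = [-1, 0, 1]   # steer left, keep straight, steer right (illustrative)

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Q-table: one row per state, one column per action.
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

def choose_action(state):
    """Epsilon-greedy selection over the Q-table row."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    row = Q[state]
    return row.index(max(row))

def update(state, action, reward, next_state):
    """Standard tabular Q-learning update rule."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def step(state, action_idx):
    """Toy dynamics: the action shifts the lateral-offset bucket; the
    reward is +1 only in the centered bucket (state 7), else -1."""
    next_state = min(max(state + ACTIONS[action_idx], 0), N_STATES - 1)
    reward = 1.0 if next_state == 7 else -1.0
    return next_state, reward

random.seed(0)
state = 0
for _ in range(5000):
    a = choose_action(state)
    next_state, r = step(state, a)
    update(state, a, r, next_state)
    state = next_state

# The trained Q-table acts as the controller: greedy action per state.
policy = [row.index(max(row)) for row in Q]
```

After training, the greedy policy steers toward the centered bucket from the states it has visited often, mirroring how the paper's Q-table is reused as a lane-keeping controller in a new environment.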
Pages: 138-142
Page count: 5
Related papers
50 items in total
  • [31] Autonomous Driving in Roundabout Maneuvers Using Reinforcement Learning with Q-Learning
    Garcia Cuenca, Laura
    Puertas, Enrique
    Fernandez Andres, Javier
    Aliane, Nourdine
    ELECTRONICS, 2019, 8 (12)
  • [32] Hybrid control for robot navigation - A hierarchical Q-learning algorithm
    Chen, Chunlin
    Li, Han-Xiong
    Dong, Daoyi
    IEEE ROBOTICS & AUTOMATION MAGAZINE, 2008, 15 (02) : 37 - 47
  • [33] Autonomous Vehicle Motion Control and Energy Optimization Based on Q-Learning for a 4-Wheel Independently Driven Electric Vehicle
    Hou, Shengyan
    Chen, Hong
    Liu, Jinfa
    Wang, Yilin
    Liu, Xuan
    Lin, Runzi
    Gao, Jinwu
    UNMANNED SYSTEMS, 2025,
  • [34] Deep Q-Learning for Navigation of Robotic Arm for Tokamak Inspection
    Jain, Swati
    Sharma, Priyanka
    Bhoiwala, Jaina
    Gupta, Sarthak
    Dutta, Pramit
    Gotewal, Krishan Kumar
    Rastogi, Naveen
    Raju, Daniel
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2018, PT IV, 2018, 11337 : 62 - 71
  • [35] Object Goal Navigation using Data Regularized Q-Learning
    Gireesh, Nandiraju
    Kiran, D. A. Sasi
    Banerjee, Snehasis
    Sridharan, Mohan
    Bhowmick, Brojeshwar
    Krishna, Madhava
    2022 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2022, : 1092 - 1097
  • [36] RSS-Based Q-Learning for Indoor UAV Navigation
    Chowdhury, Md Moin Uddin
    Erden, Fatih
    Guvenc, Ismail
    MILCOM 2019 - 2019 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM), 2019,
  • [37] An Error-Sensitive Q-learning Approach for Robot Navigation
    Tang, Rongkuan
    Yuan, Hongliang
    2015 34TH CHINESE CONTROL CONFERENCE (CCC), 2015, : 5835 - 5840
  • [38] Application of Deep Q-Learning for Wheel Mobile Robot Navigation
    Mohanty, Prases K.
    Sah, Arun Kumar
    Kumar, Vikas
    Kundu, Shubhasri
    2017 3RD INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND NETWORKS (CINE), 2017, : 88 - 93
  • [39] Autonomous Decentralized Traffic Control Using Q-Learning in LPWAN
    Kaburaki, Aoto
    Adachi, Koichi
    Takyu, Osamu
    Ohta, Mai
    Fujii, Takeo
    IEEE ACCESS, 2021, 9 : 93651 - 93661
  • [40] Enhanced continuous valued Q-learning for real autonomous robots
    Takeda, M
    Nakamura, T
    Imai, M
    Ogasawara, T
    Asada, M
    ADVANCED ROBOTICS, 2000, 14 (05) : 439 - 441