Building A Socially Acceptable Navigation and Behavior of A Mobile Robot Using Q-Learning

Cited by: 0
Author
Dewantara, Bima Sena Bayu [1 ]
Affiliation
[1] Elect Engn Polytech Inst Surabaya, Dept Informat & Comp Engn, Surabaya, Indonesia
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper proposes a self-learning mobile robot that navigates in social environments under a social force framework. It addresses a drawback of approaches based on the Social Force Navigation Model (SFNM), which use a fixed set of SFNM parameters that is not always appropriate for varying conditions. These fixed parameters are usually assumed to be optimal because they are obtained by optimizing over a collection of data samples from specific human interactions. We utilize the Q-learning algorithm to adaptively select the SFNM parameters suited to each circumstance, which means the robot is trained by interacting with its environment directly. However, training a real robot in real environments under the Q-learning framework is difficult, time-consuming, and hazardous. Therefore, in this study, we use a realistic simulator, V-REP, for both training and testing. The simulation results for several scenarios demonstrate that our approach navigates the robot smoothly and safely from the start to the goal position.
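A minimal, illustrative sketch (not taken from the paper) of how tabular Q-learning could adaptively select among a few discrete SFNM parameter sets, as the abstract describes. The candidate parameter values, the state discretization, the reward, and the `env` object standing in for the V-REP simulation are all assumptions made for illustration.

```python
import random
from collections import defaultdict

# Hypothetical candidate SFNM parameter sets (e.g., strength A and range B of
# the repulsive social force); the actual parameterization in the paper may differ.
CANDIDATE_PARAMS = [
    {"A": 2.0, "B": 0.3},
    {"A": 4.0, "B": 0.5},
    {"A": 6.0, "B": 0.8},
]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount factor, exploration rate

# Q-table: discretized social state -> one Q-value per candidate parameter set.
q_table = defaultdict(lambda: [0.0] * len(CANDIDATE_PARAMS))

def choose_action(state):
    """Epsilon-greedy selection of an SFNM parameter set for the current state."""
    if random.random() < EPSILON:
        return random.randrange(len(CANDIDATE_PARAMS))
    values = q_table[state]
    return values.index(max(values))

def update(state, action, reward, next_state):
    """Standard tabular Q-learning update."""
    best_next = max(q_table[next_state])
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])

def train_episode(env):
    """One training episode; `env` is an assumed wrapper around the simulator
    that exposes a discretized social state (e.g., distance/angle to the nearest
    person) and a reward favoring smooth, collision-free progress to the goal."""
    state = env.reset()
    done = False
    while not done:
        action = choose_action(state)
        next_state, reward, done = env.step(CANDIDATE_PARAMS[action])
        update(state, action, reward, next_state)
        state = next_state
```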
Pages: 88 - 93
Page count: 6