A Q-learning Based Continuous Tuning of Fuzzy Wall Tracking without Exploration

Cited by: 4
Authors
Valiollahi, S. [1 ]
Ghaderi, R. [1 ]
Ebrahimzadeh, A. [1 ]
Affiliation
[1] Babol Univ Technol, Dept Elect & Comp Engn, Babol Sar 7414871167, Iran
Source
INTERNATIONAL JOURNAL OF ENGINEERING | 2012 / Vol. 25 / No. 04
Keywords
Autonomous Navigation; Wall Tracking; Fuzzy Q-learning; Khepera Robot;
DOI
10.5829/idosi.ije.2012.25.04a.07
Chinese Library Classification (CLC)
T [Industrial Technology]
Subject Classification Code
08
Abstract
A simple and easy-to-implement method is proposed to address the wall tracking task of an autonomous robot. The robot should navigate in unknown environments, find the nearest wall, and track it solely based on locally sensed data. The proposed method couples fuzzy logic with Q-learning to meet the requirements of autonomous navigation. The robot summarizes the information obtained from the world into a set of fuzzy states. For each fuzzy state, there are several suggested actions. States are related to their corresponding actions via simple fuzzy if-then rules designed by human reasoning. The robot selects the most encouraged action for each state by Q-learning, through online experience. The objective is to design a wall tracking algorithm that can efficiently adapt itself to different wall shapes in completely unknown environments. Q-learning is applied without any exploration phase, i.e., no training environment is considered. Experimental results on a simulated Khepera robot validate that the proposed method efficiently handles various wall contours, from simple straight walls to complex concave, convex, or polygonal shapes. The robot successfully keeps track of walls while staying within predefined margins.
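The abstract describes rule-wise Q-value tuning with greedy (exploration-free) action selection. The sketch below is a minimal, hypothetical illustration of how such fuzzy Q-learning is commonly set up, not the paper's implementation: each fuzzy rule keeps Q-values for a small set of candidate actions, fired rules vote greedily for their current best candidates, and the temporal-difference update is distributed over the rules in proportion to their firing strengths. All names and constants (n_rules, the candidate action set, alpha, gamma) are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of exploration-free (greedy) fuzzy Q-learning.
# Constants and names are illustrative assumptions, not the paper's values.

n_rules = 9                            # number of fuzzy if-then rules (fuzzy states), assumed
actions = np.array([-0.3, 0.0, 0.3])   # candidate steering corrections per rule, assumed
alpha, gamma = 0.1, 0.9                # learning rate and discount factor, assumed

Q = np.zeros((n_rules, len(actions)))  # one Q-value per (rule, candidate action)

def select_action(phi):
    """Greedy selection: each fired rule votes for its best candidate action;
    the crisp command is the firing-strength-weighted average of those votes."""
    best = Q.argmax(axis=1)                      # index of best candidate per rule
    a = np.dot(phi, actions[best]) / phi.sum()   # defuzzified action
    return a, best

def q_update(phi, best, reward, phi_next):
    """Distribute the Q-learning temporal-difference update over the rules
    in proportion to their normalized firing strengths."""
    w = phi / phi.sum()
    q_now = np.dot(w, Q[np.arange(n_rules), best])
    q_next = np.dot(phi_next / phi_next.sum(), Q.max(axis=1))
    td_error = reward + gamma * q_next - q_now
    Q[np.arange(n_rules), best] += alpha * td_error * w
```

In use, phi would come from fuzzifying the robot's local distance-sensor readings (e.g., on a Khepera simulator), and the reward would penalize leaving the predefined tracking margin; the sketch only illustrates the state-to-action bookkeeping, not the paper's specific rule base or reward design.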
Pages: 355-366
Page count: 12
Related Papers
50 records in total
  • [1] A Q-learning based continuous tuning of fuzzy wall tracking without exploration
    Ghaderi, R. (r_ghaderi@nit.ac.ir), 2012, Materials and Energy Research Center (25):
  • [2] Dynamic Fuzzy Q-Learning with Facility of Tuning and Removing Fuzzy Rules
    Hosoya, Yu
    Umano, Motohide
    2012 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE), 2012,
  • [3] Fuzzy interpolation-based Q-learning with continuous states and actions
    Horiuchi, T
    Fujino, A
    Katai, O
    Sawaragi, T
    FUZZ-IEEE '96 - PROCEEDINGS OF THE FIFTH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1-3, 1996, : 594 - 600
  • [4] Hyperparameter Optimization for Tracking with Continuous Deep Q-Learning
    Dong, Xingping
    Shen, Jianbing
    Wang, Wenguan
    Liu, Yu
    Shao, Ling
    Porikli, Fatih
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 518 - 527
  • [5] Fuzzy Q-learning in continuous state and action space
    Xu M.-L.
    Xu W.-B.
    Journal of China Universities of Posts and Telecommunications, 2010, 17 (04): : 100 - 109
  • [7] Fuzzy Q-learning
    Glorennec, PY
    Jouffe, L
    PROCEEDINGS OF THE SIXTH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS I - III, 1997, : 659 - 662
  • [8] Online tuning of fuzzy inference systems using dynamic fuzzy Q-learning
    Er, MJ
    Deng, C
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2004, 34 (03): : 1478 - 1489
  • [9] An Investigation of Methods of Parameter Tuning For Q-Learning Fuzzy Inference System
    Al-Talabi, Ahmad A.
    Schwartz, Howard M.
    2014 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE), 2014, : 2594 - 2601
  • [10] Continuous interval type-2 fuzzy Q-learning algorithm for trajectory tracking tasks for vehicles
    Xuan, Chengbin
    Lam, Hak-Keung
    Shi, Qian
    Chen, Ming
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2022, 32 (08) : 4788 - 4815