Fuzzy Rule Interpolation-based Q-learning

Cited by: 0
Authors
Vincze, David [1 ]
Kovacs, Szilveszter [1 ]
Affiliations
[1] Univ Miskolc, Dept Informat Technol, Miskolc, Hungary
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning is a well-known topic in computational intelligence. It can be used to solve control problems in unknown environments without defining an exact method for solving the problem in each situation. Instead, only the goal is defined, and every action taken in the different states receives feedback, called a reward or punishment (positive or negative reward). Based on these rewards the system can learn which action is considered best in a given state. A method called Q-learning can be used for building up the state-action-value function. This method uses discrete states. With the application of fuzzy reasoning, the method can be extended to continuous environments, called Fuzzy Q-learning (FQ-learning). Traditional Fuzzy Q-learning uses 0-order Takagi-Sugeno fuzzy inference. The main goal of this paper is to introduce Fuzzy Rule Interpolation (FRI), namely FIVE (Fuzzy rule Interpolation based on Vague Environment), as the model applied with Q-learning (FRIQ-learning). The paper also includes an application example: the well-known cart-pole (inverted pendulum) problem is used to demonstrate the applicability of the FIVE model in Q-learning.
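The tabular Q-learning method that the abstract describes as the starting point can be illustrated with a minimal sketch. The environment below (two states, two actions, a fixed reward) is hypothetical and chosen only to show the update rule Q(s,a) += α·(r + γ·max_a' Q(s',a') − Q(s,a)); it is not the paper's cart-pole setup, and the learning-rate and discount values are arbitrary:

```python
# Minimal tabular Q-learning sketch (illustrative toy environment, not the
# paper's cart-pole task). States, actions, and reward here are hypothetical.

def q_learning_update(Q, state, action, reward, next_state,
                      alpha=0.1, gamma=0.9):
    """One Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Tiny two-state example: from state 0, action "right" reaches the goal
# state 1 and earns reward 1; "left" is never rewarded.
states, actions = [0, 1], ["left", "right"]
Q = {s: {a: 0.0 for a in actions} for s in states}

for _ in range(100):
    q_learning_update(Q, state=0, action="right", reward=1.0, next_state=1)

# After repeated rewarded updates, "right" dominates "left" in state 0,
# so the greedy policy in state 0 selects "right".
```

The fuzzy extensions discussed in the paper (FQ-learning and FRIQ-learning) replace this discrete Q-table lookup with fuzzy inference, or with FIVE-based rule interpolation, so that Q-values are defined over continuous states.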
Pages: 45-49
Page count: 5
Related papers
50 items in total
  • [21] Weights-Learning for Weighted Fuzzy Rule Interpolation in Sparse Fuzzy Rule-Based Systems
    Chen, Shyi-Ming
    Chang, Yu-Chuan
    IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ 2011), 2011, : 346 - 351
  • [22] Learning of Keepaway Task for RoboCup Soccer Agent Based on Fuzzy Q-Learning
    Sawa, Toru
    Watanabe, Toshihiko
    2011 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2011, : 250 - 256
  • [23] Agent learning in simulated soccer by fuzzy Q-learning
    Takahashi, K
    Ueda, H
    Miyahara, T
    TENCON 2004 - 2004 IEEE REGION 10 CONFERENCE, VOLS A-D, PROCEEDINGS: ANALOG AND DIGITAL TECHNIQUES IN ELECTRICAL ENGINEERING, 2004, : B338 - B341
  • [24] Improved Fuzzy Q-Learning with Replay Memory
    Li, Xin
    Cohen, Kelly
    FUZZY INFORMATION PROCESSING 2020, 2022, 1337 : 13 - 23
  • [25] Fuzzy Q-learning Control for Temperature Systems
    Chen, Yeong-Chin
    Hung, Lon-Chen
    Syamsudin, Mariana
    22ND IEEE/ACIS INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING, ARTIFICIAL INTELLIGENCE, NETWORKING AND PARALLEL/DISTRIBUTED COMPUTING (SNPD 2021-FALL), 2021, : 148 - 151
  • [26] Parameter specification for fuzzy clustering by Q-learning
    Oh, CH
    Ikeda, E
    Honda, K
    Ichihashi, H
    IJCNN 2000: PROCEEDINGS OF THE IEEE-INNS-ENNS INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOL IV, 2000, : 9 - 12
  • [27] Extending Q-learning to fuzzy classifier systems
    Bonarini, A
    TOPICS IN ARTIFICIAL INTELLIGENCE, 1995, 992 : 25 - 36
  • [28] Decoupled Visual Servoing With Fuzzy Q-Learning
    Shi, Haobin
    Li, Xuesi
    Hwang, Kao-Shing
    Pan, Wei
    Xu, Genjiu
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2018, 14 (01) : 241 - 252
  • [29] Efficient implementation of dynamic fuzzy Q-learning
    Deng, C
    Er, MJ
    ICICS-PCM 2003, VOLS 1-3, PROCEEDINGS, 2003, : 1854 - 1858
  • [30] Implementation of fuzzy Q-learning for a soccer agent
    Nakashima, T
    Udo, M
    Ishibuchi, H
    PROCEEDINGS OF THE 12TH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1 AND 2, 2003, : 533 - 536