A Reinforcement Learning Method for Continuous Domains Using Artificial Hydrocarbon Networks

Cited by: 0
Authors
Ponce, Hiram [1 ]
Gonzalez-Mora, Guillermo [1 ]
Martinez-Villasenor, Lourdes [1 ]
Affiliations
[1] Univ Panamer, Fac Ingn, Augusto Rodin 498, Ciudad De Mexico 03920, Mexico
Keywords
reinforcement learning; artificial hydrocarbon networks; artificial organic networks; continuous domain; policy search;
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning with continuous states and actions has received limited study, owing to difficulties in determining the transition function and the poor performance of continuous-to-discrete relaxation approaches, among other issues. Yet real-world problems, e.g. robotics, require such methods for learning complex tasks. Thus, in this paper, we propose a method for reinforcement learning with continuous states and actions using a model-based approach learned with artificial hydrocarbon networks (AHN). The proposed method models the dynamics of the continuous task with the supervised AHN method. Initial random rollouts and subsequent data collection from policy evaluation improve the training of the AHN-based dynamics model. Preliminary results on the well-known mountain car task show that artificial hydrocarbon networks can contribute to model-based approaches in continuous RL problems in both estimation efficiency (0.0012 root mean squared error) and sub-optimal policy convergence (reached in 357 steps), in just 5 trials over a parameter space θ ∈ R^86. Data from the experimental results are available at: http://sites.google.com/up.edu.mx/reinforcementlearning/.
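The abstract outlines a model-based loop: collect initial random rollouts, fit a supervised dynamics model to the observed transitions, and refine the model with data gathered during policy evaluation. A minimal sketch of that loop, using an ordinary least-squares regressor as a stand-in for the AHN dynamics model (the toy 1-D dynamics, function names, and random policy below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def collect_rollout(env_step, policy, state, horizon):
    # Roll the policy out in the real environment, recording (s, a, s') tuples.
    data = []
    for _ in range(horizon):
        action = policy(state)
        next_state = env_step(state, action)
        data.append((state, action, next_state))
        state = next_state
    return data

def fit_dynamics(data):
    # Fit a linear least-squares model s' ~ [s; a; 1] @ W to the transitions.
    # (A deliberately simple stand-in for the supervised AHN regressor.)
    X = np.array([np.concatenate([s, a, [1.0]]) for s, a, _ in data])
    Y = np.array([sp for _, _, sp in data])
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return lambda s, a: np.concatenate([s, a, [1.0]]) @ W

# Toy 1-D dynamics and a random exploration policy, for illustration only.
rng = np.random.default_rng(0)
true_step = lambda s, a: 0.9 * s + 0.1 * a
policy = lambda s: rng.uniform(-1.0, 1.0, size=1)

data = collect_rollout(true_step, policy, np.zeros(1), horizon=50)
model = fit_dynamics(data)
```

In the full method, the learned `model` would then be used for policy search, and transitions from evaluating the improved policy would be appended to `data` to retrain the dynamics model.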
Pages: 398 - 403
Page count: 6
Related Papers
50 records total
  • [21] Accelerating Reinforcement Learning for Reaching Using Continuous Curriculum Learning
    Luo, Sha
    Kasaei, Hamidreza
    Schomaker, Lambert
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [22] Reinforcement Learning in Card Game Environments Using Monte Carlo Methods and Artificial Neural Networks
    Baykal, Omer
    Alpaslan, Ferda Nur
    2019 4TH INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND ENGINEERING (UBMK), 2019, : 618 - 623
  • [23] A reinforcement learning algorithm for continuous state spaces using multiple Fuzzy-ART networks
    Tateyama, Takeshi
    Kawata, Seiichi
    Shimomura, Yoshiki
    2006 SICE-ICASE INTERNATIONAL JOINT CONFERENCE, VOLS 1-13, 2006, : 88 - +
  • [24] TD based reinforcement learning using neural networks in control problems with continuous action space
    Lee, JH
    Oh, SY
    Choi, DH
    IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE, 1998, : 2028 - 2033
  • [25] A PID Control Algorithm With Adaptive Tuning Using Continuous Artificial Hydrocarbon Networks for a Two-Tank System
    Sanchez-Palma, Jesus
    Ordonez-Avila, Jose Luis
    IEEE ACCESS, 2022, 10 : 114694 - 114710
  • [26] Reinforcement Learning using Associative Memory Networks
    Salmon, Ricardo
    Sadeghian, Alireza
    Chartier, Sylvain
    2010 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS IJCNN 2010, 2010,
  • [27] Game Artificial Intelligence Based Using Reinforcement Learning
    Agung, Albertus
    Gaol, Ford Lumban
    INTERNATIONAL CONFERENCE ON ADVANCES SCIENCE AND CONTEMPORARY ENGINEERING 2012, 2012, 50 : 555 - 565
  • [28] Reinforcement learning using continuous states and interactive feedback
    Ayala, Angel
    Henriquez, Claudio
    Cruz, Francisco
    PROCEEDINGS OF 2ND INTERNATIONAL CONFERENCE ON APPLICATIONS OF INTELLIGENT SYSTEMS (APPIS 2019), 2019,
  • [29] Adaptive Reinforcement Learning Method for Networks-on-Chip
    Farahnakian, Fahimeh
    Ebrahimi, Masoumeh
    Daneshtalab, Masoud
    Plosila, Juha
    Liljeberg, Pasi
    2012 INTERNATIONAL CONFERENCE ON EMBEDDED COMPUTER SYSTEMS (SAMOS): ARCHITECTURES, MODELING AND SIMULATION, 2012, : 236 - 243
  • [30] Reinforcement Learning Applied to the Optimization of Power Delivery Networks with Multiple Voltage Domains
    Han, Seunghyup
    Bhatti, Osama Waqar
    Na, Woo-Jin
    Swaminathan, Madhavan
    2023 IEEE MTT-S INTERNATIONAL CONFERENCE ON NUMERICAL ELECTROMAGNETIC AND MULTIPHYSICS MODELING AND OPTIMIZATION, NEMO, 2023, : 147 - 150