Parallel Implementation of Reinforcement Learning Q-Learning Technique for FPGA

Cited: 36
Authors
Da Silva, Lucileide M. D. [1 ]
Torquato, Matheus F. [2 ]
Fernandes, Marcelo A. C. [3 ]
Affiliations
[1] Fed Inst Rio Grande do Norte, Dept Comp Sci & Technol, BR-59200000 Santa Cruz, Brazil
[2] Swansea Univ, Coll Engn, Swansea SA2 8PP, W Glam, Wales
[3] Univ Fed Rio Grande do Norte, Dept Comp Engn & Automat, BR-59078970 Natal, RN, Brazil
Keywords
FPGA; Q-learning; reinforcement learning; reconfigurable computing; hardware; architecture; network
DOI
10.1109/ACCESS.2018.2885950
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Q-learning is an off-policy reinforcement learning technique whose main advantage is that an optimal policy can be obtained by interacting with an environment whose model is unknown. This paper proposes a parallel fixed-point Q-learning architecture implemented on a field-programmable gate array (FPGA), with a focus on optimizing processing time. Convergence results are presented, and processing time and occupied area are analyzed for scenarios with different numbers of states and actions and for various fixed-point formats. The accuracy of the Q-learning response and the resolution error associated with reducing the number of bits are also studied for the hardware implementation, and the implementation details of the architecture are described. The entire project was developed using the Xilinx System Generator platform, targeting a Virtex-6 xc6vcx240t-1ff1156 FPGA.
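To make concrete the two ideas the abstract combines, the sketch below shows a tabular Q-learning update whose table entries are rounded to a signed fixed-point grid after every step. This is an illustration only, not the paper's parallel FPGA architecture: the bit widths (4 integer, 12 fractional bits), the quantize and q_update helpers, and the toy random environment are all assumptions chosen for demonstration.

```python
# Minimal sketch: tabular Q-learning with fixed-point quantization.
# Hypothetical parameters throughout; not the paper's architecture.
import numpy as np

def quantize(x, int_bits=4, frac_bits=12):
    """Round to a signed fixed-point grid with the given integer/fraction bits."""
    scale = 2 ** frac_bits
    lo = -(2 ** (int_bits + frac_bits - 1))
    hi = 2 ** (int_bits + frac_bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One step of Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)),
    with the result stored back at fixed-point resolution."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] = quantize(Q[s, a] + alpha * (td_target - Q[s, a]))
    return Q

# Toy usage: 8 states, 4 actions, random transitions and rewards.
rng = np.random.default_rng(0)
Q = np.zeros((8, 4))
s = 0
for _ in range(1000):
    a = rng.integers(4)
    s_next, r = rng.integers(8), rng.random()
    Q = q_update(Q, s, a, r, s_next)
    s = s_next
```

Quantizing after each update mimics the resolution error that the paper analyzes as a function of the fixed-point format; in a hardware realization, the update and the max over next-state actions are the operations a parallel architecture can compute concurrently.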
Pages: 2782 - 2798
Page count: 17
Related Papers
50 records in total (items 41 - 50 shown)
  • [41] Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity
    Shi, Laixi
    Li, Gen
    Wei, Yuting
    Chen, Yuxin
    Chi, Yuejie
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [42] Heuristically accelerated Q-learning: A new approach to speed up reinforcement learning
    Bianchi, RAC
    Ribeiro, CHC
    Costa, AHR
    ADVANCES IN ARTIFICIAL INTELLIGENCE - SBIA 2004, 2004, 3171 : 245 - 254
  • [43] Backward Q-learning: The combination of Sarsa algorithm and Q-learning
    Wang, Yin-Hao
    Li, Tzuu-Hseng S.
    Lin, Chih-Jui
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2013, 26 (09) : 2184 - 2193
  • [44] Reinforcement Learning-Based Multihop Relaying: A Decentralized Q-Learning Approach
    Wang, Xiaowei
    Wang, Xin
    ENTROPY, 2021, 23 (10)
  • [45] Solving Twisty Puzzles Using Parallel Q-learning
    Hukmani, Kavish
    Kolekar, Sucheta
    Vobugari, Sreekumar
    ENGINEERING LETTERS, 2021, 29 (04) : 1535 - 1543
  • [46] Parallel Q-Learning for a block-pushing problem
    Laurent, G
    Piat, E
    IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4: EXPANDING THE SOCIETAL ROLE OF ROBOTICS IN THE NEXT MILLENNIUM, 2001: 286 - 291
  • [47] Time Horizon Generalization in Reinforcement Learning: Generalizing Multiple Q-Tables in Q-Learning Agents
    Hatcho, Yasuyo
    Hattori, Kiyohiko
    Takadama, Keiki
    JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2009, 13 (06) : 667 - 674
  • [48] Reinforcement Q-Learning and Neural Networks to Acquire Negotiation Behaviors
    Chohra, Amine
    Madani, Kurosh
    Kanzari, Dalel
    NEW CHALLENGES IN APPLIED INTELLIGENCE TECHNOLOGIES, 2008, 134 : 23 - 33
  • [49] Multiple-Model Q-Learning for Stochastic Reinforcement Delays
    Campbell, Jeffrey S.
    Givigi, Sidney N.
    Schwartz, Howard M.
    2014 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), 2014, : 1611 - 1617
  • [50] Constrained Deep Q-Learning Gradually Approaching Ordinary Q-Learning
    Ohnishi, Shota
    Uchibe, Eiji
    Yamaguchi, Yotaro
    Nakanishi, Kosuke
    Yasui, Yuji
    Ishii, Shin
    FRONTIERS IN NEUROROBOTICS, 2019, 13