Parallel Implementation of Reinforcement Learning Q-Learning Technique for FPGA

Cited by: 36
Authors
Da Silva, Lucileide M. D. [1 ]
Torquato, Matheus F. [2 ]
Fernandes, Marcelo A. C. [3 ]
Affiliations
[1] Fed Inst Rio Grande do Norte, Dept Comp Sci & Technol, BR-59200000 Santa Cruz, Brazil
[2] Swansea Univ, Coll Engn, Swansea SA2 8PP, W Glam, Wales
[3] Univ Fed Rio Grande do Norte, Dept Comp Engn & Automat, BR-59078970 Natal, RN, Brazil
Keywords
FPGA; Q-learning; reinforcement learning; reconfigurable computing; HARDWARE; ARCHITECTURE; NETWORK;
DOI
10.1109/ACCESS.2018.2885950
CLC Number (Chinese Library Classification)
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Q-learning is an off-policy reinforcement learning technique whose main advantage is that it can obtain an optimal policy while interacting with an environment whose model is unknown. This paper proposes a parallel fixed-point Q-learning architecture implemented on a field-programmable gate array (FPGA), with a focus on minimizing the system's processing time. Convergence results are presented, and processing time and occupied area are analyzed for scenarios with different numbers of states and actions and for various fixed-point formats. The accuracy of the Q-learning response and the resolution error caused by reducing the number of bits are also studied for the hardware implementation, and implementation details of the architecture are described. The entire project was developed on the Xilinx System Generator platform, with a Virtex-6 xc6vcx240t-1ff1156 as the target FPGA.
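The abstract combines two ideas: the standard off-policy Q-learning update and fixed-point storage of the Q-values, whose reduced bit width introduces a resolution error. A minimal software sketch of that combination is shown below; the toy chain environment, constants, and helper names are illustrative assumptions, not taken from the paper's architecture, which executes these updates in parallel hardware rather than sequentially.

```python
import random

# Toy chain environment (an assumption for illustration): action 1 moves
# the agent right, action 0 moves it left; reaching the last state yields
# reward 1 and ends the episode.
N_STATES, N_ACTIONS = 8, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    """Deterministic toy chain environment."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def quantize(x, frac_bits=8):
    """Snap a value to a fixed-point grid with `frac_bits` fractional bits,
    mimicking the resolution loss a narrow hardware word introduces."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

random.seed(0)
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

for episode in range(500):
    s = 0
    for _ in range(200):                      # step cap keeps episodes finite
        if random.random() < EPSILON:         # epsilon-greedy behaviour policy
            a = random.randrange(N_ACTIONS)
        else:                                 # greedy, with random tie-breaking
            m = max(Q[s])
            a = random.choice([i for i in range(N_ACTIONS) if Q[s][i] == m])
        s2, r, done = step(s, a)
        # Off-policy Q-learning update, quantized after each step:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] = quantize(Q[s][a] + ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a]))
        s = s2
        if done:
            break

# The learned greedy policy should move right in every non-terminal state.
policy = [q.index(max(q)) for q in Q]
```

The update is off-policy because it bootstraps from the greedy `max` over the next state's Q-values regardless of which action the epsilon-greedy behaviour policy actually took; quantizing after each update is one plausible way to model the bit-width/accuracy trade-off the paper analyzes.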
Pages: 2782-2798
Page count: 17
Related Papers
50 records in total
  • [31] Reinforcement Learning-Based Load Forecasting of Electric Vehicle Charging Station Using Q-Learning Technique
    Dabbaghjamanesh, Morteza
    Moeini, Amirhossein
    Kavousi-Fard, Abdollah
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2021, 17 (06) : 4229 - 4237
  • [32] The Sample Complexity of Teaching-by-Reinforcement on Q-Learning
    Zhang, Xuezhou
    Bharti, Shubham Kumar
    Ma, Yuzhe
    Singla, Adish
    Zhu, Xiaojin
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 10939 - 10947
  • [33] Q-Learning
    Watkins, C. J. C. H.
    Dayan, P.
    MACHINE LEARNING, 1992, 8 (3-4) : 279 - 292
  • [34] Bandit Approach to Conflict-Free Parallel Q-Learning in View of Photonic Implementation
    Shinkawa, Hiroaki
    Chauvet, Nicolas
    Röhm, André
    Mihana, Takatomo
    Horisaki, Ryoichi
    Bachelier, Guillaume
    Naruse, Makoto
    INTELLIGENT COMPUTING, 2023, 2
  • [35] Learning rates for Q-Learning
    Even-Dar, E
    Mansour, Y
    COMPUTATIONAL LEARNING THEORY, PROCEEDINGS, 2001, 2111 : 589 - 604
  • [36] Learning rates for Q-learning
    Even-Dar, E
    Mansour, Y
    JOURNAL OF MACHINE LEARNING RESEARCH, 2003, 5 : 1 - 25
  • [37] Reinforcement Learning for Automatic Parameter Tuning in Apache Spark: A Q-Learning Approach
    Deng, Mei
    Huang, Zirui
    Ren, Zhigang
    2024 14TH ASIAN CONTROL CONFERENCE, ASCC 2024, 2024, : 13 - 18
  • [38] Deep Q-Learning Based Reinforcement Learning Approach for Network Intrusion Detection
    Alavizadeh, Hooman
    Alavizadeh, Hootan
    Jang-Jaccard, Julian
    COMPUTERS, 2022, 11 (03)
  • [39] Designing a Fuzzy Q-Learning Power Energy System Using Reinforcement Learning
    J A.
    Konduru S.
    Kura V.
    NagaJyothi G.
    Dudi B.P.
    Mani Naidu S.
    INTERNATIONAL JOURNAL OF FUZZY SYSTEM APPLICATIONS, 2022, 11 (03)
  • [40] Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning
    Omura, Motoki
    Osa, Takayuki
    Mukuta, Yusuke
    Harada, Tatsuya
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 13, 2024, : 14474 - 14481