Derivative-free reinforcement learning: a review

Cited by: 0
Authors
Hong Qian
Yang Yu
Affiliations
[1] Nanjing University, National Key Laboratory for Novel Software Technology
Source
Frontiers of Computer Science, 2021, 15 (06): 44 - 62
Keywords
reinforcement learning; derivative-free optimization; neuroevolution reinforcement learning; neural architecture search
DOI
Not available
Abstract
Reinforcement learning is about learning agent models that make the best sequential decisions in unknown environments. In an unknown environment, the agent needs to explore the environment while exploiting the collected information, which usually forms a sophisticated problem to solve. Derivative-free optimization, meanwhile, is capable of solving sophisticated problems. It commonly uses a sampling-and-updating framework to iteratively improve the solution, where exploration and exploitation also need to be well balanced. Therefore, derivative-free optimization deals with a core issue similar to that of reinforcement learning, and has been introduced into reinforcement learning approaches under the names of learning classifier systems and neuroevolution/evolutionary reinforcement learning. Although such methods have been developed for decades, derivative-free reinforcement learning has recently been attracting increasing attention. However, a recent survey on this topic is still lacking. In this article, we summarize methods of derivative-free reinforcement learning to date and organize them along several aspects, including parameter updating, model selection, exploration, and parallel/distributed methods. Moreover, we discuss current limitations and possible future directions, hoping that this article can bring more attention to this topic and serve as a catalyst for developing novel and efficient approaches.
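
The sampling-and-updating framework mentioned in the abstract can be pictured with a minimal sketch. The Python code below is not the survey's algorithm; it is a generic (1+lambda)-style random search over policy parameters, where episode_return is a hypothetical stand-in for rolling a policy out in a real environment, and the toy objective, dimensions, and all names are illustrative assumptions.

    import numpy as np

    # Toy stand-in for an episodic environment: the "return" of a linear
    # policy theta is highest when theta matches an unknown target vector.
    # In a real application this would be replaced by actual rollouts.
    rng = np.random.default_rng(0)
    TARGET = rng.normal(size=8)

    def episode_return(theta: np.ndarray) -> float:
        # Negative squared distance plus noise, mimicking a stochastic rollout.
        return -float(np.sum((theta - TARGET) ** 2)) + rng.normal(scale=0.1)

    def derivative_free_search(dim: int = 8, iters: int = 200,
                               pop_size: int = 20, sigma: float = 0.5):
        # Simple sampling-and-updating loop (random-search / (1+lambda)-ES style).
        theta = np.zeros(dim)                 # current policy parameters
        best_return = episode_return(theta)
        for _ in range(iters):
            # Sampling: perturb the current solution (exploration).
            candidates = theta + sigma * rng.normal(size=(pop_size, dim))
            returns = np.array([episode_return(c) for c in candidates])
            # Updating: keep the best candidate found so far (exploitation).
            if returns.max() > best_return:
                best_return = returns.max()
                theta = candidates[returns.argmax()]
        return theta, best_return

    if __name__ == "__main__":
        theta, ret = derivative_free_search()
        print("best return:", round(ret, 3))

The same structure underlies more refined derivative-free methods (e.g., evolution strategies), which differ mainly in how the perturbations are sampled and how the parameters are updated from the evaluated candidates.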
Related papers (50 in total)
  • [1] Derivative-free reinforcement learning: a review
    Qian, Hong
    Yu, Yang
    Frontiers of Computer Science, 2021, 15 (06): 44 - 62
  • [2] Reinforcement Learning with Derivative-Free Exploration
    Chen, Xiong-Hui
    Yu, Yang
    AAMAS '19: Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems, 2019: 1880 - 1882
  • [3] Combining Local and Global Direct Derivative-free Optimization for Reinforcement Learning
    Leonetti, Matteo
    Kormushev, Petar
    Sagratella, Simone
    Cybernetics and Information Technologies, 2012, 12 (03): 53 - 65
  • [4] Performance-Driven Controller Tuning via Derivative-Free Reinforcement Learning
    Lei, Yuheng
    Chen, Jianyu
    Li, Shengbo Eben
    Zheng, Sifa
    2022 IEEE 61st Conference on Decision and Control (CDC), 2022: 115 - 122
  • [5] Inexact Derivative-Free Optimization for Bilevel Learning
    Ehrhardt, Matthias J.
    Roberts, Lindon
    Journal of Mathematical Imaging and Vision, 2021, 63 (05): 580 - 600
  • [6] Distributed Reinforcement Learning for Decentralized Linear Quadratic Control: A Derivative-Free Policy Optimization Approach
    Li, Yingying
    Tang, Yujie
    Zhang, Runyu
    Li, Na
    IEEE Transactions on Automatic Control, 2022, 67 (12): 6429 - 6444
  • [7] Learning to select the recombination operator for derivative-free optimization
    Zhang, Haotian
    Sun, Jianyong
    Bäck, Thomas
    Xu, Zongben
    Science China Mathematics, 2024, 67 (06): 1457 - 1480
  • [8] Accelerated Derivative-Free Deep Reinforcement Learning for Large-Scale Grid Emergency Voltage Control
    Huang, Renke
    Chen, Yujiao
    Yin, Tianzhixi
    Li, Xinya
    Li, Ang
    Tan, Jie
    Yu, Wenhao
    Liu, Yuan
    Huang, Qiuhua
    IEEE Transactions on Power Systems, 2022, 37 (01): 14 - 25