Fast active learning for pure exploration in reinforcement learning

Times Cited: 0
Authors
Menard, Pierre [1 ]
Domingues, Omar Darwiche [2 ]
Kaufmann, Emilie [2 ,3 ]
Jonsson, Anders [4 ]
Leurent, Edouard [2 ]
Valko, Michal [2 ,3 ,5 ]
Affiliations
[1] Otto von Guericke Univ, Magdeburg, Germany
[2] Inria, Paris, France
[3] Univ Lille, Lille, France
[4] Univ Pompeu Fabra, Barcelona, Spain
[5] DeepMind Paris, Paris, France
Keywords
BOUNDS;
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Realistic environments often provide agents with very limited feedback. When the environment is initially unknown, the feedback can be completely absent in the beginning, and the agents may first choose to devote all their effort to exploring efficiently. Exploration remains a challenge: it has been addressed with many hand-tuned heuristics of varying generality on one side, and with a few theoretically backed exploration strategies on the other. Many of them are incarnated by intrinsic motivation and, in particular, exploration bonuses. A common choice is a 1/√n bonus, where n is the number of times a particular state-action pair has been visited. We show that, surprisingly, for the pure-exploration objective of reward-free exploration, bonuses that scale with 1/n bring faster learning rates, improving the known upper bounds with respect to the dependence on the horizon H. Furthermore, we show that an improved analysis of the stopping time lets us improve the sample complexity by a factor of H in the best-policy identification setting, another pure-exploration objective in which the environment provides rewards but the agent is not penalized for its behavior during the exploration phase.
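For readers unfamiliar with count-based exploration bonuses, the sketch below illustrates the two bonus shapes the abstract contrasts: the classical 1/√n bonus and the 1/n bonus studied in the paper. It is a minimal toy illustration, not the authors' algorithm; the random tabular MDP, the horizon, and the function names (bonus, reward_free_exploration) are assumptions made purely for this example.

    import numpy as np

    # Toy illustration of the two count-based exploration bonuses contrasted in
    # the abstract: 1/sqrt(n) versus 1/n, where n is a state-action visit count.
    # NOT the authors' algorithm: the random tabular MDP, horizon, and planning
    # on the true model (instead of the empirical one) are simplifications.

    S, A, H = 5, 2, 10                          # toy state/action space and horizon
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] = distribution over next states

    def bonus(n, kind):
        """Exploration bonus as a function of the visit count n (elementwise, n >= 1)."""
        return 1.0 / np.sqrt(n) if kind == "sqrt" else 1.0 / n

    def reward_free_exploration(episodes, kind):
        """Explore with bonuses only (no reward), greedily maximising optimistic value."""
        counts = np.ones((S, A))                # visit counts, start at 1 to avoid division by zero
        for _ in range(episodes):
            # Backward induction with the bonus as the only "reward".
            W = np.zeros(S)
            policy = np.zeros((H, S), dtype=int)
            for h in reversed(range(H)):
                Q = bonus(counts, kind) + P @ W  # optimistic Q-values, shape (S, A)
                policy[h] = Q.argmax(axis=1)
                W = Q.max(axis=1)
            # Roll out the greedy policy and update the visit counts.
            s = 0
            for h in range(H):
                a = policy[h, s]
                counts[s, a] += 1
                s = rng.choice(S, p=P[s, a])
        return counts

    for kind in ("sqrt", "inv"):
        visits = reward_free_exploration(episodes=200, kind=kind)
        print(f"{kind:>4} bonus -> least-visited (s, a) count: {int(visits.min())}")

The printed minimum visit count is only a rough coverage indicator for the toy run; the paper's actual contribution concerns sample-complexity bounds and their dependence on the horizon H, not this toy measure.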
Pages: 10
Related Papers
50 records in total
  • [41] Active Exploration Deep Reinforcement Learning for Continuous Action Space with Forward Prediction
    Zhao, Dongfang
    Huanshi, Xu
    Xun, Zhang
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)
  • [43] Reinforcement Learning or Active Inference?
    Friston, Karl J.
    Daunizeau, Jean
    Kiebel, Stefan J.
    PLOS ONE, 2009, 4 (07):
  • [44] Active Perception and Reinforcement Learning
    Whitehead, Steven D.
    Ballard, Dana H.
    NEURAL COMPUTATION, 1990, 2 (04) : 409 - 419
  • [45] Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning
    Feng, Fei
    Wang, Ruosong
    Yin, Wotao
    Du, Simon S.
    Yang, Lin F.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [46] Learning Task Decomposition and Exploration Shaping for Reinforcement Learning Agents
    Djurdjevic, Predrag
    Huber, Manfred
    2008 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), VOLS 1-6, 2008, : 365 - 372
  • [47] Learning Transferable Domain Priors for Safe Exploration in Reinforcement Learning
    Karimpanal, Thommen George
    Rana, Santu
    Gupta, Sunil
    Truyen Tran
    Venkatesh, Svetha
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [48] Learning to soar: Resource-constrained exploration in reinforcement learning
    Chung, Jen Jen
    Lawrance, Nicholas R. J.
    Sukkarieh, Salah
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2015, 34 (02): : 158 - 172
  • [49] Active learning of causal structures with deep reinforcement learning
    Amirinezhad, Amir
    Salehkaleybar, Saber
    Hashemi, Matin
    NEURAL NETWORKS, 2022, 154 : 22 - 30
  • [50] Reinforcement Learning for Data Preparation with Active Reward Learning
    Berti-Equille, Laure
    INTERNET SCIENCE, INSCI 2019, 2019, 11938 : 121 - 132