Fast active learning for pure exploration in reinforcement learning

Citations: 0
Authors
Menard, Pierre [1 ]
Domingues, Omar Darwiche [2 ]
Kaufmann, Emilie [2 ,3 ]
Jonsson, Anders [4 ]
Leurent, Edouard [2 ]
Valko, Michal [2 ,3 ,5 ]
Affiliations
[1] Otto von Guericke Univ, Magdeburg, Germany
[2] Inria, Paris, France
[3] Univ Lille, Lille, France
[4] Univ Pompeu Fabra, Barcelona, Spain
[5] DeepMind Paris, Paris, France
Keywords
BOUNDS;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Realistic environments often provide agents with very limited feedback. When the environment is initially unknown, the feedback can be completely absent in the beginning, and the agents may first choose to devote all their effort to exploring efficiently. Exploration remains a challenge: it has been addressed with many hand-tuned heuristics of varying generality on one side, and a few theoretically backed exploration strategies on the other. Many of them are incarnated by intrinsic motivation and, in particular, exploration bonuses. A common choice is a 1/√n bonus, where n is the number of times the particular state-action pair has been visited. We show that, surprisingly, for the pure-exploration objective of reward-free exploration, bonuses that scale with 1/n bring faster learning rates, improving the known upper bounds with respect to the dependence on the horizon H. Furthermore, we show that with an improved analysis of the stopping time, we can improve by a factor H the sample complexity in the best-policy identification setting, which is another pure-exploration objective, where the environment provides rewards but the agent is not penalized for its behavior during the exploration phase.
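For intuition only (not part of the record, and not the paper's exact bonus definitions), the sketch below contrasts the two count-based bonus scalings mentioned in the abstract: the classic 1/√n bonus and the 1/n bonus that the abstract associates with faster rates. The constants, the horizon H, and the log(1/δ) factor are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of two count-based exploration bonuses as functions of the
# visit count n of a state-action pair. The scaling (1/sqrt(n) vs. 1/n) is
# the point; the multiplicative constants are hypothetical.

def bonus_sqrt(n: int, horizon: int = 10, delta: float = 0.1) -> float:
    """Classic bonus scaling with 1/sqrt(n)."""
    n = max(n, 1)
    return horizon * np.sqrt(np.log(1.0 / delta) / n)

def bonus_fast(n: int, horizon: int = 10, delta: float = 0.1) -> float:
    """Bonus scaling with 1/n, which shrinks much faster as a pair is revisited."""
    n = max(n, 1)
    return horizon * np.log(1.0 / delta) / n

if __name__ == "__main__":
    # The 1/n bonus decays an order of magnitude faster for well-visited pairs.
    for n in (1, 10, 100, 1000):
        print(f"n={n:4d}  1/sqrt(n): {bonus_sqrt(n):8.4f}  1/n: {bonus_fast(n):8.4f}")
```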
Pages: 10