A Provably Efficient Sample Collection Strategy for Reinforcement Learning

Citations: 0
Authors:
Tarbouriech, Jean [1,2]
Pirotta, Matteo [1]
Valko, Michal [3]
Lazaric, Alessandro [1]
Affiliations:
[1] Facebook AI Research, Paris, France
[2] Inria Lille, Lille, France
[3] DeepMind Paris, Paris, France
Keywords: REGRET BOUNDS; EXPLORATION
DOI: N/A
CLC Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract:
One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off the exploration of the environment and the exploitation of the samples to optimize its behavior. Whether we optimize for regret, sample complexity, state-space coverage or model estimation, we need to strike a different exploration-exploitation trade-off. In this paper, we propose to tackle the exploration-exploitation problem following a decoupled approach composed of: 1) an "objective-specific" algorithm that (adaptively) prescribes how many samples to collect at which states, as if it had access to a generative model (i.e., a simulator of the environment); 2) an "objective-agnostic" sample collection exploration strategy responsible for generating the prescribed samples as fast as possible. Building on recent methods for exploration in the stochastic shortest path problem, we first provide an algorithm that, given as input the number of samples b(s, a) needed in each state-action pair, requires Õ(BD + D^{3/2} S^2 A) time steps to collect the B = Σ_{(s,a)} b(s, a) desired samples, in any unknown communicating MDP with S states, A actions and diameter D. Then we show how this general-purpose exploration algorithm can be paired with "objective-specific" strategies that prescribe the sample requirements to tackle a variety of settings (e.g., model estimation, sparse reward discovery, goal-free cost-free exploration in communicating MDPs), for which we obtain improved or novel sample complexity guarantees.
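The decoupled interface described above can be illustrated with a minimal sketch: an "objective-specific" module prescribes a budget b(s, a), and an "objective-agnostic" collector interacts with the environment until every requirement is met. This is a toy illustration only, not the paper's algorithm: the environment (a small ring MDP), the slip probability, and the collector's use of known dynamics are all assumptions made here for brevity, whereas the paper's strategy must plan in a learned model using shortest-path techniques.

```python
import random

# Toy communicating MDP: a ring of S states; action 0 moves right,
# action 1 moves left, each with a small slip probability.
S, A = 5, 2          # states, actions (hypothetical sizes)
SLIP = 0.1           # slip probability (hypothetical parameter)

def step(state, action, rng):
    """One environment transition on the ring."""
    direction = 1 if action == 0 else -1
    if rng.random() < SLIP:
        direction = -direction
    return (state + direction) % S

def prescribe_uniform(n_per_pair):
    """'Objective-specific' module: e.g. model estimation may request a
    uniform budget b(s, a) = n_per_pair over all state-action pairs."""
    return {(s, a): n_per_pair for s in range(S) for a in range(A)}

def collect(b, seed=0):
    """'Objective-agnostic' collector: act until every remaining
    requirement in b is satisfied. Here we cheat and exploit known
    dynamics (take an under-sampled action at the current state if one
    exists, otherwise drift right); the paper's strategy instead plans
    with stochastic-shortest-path methods in an unknown MDP."""
    rng = random.Random(seed)
    counts = {sa: 0 for sa in b}
    state, steps = 0, 0
    while any(counts[sa] < b[sa] for sa in b):
        needy = [(s, a) for (s, a) in b if counts[(s, a)] < b[(s, a)]]
        here = [a for (s, a) in needy if s == state]
        action = here[0] if here else 0
        if counts[(state, action)] < b[(state, action)]:
            counts[(state, action)] += 1   # one prescribed sample gathered
        state = step(state, action, rng)
        steps += 1
    return counts, steps

b = prescribe_uniform(3)          # request b(s, a) = 3 everywhere
counts, steps = collect(b)        # steps >= B = sum of all b(s, a)
```

The point of the split is that `prescribe_uniform` can be swapped for any other requirement function (sparse reward discovery, goal-free exploration, etc.) without touching the collector.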
Pages: 14
Related Papers (50 records)
  • [21] Guo, Hongyi; Cai, Qi; Zhang, Yufeng; Yang, Zhuoran; Wang, Zhaoran. Provably Efficient Offline Reinforcement Learning for Partially Observable Markov Decision Processes. International Conference on Machine Learning, Vol. 162, 2022.
  • [22] Mutti, Mirco; De Santi, Riccardo; Rossi, Emanuele; Calderon, Juan Felipe; Bronstein, Michael; Restelli, Marcello. Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization. Thirty-Seventh AAAI Conference on Artificial Intelligence, Vol. 37, No. 8, 2023, pp. 9251-9259.
  • [23] Qiu, Shuang; Wang, Lingxiao; Bai, Chenjia; Yang, Zhuoran; Wang, Zhaoran. Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning. International Conference on Machine Learning, Vol. 162, 2022.
  • [24] Dodds, Michael; Guo, Jeff; Loehr, Thomas; Tibo, Alessandro; Engkvist, Ola; Janet, Jon Paul. Sample Efficient Reinforcement Learning with Active Learning for Molecular Design. Chemical Science, 2024, 15(11): 4146-4160.
  • [25] Guo, Siyuan; Zou, Lixin; Chen, Hechang; Qu, Bohao; Chi, Haotian; Yu, Philip S.; Chang, Yi. Sample Efficient Offline-to-Online Reinforcement Learning. IEEE Transactions on Knowledge and Data Engineering, 2024, 36(03): 1299-1310.
  • [26] Moridian, Barzin; Page, Brian R.; Mahmoudian, Nina. Sample Efficient Reinforcement Learning for Navigation in Complex Environments. 2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), 2019, pp. 15-21.
  • [27] Chang, Timothy; Neshatian, Kourosh; Atlas, James. Sample Efficient Hierarchical Reinforcement Learning for the Game of Othello. Proceedings of Ninth International Congress on Information and Communication Technology, Vol. 9, ICICT 2024, 2025, 1054: 419-430.
  • [28] Jin, Chi; Kakade, Sham M.; Krishnamurthy, Akshay; Liu, Qinghua. Sample-Efficient Reinforcement Learning of Undercomplete POMDPs. Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020, 33.
  • [29] Alharbi, Meshal; Roozbehani, Mardavij; Dahleh, Munther. Sample Efficient Reinforcement Learning with Partial Dynamics Knowledge. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 10, 2024, pp. 10804-10811.
  • [30] Zhang, Liangyu; Peng, Yang; Yang, Wenhao; Zhang, Zhihua. Semi-Infinitely Constrained Markov Decision Processes and Provably Efficient Reinforcement Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(05): 3722-3735.