A Provably Efficient Sample Collection Strategy for Reinforcement Learning

Cited by: 0
Authors
Tarbouriech, Jean [1 ,2 ]
Pirotta, Matteo [1 ]
Valko, Michal [3 ]
Lazaric, Alessandro [1 ]
Affiliations
[1] Facebook AI Research, Paris, France
[2] Inria Lille, Lille, France
[3] DeepMind Paris, Paris, France
Keywords
Regret bounds; Exploration
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off the exploration of the environment and the exploitation of the samples to optimize its behavior. Whether we optimize for regret, sample complexity, state-space coverage or model estimation, we need to strike a different exploration-exploitation trade-off. In this paper, we propose to tackle the exploration-exploitation problem following a decoupled approach composed of: 1) an "objective-specific" algorithm that (adaptively) prescribes how many samples to collect at which states, as if it had access to a generative model (i.e., a simulator of the environment); 2) an "objective-agnostic" sample collection exploration strategy responsible for generating the prescribed samples as fast as possible. Building on recent methods for exploration in the stochastic shortest path problem, we first provide an algorithm that, given as input the number of samples $b(s,a)$ needed in each state-action pair, requires $\tilde{O}(BD + D^{3/2} S^2 A)$ time steps to collect the $B = \sum_{s,a} b(s,a)$ desired samples, in any unknown communicating MDP with $S$ states, $A$ actions, and diameter $D$. Then we show how this general-purpose exploration algorithm can be paired with "objective-specific" strategies that prescribe the sample requirements to tackle a variety of settings (e.g., model estimation, sparse reward discovery, goal-free cost-free exploration in communicating MDPs), for which we obtain improved or novel sample complexity guarantees.
Pages: 14
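The decoupled design described in the abstract can be illustrated with a minimal Python sketch (ours, not the authors'): an "objective-specific" routine prescribes per-pair sample counts $b(s,a)$, and an "objective-agnostic" loop collects them in an unknown MDP. The RandomMDP environment and the greedy/random-walk routing below are illustrative assumptions; the paper's algorithm instead routes to under-sampled state-action pairs by planning optimistic stochastic-shortest-path policies, which is what yields the stated time-step bound.

import numpy as np

# Minimal sketch, not the paper's algorithm. RandomMDP and the greedy
# routing heuristic are illustrative assumptions; the paper reaches
# under-sampled pairs via optimistic stochastic-shortest-path planning.

class RandomMDP:
    """Toy tabular communicating MDP: S states, A actions, random kernel."""
    def __init__(self, S, A, seed=0):
        self.rng = np.random.default_rng(seed)
        # P[s, a] is a probability distribution over next states.
        self.P = self.rng.dirichlet(np.ones(S), size=(S, A))
        self.S, self.A, self.s = S, A, 0

    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s = int(self.rng.choice(self.S, p=self.P[self.s, a]))
        return self.s

def collect_samples(env, b, max_steps):
    """Objective-agnostic collection: run until every (s, a) has been
    visited at least b[s, a] times, or the step budget runs out."""
    counts = np.zeros_like(b, dtype=float)
    s = env.reset()
    for _ in range(max_steps):
        remaining = np.maximum(b - counts, 0)
        if remaining.sum() == 0:
            break  # all prescribed samples collected
        if remaining[s].sum() > 0:
            a = int(np.argmax(remaining[s]))  # under-sampled action here
        else:
            a = int(env.rng.integers(env.A))  # random walk toward other states
        counts[s, a] += 1
        s = env.step(a)
    return counts

# Usage: prescribe 3 samples per state-action pair (a "model estimation"
# style requirement from the objective-specific side) and collect them.
env = RandomMDP(S=5, A=2)
b = np.full((5, 2), 3.0)
counts = collect_samples(env, b, max_steps=10_000)
assert (counts >= b).all()  # may fail only if max_steps is too small

The point of the interface is that collect_samples never sees the downstream objective: swapping model estimation for sparse reward discovery only changes how b is computed, not how samples are gathered.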