A Provably Efficient Sample Collection Strategy for Reinforcement Learning

Cited by: 0
Authors
Tarbouriech, Jean [1 ,2 ]
Pirotta, Matteo [1 ]
Valko, Michal [3 ]
Lazaric, Alessandro [1 ]
Affiliations
[1] Facebook AI Research, Paris, France
[2] Inria Lille, Lille, France
[3] DeepMind Paris, Paris, France
Keywords
Regret bounds; Exploration
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off the exploration of the environment and the exploitation of the samples to optimize its behavior. Whether we optimize for regret, sample complexity, state-space coverage, or model estimation, we need to strike a different exploration-exploitation trade-off. In this paper, we propose to tackle the exploration-exploitation problem following a decoupled approach composed of: 1) an "objective-specific" algorithm that (adaptively) prescribes how many samples to collect at which states, as if it had access to a generative model (i.e., a simulator of the environment); 2) an "objective-agnostic" sample collection exploration strategy responsible for generating the prescribed samples as fast as possible. Building on recent methods for exploration in the stochastic shortest path problem, we first provide an algorithm that, given as input the number of samples $b(s,a)$ needed in each state-action pair, requires $\tilde{O}(BD + D^{3/2} S^2 A)$ time steps to collect the $B = \sum_{s,a} b(s,a)$ desired samples, in any unknown communicating MDP with $S$ states, $A$ actions, and diameter $D$. We then show how this general-purpose exploration algorithm can be paired with "objective-specific" strategies that prescribe the sample requirements to tackle a variety of settings (e.g., model estimation, sparse reward discovery, goal-free cost-free exploration in communicating MDPs), for which we obtain improved or novel sample complexity guarantees.
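To make the decoupled scheme concrete, the following is a minimal Python sketch of the objective-agnostic side only, under stated assumptions: it is not the paper's actual SSP-based algorithm, and the environment interface (env.reset, env.step), the explore_policy stub, and all names here are hypothetical. An objective-specific module supplies the requirement table b(s, a); the collector then loops until every requirement is met, the step budget for which the paper bounds by $\tilde{O}(BD + D^{3/2} S^2 A)$.

    from collections import defaultdict

    def collect_samples(env, b, explore_policy):
        # Objective-agnostic collector (illustrative sketch, not the paper's
        # algorithm): run until every state-action pair (s, a) has been
        # visited at least b[(s, a)] times, and return the transitions seen.
        counts = defaultdict(int)                       # visits per (s, a)
        remaining = {sa: n for sa, n in b.items() if n > 0}
        dataset = []                                    # collected (s, a, s') triples
        s = env.reset()                                 # assumed gym-like interface
        while remaining:
            # The exploration policy steers toward under-sampled pairs; in the
            # paper this role is filled by an SSP-based exploration strategy.
            a = explore_policy(s, remaining)
            s_next = env.step(a)                        # assumed to return next state
            counts[(s, a)] += 1
            dataset.append((s, a, s_next))
            if counts[(s, a)] >= b.get((s, a), 0):
                remaining.pop((s, a), None)             # requirement met
            s = s_next
        return dataset

An objective-specific module for, say, model estimation could set b(s, a) in proportion to the accuracy required for each transition-kernel estimate; the collector is then reused unchanged across objectives, which is the point of the decoupling.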
Pages: 14
Related Papers (50 in total)
  • [1] Jin, Ying; Yang, Zhuoran; Wang, Zhaoran. Is Pessimism Provably Efficient for Offline Reinforcement Learning? Mathematics of Operations Research, 2024.
  • [2] Li, Kuo; Jin, Xinze; Jia, Qing-Shan; Ren, Dongchun; Xia, Huaxia. An OCBA-Based Method for Efficient Sample Collection in Reinforcement Learning. IEEE Transactions on Automation Science and Engineering, 2024, 21(3): 3615-3626.
  • [3] Feng, Fei; Wang, Ruosong; Yin, Wotao; Du, Simon S.; Yang, Lin F. Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning. Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020.
  • [4] Jin, Chi; Yang, Zhuoran; Wang, Zhaoran; Jordan, Michael I. Provably Efficient Reinforcement Learning with Linear Function Approximation. Mathematics of Operations Research, 2023, 48(3): 1496-1521.
  • [5] Zhu, Hanlin; Wang, Ruosong; Lee, Jason D. Provably Efficient Reinforcement Learning via Surprise Bound. International Conference on Artificial Intelligence and Statistics, Vol. 206, 2023.
  • [6] Wang, Lingxiao; Yang, Zhuoran; Wang, Zhaoran. Provably Efficient Causal Reinforcement Learning with Confounded Observational Data. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.
  • [7] Uehara, Masatoshi; Sekhari, Ayush; Kallus, Nathan; Lee, Jason D.; Sun, Wen. Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
  • [8] Zhou, Dongruo; He, Jiafan; Gu, Quanquan. Provably Efficient Reinforcement Learning for Discounted MDPs with Feature Mapping. International Conference on Machine Learning, Vol. 139, 2021.
  • [9] Cui, Qiwen; Du, Simon S. Provably Efficient Offline Multi-agent Reinforcement Learning via Strategy-wise Bonus. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
  • [10] Cipollone, Roberto; Jonsson, Anders; Ronca, Alessandro; Talebi, Mohammad Sadegh. Provably Efficient Offline Reinforcement Learning in Regular Decision Processes. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.