A Provably Efficient Sample Collection Strategy for Reinforcement Learning

Cited: 0
Authors
Tarbouriech, Jean [1 ,2 ]
Pirotta, Matteo [1 ]
Valko, Michal [3 ]
Lazaric, Alessandro [1 ]
Affiliations
[1] Facebook AI Research, Paris, France
[2] Inria Lille, Lille, France
[3] DeepMind, Paris, France
Keywords
Regret bounds; Exploration
DOI
Not available
Chinese Library Classification (CLC)
TP18 (Artificial Intelligence Theory)
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off exploration of the environment against exploitation of the collected samples to optimize its behavior. Whether we optimize for regret, sample complexity, state-space coverage, or model estimation, we need to strike a different exploration-exploitation trade-off. In this paper, we propose to tackle the exploration-exploitation problem with a decoupled approach composed of: 1) an "objective-specific" algorithm that (adaptively) prescribes how many samples to collect at which states, as if it had access to a generative model (i.e., a simulator of the environment); 2) an "objective-agnostic" sample collection exploration strategy responsible for generating the prescribed samples as fast as possible. Building on recent methods for exploration in the stochastic shortest path problem, we first provide an algorithm that, given as input the number of samples b(s, a) needed in each state-action pair, requires Õ(BD + D^{3/2} S^2 A) time steps to collect the B = Σ_{(s,a)} b(s, a) desired samples, in any unknown communicating MDP with S states, A actions, and diameter D. Then we show how this general-purpose exploration algorithm can be paired with "objective-specific" strategies that prescribe the sample requirements to tackle a variety of settings (e.g., model estimation, sparse reward discovery, goal-free cost-free exploration in communicating MDPs), for which we obtain improved or novel sample complexity guarantees.
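To make the decoupled interface concrete, below is a minimal runnable Python/NumPy sketch of the two-component loop the abstract describes. It is not the paper's algorithm: the uniform prescriber and the random-walk collector are hypothetical placeholders that only illustrate the interface (prescribe b(s, a), then collect the B samples); the paper's objective-agnostic strategy instead plans goal-reaching policies so that collection finishes within the Õ(BD + D^{3/2} S^2 A) budget.

import numpy as np

# Illustrative sketch only: a trivial "objective-specific" prescriber and a
# toy "objective-agnostic" collector on a small random communicating MDP.
rng = np.random.default_rng(0)

S, A = 5, 2
# Random transition kernel: P[s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(S), size=(S, A))

def prescribe_uniform(n_per_pair: int) -> np.ndarray:
    """Objective-specific component (placeholder): request the same number
    of samples at every (s, a), e.g., for uniform model estimation."""
    return np.full((S, A), n_per_pair, dtype=int)

def collect(b: np.ndarray) -> np.ndarray:
    """Objective-agnostic component (placeholder): gather the prescribed
    samples by acting uniformly at random until every b(s, a) is met."""
    counts = np.zeros((S, A), dtype=int)
    transitions = np.zeros((S, A, S), dtype=int)  # empirical model
    s = 0
    while (counts < b).any():
        a = rng.integers(A)
        s_next = rng.choice(S, p=P[s, a])
        if counts[s, a] < b[s, a]:
            counts[s, a] += 1
            transitions[s, a, s_next] += 1
        s = s_next
    return transitions

b = prescribe_uniform(20)        # b(s, a) for every state-action pair
model = collect(b)               # B = b.sum() samples collected in total
P_hat = model / model.sum(axis=2, keepdims=True)
print("max estimation error:", float(np.abs(P_hat - P).max()))

In this sketch the random-walk collector can take exponentially long in unfavorable MDPs; the point of the paper's result is precisely to replace it with a strategy whose collection time scales with B and the diameter D.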
Pages: 14