Online Bootstrap Inference for Policy Evaluation in Reinforcement Learning

Cited by: 7
Authors
Ramprasad, Pratik [1 ]
Li, Yuantong [2 ]
Yang, Zhuoran [3 ]
Wang, Zhaoran [4 ]
Sun, Will Wei [5 ]
Cheng, Guang [2 ]
Affiliations
[1] Purdue Univ, Dept Stat, W Lafayette, IN 47907 USA
[2] UCLA, Dept Stat, Los Angeles, CA USA
[3] Yale Univ, Dept Stat & Data Sci, New Haven, CT USA
[4] Northwestern Univ, Dept Ind Engn & Management Sci, Evanston, IL 60208 USA
[5] Purdue Univ, Krannert Sch Management, W Lafayette, IN 47907 USA
Funding
National Science Foundation (USA)
Keywords
Asymptotic normality; Multiplier bootstrap; Reinforcement learning; Statistical inference; Stochastic approximation
DOI
10.1080/01621459.2022.2096620
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes
020208; 070103; 0714
Abstract
The recent emergence of reinforcement learning (RL) has created a demand for robust statistical inference methods for the parameter estimates computed using these algorithms. Existing methods for inference in online learning are restricted to settings involving independently sampled observations, while inference methods in RL have so far been limited to the batch setting. The bootstrap is a flexible and efficient approach for statistical inference in online learning algorithms, but its efficacy in settings involving Markov noise, such as RL, has yet to be explored. In this article, we study the use of the online bootstrap method for inference in RL policy evaluation. In particular, we focus on the temporal difference (TD) learning and Gradient TD (GTD) learning algorithms, which are themselves special instances of linear stochastic approximation under Markov noise. The method is shown to be distributionally consistent for statistical inference in policy evaluation, and numerical experiments are included to demonstrate the effectiveness of this algorithm across a range of real RL environments. Supplementary materials for this article are available online.
Pages: 2901-2914
Page count: 14
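As an illustration of the procedure the abstract describes, the sketch below runs an online multiplier bootstrap alongside TD(0) with linear function approximation: each incoming transition updates the point estimate as usual, while a panel of bootstrap iterates applies the same update rescaled by i.i.d. mean-one random weights. The function name online_bootstrap_td, the Gaussian multipliers, the step-size schedule, and the pivotal intervals are illustrative assumptions, not the authors' reference implementation.

import numpy as np


def online_bootstrap_td(transitions, phi, d, gamma=0.99, n_boot=200, seed=0):
    """Illustrative online multiplier-bootstrap TD(0) for policy evaluation.

    transitions: iterable of (s, r, s_next) tuples generated under the target policy.
    phi:         feature map, phi(s) -> np.ndarray of shape (d,).
    Returns the TD(0) estimate and per-coordinate 95% bootstrap confidence intervals.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)                  # main TD(0) iterate
    theta_boot = np.zeros((n_boot, d))   # perturbed bootstrap iterates, run in parallel

    for t, (s, r, s_next) in enumerate(transitions, start=1):
        alpha = 1.0 / t ** 0.75          # polynomially decaying step size (illustrative)
        x, x_next = phi(s), phi(s_next)

        # Standard TD(0) update for the point estimate.
        delta = r + gamma * theta @ x_next - theta @ x
        theta = theta + alpha * delta * x

        # Each bootstrap copy sees the same transition, but its update is rescaled
        # by an i.i.d. random multiplier with mean 1 and variance 1 (Gaussian here).
        w = rng.normal(loc=1.0, scale=1.0, size=n_boot)
        delta_b = r + gamma * theta_boot @ x_next - theta_boot @ x
        theta_boot = theta_boot + alpha * (w * delta_b)[:, None] * x

    # The spread of the bootstrap iterates around the point estimate approximates the
    # sampling error of theta; invert its quantiles for pivotal confidence intervals.
    dev = theta_boot - theta
    lower = theta - np.quantile(dev, 0.975, axis=0)
    upper = theta - np.quantile(dev, 0.025, axis=0)
    return theta, lower, upper

A value estimate at a state s is then theta @ phi(s); an interval for it can be obtained by applying the same quantile inversion to the scalar projections phi(s) @ theta_boot.T rather than to the coordinates of theta.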