Online Bootstrap Inference for Policy Evaluation in Reinforcement Learning

Times Cited: 7
Authors
Ramprasad, Pratik [1 ]
Li, Yuantong [2 ]
Yang, Zhuoran [3 ]
Wang, Zhaoran [4 ]
Sun, Will Wei [5 ]
Cheng, Guang [2 ]
Affiliations
[1] Purdue Univ, Dept Stat, W Lafayette, IN 47907 USA
[2] UCLA, Dept Stat, Los Angeles, CA USA
[3] Yale Univ, Dept Stat & Data Sci, New Haven, CT USA
[4] Northwestern Univ, Dept Ind Engn & Management Sci, Evanston, IL 60208 USA
[5] Purdue Univ, Krannert Sch Management, W Lafayette, IN 47907 USA
Funding
National Science Foundation (USA);
Keywords
Asymptotic normality; Multiplier bootstrap; Reinforcement learning; Statistical inference; Stochastic approximation;
DOI
10.1080/01621459.2022.2096620
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline Codes
020208; 070103; 0714;
Abstract
The recent emergence of reinforcement learning (RL) has created a demand for robust statistical inference methods for the parameter estimates computed using these algorithms. Existing methods for inference in online learning are restricted to settings involving independently sampled observations, while inference methods in RL have so far been limited to the batch setting. The bootstrap is a flexible and efficient approach for statistical inference in online learning algorithms, but its efficacy in settings involving Markov noise, such as RL, has yet to be explored. In this article, we study the use of the online bootstrap method for inference in RL policy evaluation. In particular, we focus on the temporal difference (TD) learning and Gradient TD (GTD) learning algorithms, which are themselves special instances of linear stochastic approximation under Markov noise. The method is shown to be distributionally consistent for statistical inference in policy evaluation, and numerical experiments are included to demonstrate the effectiveness of this algorithm across a range of real RL environments. Supplementary materials for this article are available online.
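The abstract describes an online multiplier bootstrap for TD learning viewed as linear stochastic approximation under Markov noise. The sketch below illustrates the general idea for TD(0) with linear function approximation: alongside the main iterate, it maintains a set of perturbed iterates whose updates are scaled by i.i.d. random multipliers with mean 1 and variance 1, and forms a confidence interval from their spread. The toy Markov reward process, feature map, step-size schedule, multiplier distribution, and interval construction are illustrative assumptions, not the authors' reference implementation.
```python
# Minimal sketch: online multiplier bootstrap for TD(0) policy evaluation
# on an assumed toy Markov reward process (all settings are illustrative).
import numpy as np

rng = np.random.default_rng(0)

n_states, d = 5, 3          # number of states, feature dimension (assumed)
gamma = 0.9                 # discount factor
B = 200                     # number of bootstrap replicates
T = 20_000                  # number of online transitions

# Assumed toy MRP: random transition matrix, rewards, and features.
P = rng.dirichlet(np.ones(n_states), size=n_states)
r = rng.normal(size=n_states)
Phi = rng.normal(size=(n_states, d))

theta = np.zeros(d)                    # main TD(0) iterate
theta_b = np.zeros((B, d))             # perturbed bootstrap iterates
theta_bar = np.zeros(d)                # Polyak-Ruppert average of main iterate
theta_b_bar = np.zeros((B, d))         # averaged bootstrap iterates

s = 0
for t in range(1, T + 1):
    s_next = rng.choice(n_states, p=P[s])
    eta = 1.0 / t**0.67                # assumed decaying step size

    # Standard TD(0) update with linear function approximation.
    delta = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta = theta + eta * delta * Phi[s]

    # Bootstrap copies: same transition, but each update is scaled by an
    # i.i.d. multiplier W with mean 1 and variance 1 (here Exponential(1)).
    W = rng.exponential(1.0, size=B)
    delta_b = r[s] + gamma * theta_b @ Phi[s_next] - theta_b @ Phi[s]
    theta_b = theta_b + eta * (W * delta_b)[:, None] * Phi[s]

    # Running Polyak-Ruppert averages used for inference.
    theta_bar += (theta - theta_bar) / t
    theta_b_bar += (theta_b - theta_b_bar) / t
    s = s_next

# Basic-bootstrap-style 95% confidence interval for the value of state 0,
# built from the spread of the bootstrap averages around the main average.
v_hat = Phi[0] @ theta_bar
v_boot = theta_b_bar @ Phi[0]
lo, hi = np.quantile(v_boot - v_hat, [0.025, 0.975])
print(f"V(s=0) estimate: {v_hat:.3f}, 95% CI: [{v_hat - hi:.3f}, {v_hat - lo:.3f}]")
```
Because the bootstrap replicates are updated online from the same single transition stream, the procedure requires no replay of past data, which is the practical appeal of the approach summarized in the abstract.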
Pages: 2901-2914
Number of Pages: 14
Related Papers
50 records in total
  • [1] Online Reinforcement Learning by Bayesian Inference
    Xia, Zhongpu
    Zhao, Dongbin
    2015 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2015,
  • [2] Bootstrap Advantage Estimation for Policy Optimization in Reinforcement Learning
    Rahman, Md Masudur
    Xue, Yexiang
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 234 - 239
  • [3] Online reinforcement learning control by Bayesian inference
    Xia, Zhongpu
    Zhao, Dongbin
    IET CONTROL THEORY AND APPLICATIONS, 2016, 10 (12): : 1331 - 1338
  • [4] Lifelong Incremental Reinforcement Learning With Online Bayesian Inference
    Wang, Zhi
    Chen, Chunlin
    Dong, Daoyi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (08) : 4003 - 4016
  • [5] Online Reinforcement Learning for Mixed Policy Scopes
    Zhang, Junzhe
    Bareinboim, Elias
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [6] Online bootstrap inference for the geometric median
    Cheng, Guanghui
    Xiong, Qiang
    Lin, Ruitao
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2024, 197
  • [7] Adaptive Policy Learning for Offline-to-Online Reinforcement Learning
    Zheng, Han
    Luo, Xufang
    Wei, Pengfei
    Song, Xuan
    Li, Dongsheng
    Jiang, Jing
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023, : 11372 - 11380
  • [8] Conformal bootstrap with reinforcement learning
    Kantor, Gergely
    Niarchos, Vasilis
    Papageorgakis, Constantinos
    PHYSICAL REVIEW D, 2022, 105 (02)
  • [9] Offline Evaluation of Online Reinforcement Learning Algorithms
    Mandel, Travis
    Liu, Yun-En
    Brunskill, Emma
    Popovic, Zoran
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 1926 - 1933
  • [10] Machine Learning and Causal Inference for Policy Evaluation
    Athey, Susan
    KDD'15: PROCEEDINGS OF THE 21ST ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2015, : 5 - 6