PROJECTED STATE-ACTION BALANCING WEIGHTS FOR OFFLINE REINFORCEMENT LEARNING

Cited: 1
Authors
Wang, Jiayi [1 ]
Qi, Zhengling [2 ]
Wong, Raymond K. W. [3 ]
Affiliations
[1] Univ Texas Dallas, Dept Math Sci, Richardson, TX 75083 USA
[2] George Washington Univ, Dept Decis Sci, Washington, DC 20052 USA
[3] Texas A&M Univ, Dept Stat, College Stn, TX 77843 USA
Source
ANNALS OF STATISTICS | 2023, Vol. 51, No. 4
Funding
National Science Foundation (USA);
Keywords
Infinite horizons; Markov decision process; Policy evaluation; Reinforcement learning; DYNAMIC TREATMENT REGIMES; RATES; CONVERGENCE; INFERENCE;
DOI
10.1214/23-AOS2302
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Subject Classification Codes
020208; 070103; 0714;
Abstract
Off-policy evaluation is a fundamental and challenging problem in reinforcement learning (RL). This paper focuses on value estimation of a target policy based on pre-collected data generated from a possibly different policy, under the framework of infinite-horizon Markov decision processes. Motivated by the recently developed marginal importance sampling method in RL and the covariate balancing idea in causal inference, we propose a novel estimator with approximately projected state-action balancing weights for the policy value estimation. We obtain the convergence rate of these weights and show that the proposed value estimator is asymptotically normal under technical conditions. In terms of asymptotics, our results scale with both the number of trajectories and the number of decision points in each trajectory. As such, consistency can still be achieved with a limited number of subjects when the number of decision points diverges. In addition, we develop a necessary and sufficient condition for establishing the well-posedness of the operator that relates to the nonparametric Q-function estimation in the off-policy setting, which characterizes the difficulty of Q-function estimation and may be of independent interest. Numerical experiments demonstrate the promising performance of our proposed estimator.
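The marginal importance sampling idea underlying the abstract can be illustrated with a minimal sketch. This is not the paper's projected-balancing-weights estimator; it is only the generic weighted value estimate that such weights plug into, assuming hypothetical precomputed marginal density-ratio weights w(s, a) for the discounted visitation distribution and a discount factor gamma:

```python
import numpy as np

def mis_value_estimate(weights, rewards, gamma=0.95):
    """Marginal importance sampling estimate of a target policy's value.

    weights: hypothetical marginal density-ratio weights w(s_i, a_i),
             one per observed transition (how they are estimated is the
             hard part, e.g. via balancing as in the paper).
    rewards: observed one-step rewards r_i from the offline data.

    Uses the identity V(pi) = E_{d_b}[w(S, A) * R] / (1 - gamma),
    approximated by the empirical mean over the offline sample.
    """
    weights = np.asarray(weights, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    return float(np.mean(weights * rewards) / (1.0 - gamma))

# Sanity check: if behavior and target policies coincide, w = 1 everywhere,
# and a constant reward of 1 gives the geometric-series value 1 / (1 - gamma).
v = mis_value_estimate(np.ones(100), np.ones(100), gamma=0.5)
```

With gamma = 0.5, unit weights, and unit rewards, the estimate equals 2.0, matching the discounted sum 1 + 0.5 + 0.25 + ... The paper's contribution lies in how the weights themselves are estimated, via approximately projected state-action balancing conditions rather than direct density-ratio modeling.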
Pages: 1639-1665 (27 pages)