Offline Reinforcement Learning as Anti-exploration

Cited by: 0
Authors
Rezaeifar, Shideh [1 ]
Dadashi, Robert [2 ]
Vieillard, Nino [2 ,3 ]
Hussenot, Leonard [2 ,4 ]
Bachem, Olivier [2 ]
Pietquin, Olivier [2 ]
Geist, Matthieu [2 ]
Affiliations
[1] Univ Geneva, Geneva, Switzerland
[2] Google Res, Brain Team, Mountain View, CA USA
[3] Univ Lorraine, CNRS, INRIA, IECL, F-54000 Nancy, France
[4] Univ Lille, CNRS, INRIA, UMR 9189, CRIStAL, Villeneuve d'Ascq, France
Source
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2022
Keywords
ALGORITHM;
DOI
Not available
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Offline Reinforcement Learning (RL) aims at learning an optimal control from a fixed dataset, without interactions with the system. An agent in this setting should avoid selecting actions whose consequences cannot be predicted from the data. This is the converse of exploration in RL, which favors such actions. We thus take inspiration from the literature on bonus-based exploration to design a new offline RL agent. The core idea is to subtract a prediction-based exploration bonus from the reward, instead of adding it for exploration. This allows the policy to stay close to the support of the dataset, and practically extends some previous pessimism-based offline RL methods to a deep learning setting with arbitrary bonuses. We also connect this approach to a more common regularization of the learned policy towards the data. Instantiated with a bonus based on the prediction error of a variational autoencoder, we show that our simple agent is competitive with the state of the art on a set of continuous control locomotion and manipulation tasks.
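To make the abstract's core step concrete, here is a minimal PyTorch sketch, not the authors' implementation: the `VAE`, `anti_exploration_bonus`, and `penalized_td_target` names, the network sizes, the choice to reconstruct concatenated (state, action) pairs, the placement of the bonus on the policy's proposed next action, and the `alpha` penalty scale are all assumptions made for illustration.

```python
# A minimal sketch of the anti-exploration recipe described in the abstract
# (assumed names, sizes, and bonus placement; not the authors' exact code):
# fit a VAE to dataset (state, action) pairs, use its reconstruction error
# as a novelty bonus, and SUBTRACT it in the TD target so that actions the
# data cannot explain look unattractive to the learned policy.
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Plain VAE over concatenated (state, action) vectors (an assumption
    of this sketch; layer sizes are illustrative)."""
    def __init__(self, in_dim: int, latent_dim: int = 8, hidden: int = 64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.log_std = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, in_dim)
        )

    def forward(self, x: torch.Tensor):
        h = self.enc(x)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-4.0, 4.0)
        z = mu + log_std.exp() * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), mu, log_std

@torch.no_grad()
def anti_exploration_bonus(vae: VAE, states, actions) -> torch.Tensor:
    """Per-sample reconstruction error of (s, a): small on the dataset
    support, large for pairs the VAE has never seen."""
    x = torch.cat([states, actions], dim=-1)
    recon, _, _ = vae(x)
    return ((recon - x) ** 2).mean(dim=-1)

@torch.no_grad()
def penalized_td_target(vae, target_q, rewards, next_states, next_actions,
                        gamma: float = 0.99, alpha: float = 1.0):
    """TD target with the bonus SUBTRACTED (the 'anti' in anti-exploration).
    Penalizing the policy's proposed next action is this sketch's choice of
    placement, since dataset pairs are in-support by construction."""
    bonus = anti_exploration_bonus(vae, next_states, next_actions)
    next_q = target_q(torch.cat([next_states, next_actions], dim=-1)).squeeze(-1)
    return rewards + gamma * (next_q - alpha * bonus)
```

Flipping the sign of `alpha` recovers a standard bonus-based exploration target (the symmetry the abstract builds on), while a positive `alpha` keeps the learned policy close to the support of the dataset.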
Pages: 8106-8114
Number of pages: 9
Related Papers
(50 items in total)
  • [41] Offline Evaluation of Online Reinforcement Learning Algorithms
    Mandel, Travis
    Liu, Yun-En
    Brunskill, Emma
    Popovic, Zoran
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016: 1926-1933
  • [42] Efficient Offline Reinforcement Learning With Relaxed Conservatism
    Huang, Longyang
    Dong, Botao
    Zhang, Weidong
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (08): 5260-5272
  • [43] Federated Offline Reinforcement Learning With Multimodal Data
    Wen, Jiabao
    Dai, Huiao
    He, Jingyi
    Xi, Meng
    Xiao, Shuai
    Yang, Jiachen
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (01): 4266-4276
  • [44] Is Pessimism Provably Efficient for Offline Reinforcement Learning?
    Jin, Ying
    Yang, Zhuoran
    Wang, Zhaoran
    MATHEMATICS OF OPERATIONS RESEARCH, 2024
  • [45] Supported Policy Optimization for Offline Reinforcement Learning
    Wu, Jialong
    Wu, Haixu
    Qiu, Zihan
    Wang, Jianmin
    Long, Mingsheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [46] Improving Offline Reinforcement Learning with Inaccurate Simulators
    Hou, Yiwen
    Sun, Haoyuan
    Ma, Jinming
    Wu, Feng
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2024), 2024: 5162-5168
  • [47] Corruption-Robust Offline Reinforcement Learning
    Zhang, Xuezhou
    Chen, Yiding
    Zhu, Jerry
    Sun, Wen
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, 2022, 151: 5757-5773
  • [48] Offline Quantum Reinforcement Learning in a Conservative Manner
    Cheng, Zhihao
    Zhang, Kaining
    Shen, Li
    Tao, Dacheng
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023: 7148-7156
  • [49] Advancing RAN Slicing with Offline Reinforcement Learning
    Yang, Kun
    Yeh, Shu-ping
    Zhang, Menglei
    Sydir, Jerry
    Yang, Jing
    Shen, Cong
    2024 IEEE INTERNATIONAL SYMPOSIUM ON DYNAMIC SPECTRUM ACCESS NETWORKS (DYSPAN 2024), 2024: 331-338
  • [50] Percentile Criterion Optimization in Offline Reinforcement Learning
    Lobo, Elita A.
    Cousins, Cyrus
    Zick, Yair
    Petrik, Marek
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023