Offline Reinforcement Learning as Anti-exploration

Cited by: 0
Authors
Rezaeifar, Shideh [1 ]
Dadashi, Robert [2 ]
Vieillard, Nino [2 ,3 ]
Hussenot, Leonard [2 ,4 ]
Bachem, Olivier [2 ]
Pietquin, Olivier [2 ]
Geist, Matthieu [2 ]
Affiliations
[1] Univ Geneva, Geneva, Switzerland
[2] Google Res, Brain Team, Mountain View, CA USA
[3] Univ Lorraine, CNRS, INRIA, IECL, F-54000 Nancy, France
[4] Univ Lille, CNRS, INRIA, UMR 9189, CRIStAL, Villeneuve d'Ascq, France
Source
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2022
Keywords
ALGORITHM;
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Offline Reinforcement Learning (RL) aims at learning an optimal control from a fixed dataset, without interactions with the system. An agent in this setting should avoid selecting actions whose consequences cannot be predicted from the data. This is the converse of exploration in RL, which favors such actions. We thus take inspiration from the literature on bonus-based exploration to design a new offline RL agent. The core idea is to subtract a prediction-based exploration bonus from the reward, instead of adding it for exploration. This allows the policy to stay close to the support of the dataset, and practically extends some previous pessimism-based offline RL methods to a deep learning setting with arbitrary bonuses. We also connect this approach to a more common regularization of the learned policy towards the data. Instantiated with a bonus based on the prediction error of a variational autoencoder, we show that our simple agent is competitive with the state of the art on a set of continuous control locomotion and manipulation tasks.
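The recipe described in the abstract lends itself to a short illustration. Below is a minimal, hypothetical sketch in PyTorch (not the authors' released code) of the anti-exploration idea: a conditional variational autoencoder is fit to the dataset's state-action pairs, and its reconstruction error is used as a bonus that is subtracted from the reward, penalizing actions the data cannot explain. All names (ConditionalVAE, anti_exploration_bonus, bonus_scale) and architectural choices are assumptions made for illustration only.

# Minimal sketch, assuming a conditional VAE over (state, action) pairs from the
# offline dataset; the reconstruction error acts as an anti-exploration bonus.
import torch
import torch.nn as nn


class ConditionalVAE(nn.Module):
    # Reconstructs an action given the state in which it was taken.
    def __init__(self, state_dim, action_dim, latent_dim=32, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),  # mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, action):
        stats = self.encoder(torch.cat([state, action], dim=-1))
        mean, log_var = stats.chunk(2, dim=-1)
        z = mean + torch.randn_like(mean) * (0.5 * log_var).exp()
        recon = self.decoder(torch.cat([state, z], dim=-1))
        return recon, mean, log_var


def anti_exploration_bonus(vae, state, action):
    # Per-sample reconstruction error: large for actions unlike those in the data.
    with torch.no_grad():
        recon, _, _ = vae(state, action)
    return ((recon - action) ** 2).mean(dim=-1)


def pessimistic_reward(reward, vae, state, action, bonus_scale=1.0):
    # Subtract the bonus from the reward (the reverse of exploration bonuses).
    return reward - bonus_scale * anti_exploration_bonus(vae, state, action)

In a full agent, pessimistic_reward would replace the raw reward in the Bellman target of an off-the-shelf actor-critic, with the VAE trained beforehand on the dataset using the usual reconstruction-plus-KL objective.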
Pages: 8106-8114
Page count: 9
Related Papers
50 in total
  • [31] Discrete Uncertainty Quantification For Offline Reinforcement Learning
    Perez, Jose Luis
    Corrochano, Javier
    Garcia, Javier
    Majadas, Ruben
    Ibanez-Llano, Cristina
    Perez, Sergio
    Fernandez, Fernando
    JOURNAL OF ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING RESEARCH, 2023, 13 (04) : 273 - 287
  • [32] Supported Value Regularization for Offline Reinforcement Learning
    Mao, Yixiu
    Zhang, Hongchang
    Chen, Chen
    Xu, Yi
    Ji, Xiangyang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [33] Boundary Data Augmentation for Offline Reinforcement Learning
    Shen, Jiahao
    Jiang, Ke
    Tan, Xiaoyang
    ZTE COMMUNICATIONS, 2023, 21 (03) : 29 - 36
  • [34] Fast Rates for the Regret of Offline Reinforcement Learning
    Hu, Yichun
    Kallus, Nathan
    Uehara, Masatoshi
    MATHEMATICS OF OPERATIONS RESEARCH, 2025, 50 (01)
  • [35] Mutual Information Regularized Offline Reinforcement Learning
    Ma, Xiao
    Kang, Bingyi
    Xu, Zhongwen
    Lin, Min
    Yan, Shuicheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [36] Revisiting the Minimalist Approach to Offline Reinforcement Learning
    Tarasov, Denis
    Kurenkov, Vladislav
    Nikulin, Alexander
    Kolesnikov, Sergey
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [37] Bellman Residual Orthogonalization for Offline Reinforcement Learning
    Zanette, Andrea
    Wainwright, Martin J.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [38] Offline Reinforcement Learning with Behavioral Supervisor Tuning
    Srinivasan, Padmanaba
    Knottenbelt, William
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 4929 - 4937
  • [39] Offline Reinforcement Learning for Automated Stock Trading
    Lee, Namyeong
    Moon, Jun
    IEEE ACCESS, 2023, 11 : 112577 - 112589
  • [40] On the Role of Discount Factor in Offline Reinforcement Learning
    Hu, Hao
    Yang, Yiqing
    Zhao, Qianchuan
    Zhang, Chongjie
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,