Sample-efficient Reinforcement Learning Representation Learning with Curiosity Contrastive Forward Dynamics Model

Cited by: 9
Authors
Nguyen, Thanh [1 ]
Luu, Tung M. [1 ]
Vu, Thang [1 ]
Yoo, Chang D. [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Fac Elect Engn, Daejeon 34141, South Korea
Keywords
LEVEL; GO;
DOI
10.1109/IROS51168.2021.9636536
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Developing a reinforcement learning (RL) agent capable of performing complex control tasks directly from high-dimensional observations such as raw pixels remains a challenge, as further effort is needed to improve the sample efficiency and generalization of RL algorithms. This paper proposes a learning framework, the Curiosity Contrastive Forward Dynamics Model (CCFDM), for more sample-efficient RL directly from raw pixels. CCFDM incorporates a forward dynamics model (FDM) and performs contrastive learning to train its deep convolutional neural network-based image encoder (IE), extracting spatial and temporal information conducive to sample-efficient RL. In addition, during training, CCFDM provides intrinsic rewards derived from the FDM prediction error, encouraging the curiosity of the RL agent and thereby improving exploration. The diverse, less repetitive observations produced by this exploration strategy and by the data augmentation used in contrastive learning improve not only sample efficiency but also generalization. Existing model-free RL methods such as Soft Actor-Critic, when built on top of CCFDM, outperform prior state-of-the-art pixel-based RL methods on the DeepMind Control Suite benchmark.
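To make the two mechanisms described in the abstract concrete, the following minimal PyTorch sketch illustrates (1) a forward dynamics model whose prediction error supplies an intrinsic curiosity reward, and (2) an InfoNCE-style contrastive loss that matches predicted next-step latents against target latents. All class names, network sizes, and the reward scale eta are illustrative assumptions, not the paper's exact implementation; details such as the momentum-updated target encoder and the data augmentation pipeline are simplified away here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Toy convolutional image encoder mapping pixel observations to a latent vector.
    def __init__(self, latent_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
        )
        self.fc = nn.LazyLinear(latent_dim)

    def forward(self, obs):
        return self.fc(self.conv(obs).flatten(1))

class ForwardDynamics(nn.Module):
    # FDM: predicts the next latent state from the current latent and the action.
    def __init__(self, latent_dim=50, action_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

def intrinsic_reward(fdm, z, a, z_next, eta=0.1):
    # Curiosity bonus: scaled FDM prediction error; no gradient flows to the policy.
    # The scale eta is an assumed hyperparameter.
    with torch.no_grad():
        err = F.mse_loss(fdm(z, a), z_next, reduction="none").mean(-1)
    return eta * err

def info_nce_loss(pred_next, target_next, temperature=0.1):
    # Contrastive loss: each predicted next latent should match its own target
    # latent (positive, on the diagonal) against the rest of the batch (negatives).
    pred = F.normalize(pred_next, dim=-1)
    targ = F.normalize(target_next, dim=-1)
    logits = pred @ targ.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

# Example usage with random tensors (84x84 RGB frames, 6-dim actions):
enc, fdm = Encoder(), ForwardDynamics()
obs, obs_next = torch.randn(8, 3, 84, 84), torch.randn(8, 3, 84, 84)
act = torch.randn(8, 6)
z, z_next = enc(obs), enc(obs_next)
r_int = intrinsic_reward(fdm, z, act, z_next)            # per-sample curiosity reward
loss = info_nce_loss(fdm(z, act), z_next.detach())       # trains encoder and FDM

Detaching the target latents here stands in for the separate target encoder used in contrastive RL methods; in practice a momentum-averaged copy of the encoder typically produces the targets.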
Pages: 3471-3477
Number of pages: 7