Self-Supervised Exploration via Disagreement

Cited by: 0
Authors:
Pathak, Deepak [1 ]
Gandhi, Dhiraj [2 ]
Gupta, Abhinav [2 ,3 ]
Affiliations:
[1] UC Berkeley, Berkeley, CA 94720 USA
[2] CMU, Pittsburgh, PA USA
[3] Facebook AI Res, Menlo Pk, CA USA
Keywords:
DOI: not available
CLC Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
Efficient exploration is a long-standing problem in sensorimotor learning. Major advances have been demonstrated in noise-free, non-stochastic domains such as video games and simulation. However, most of these formulations either get stuck in environments with stochastic dynamics or are too inefficient to be scalable to real robotics setups. In this paper, we propose a formulation for exploration inspired by work in the active learning literature. Specifically, we train an ensemble of dynamics models and incentivize the agent to explore such that the disagreement of those ensembles is maximized. This allows the agent to learn skills by exploring in a self-supervised manner without any external reward. Notably, we further leverage the disagreement objective to optimize the agent's policy in a differentiable manner, without using reinforcement learning, which results in sample-efficient exploration. We demonstrate the efficacy of this formulation across a variety of benchmark environments including stochastic Atari, MuJoCo, and Unity. Finally, we implement our differentiable exploration on a real robot which learns to interact with objects completely from scratch. Project videos and code are at https://pathak22.github.io/exploration-by-disagreement/.
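The core idea in the abstract — rewarding the agent where an ensemble of forward dynamics models disagrees — can be sketched in a few lines. This is a minimal illustrative example, not the authors' implementation: the ensemble members here are random linear models, and the disagreement signal is the variance of their next-state predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, ENSEMBLE_SIZE = 4, 2, 5

def make_forward_model():
    # Each ensemble member predicts the next state from (state, action).
    # A random linear map stands in for a trained neural network.
    W = rng.normal(size=(STATE_DIM + ACTION_DIM, STATE_DIM))
    return lambda s, a: np.concatenate([s, a]) @ W

ensemble = [make_forward_model() for _ in range(ENSEMBLE_SIZE)]

def intrinsic_reward(state, action):
    # Disagreement = variance across ensemble predictions of the next
    # state, averaged over state dimensions. High variance marks regions
    # the models have not yet learned, i.e. regions worth exploring.
    preds = np.stack([f(state, action) for f in ensemble])  # (K, STATE_DIM)
    return float(preds.var(axis=0).mean())

s = rng.normal(size=STATE_DIM)
a = rng.normal(size=ACTION_DIM)
r = intrinsic_reward(s, a)  # non-negative scalar
```

Because the reward is a mean of variances, it is non-negative and shrinks to zero as the ensemble members converge on stochastic-but-learnable dynamics — which is why, per the abstract, this objective does not get stuck on noise the way raw prediction-error curiosity can.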
Pages: 10