Model-Based Reinforcement Learning With Isolated Imaginations

Cited: 0
Authors
Pan, Minting [1]
Zhu, Xiangming [1]
Zheng, Yitao [1]
Wang, Yunbo [1]
Yang, Xiaokang [1]
Affiliations
[1] Shanghai Jiao Tong University, AI Institute, MoE Key Lab of Artificial Intelligence, Shanghai 200240, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Decoupled dynamics; model-based reinforcement learning; world model
DOI
10.1109/TPAMI.2023.3335263
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
World models learn the consequences of actions in vision-based interactive systems. However, in practical scenarios such as autonomous driving, noncontrollable dynamics that are independent of, or only sparsely dependent on, action signals often exist, making it challenging to learn effective world models. To address this issue, we propose Iso-Dream++, a model-based reinforcement learning approach with two main contributions. First, we optimize inverse dynamics to encourage the world model to isolate controllable state transitions from the mixed spatiotemporal variations of the environment. Second, we perform policy optimization on the decoupled latent imaginations, rolling out noncontrollable states into the future and adaptively associating them with the current controllable state. This allows long-horizon visuomotor control tasks to benefit from isolating the mixed dynamics sources in the wild, for example, a self-driving car that anticipates the movement of other vehicles and thereby avoids potential risks. Building on our previous work (Pan et al., 2022), we further consider the sparse dependencies between controllable and noncontrollable states, address the training-collapse problem of state decoupling, and validate our approach in transfer-learning setups. Our empirical study demonstrates that Iso-Dream++ significantly outperforms existing reinforcement learning models on CARLA and DeepMind Control.
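To make the decoupling idea concrete, the Python sketch below is a minimal illustration, not the authors' implementation: all module names, layer choices, and dimensions are assumptions. It shows a world model with separate controllable and noncontrollable latent branches, an inverse-dynamics head that pushes action-relevant information into the controllable branch, and an attention step that associates an imagined action-free rollout with the current controllable state before it is handed to the policy.

# A minimal, illustrative sketch (assumed structure, not the authors' released code)
# of a decoupled world model: a controllable branch driven by actions, an
# action-free noncontrollable branch, an inverse-dynamics head, and an attention
# step that associates rolled-out noncontrollable states with the current
# controllable state before policy optimization.
import torch
import torch.nn as nn


class DecoupledWorldModel(nn.Module):
    def __init__(self, obs_dim=64, act_dim=4, latent_dim=32):
        super().__init__()
        # One encoder whose output is split into (controllable, noncontrollable) latents.
        self.encoder = nn.Linear(obs_dim, 2 * latent_dim)
        # Action-conditioned transition for the controllable branch.
        self.ctrl_dynamics = nn.GRUCell(latent_dim + act_dim, latent_dim)
        # Action-free transition for the noncontrollable branch.
        self.free_dynamics = nn.GRUCell(latent_dim, latent_dim)
        # Inverse-dynamics head: predict the action from consecutive controllable states.
        self.inverse_dynamics = nn.Linear(2 * latent_dim, act_dim)
        # Attention used to adaptively associate future noncontrollable states
        # with the current controllable state.
        self.attn = nn.MultiheadAttention(latent_dim, num_heads=4, batch_first=True)

    def encode(self, obs):
        z_ctrl, z_free = self.encoder(obs).chunk(2, dim=-1)
        return z_ctrl, z_free

    def step_controllable(self, z_ctrl, action):
        # Controllable transition conditioned on the action signal.
        return self.ctrl_dynamics(torch.cat([z_ctrl, action], dim=-1), z_ctrl)

    def inverse_dynamics_loss(self, z_ctrl, z_ctrl_next, action):
        # Recovering the action from controllable states encourages the world
        # model to isolate controllable transitions in this branch.
        pred = self.inverse_dynamics(torch.cat([z_ctrl, z_ctrl_next], dim=-1))
        return ((pred - action) ** 2).mean()

    def rollout_noncontrollable(self, z_free, horizon):
        # Roll the action-free branch into the future without any actions.
        future = []
        for _ in range(horizon):
            z_free = self.free_dynamics(torch.zeros_like(z_free), z_free)
            future.append(z_free)
        return torch.stack(future, dim=1)  # (batch, horizon, latent_dim)

    def associate(self, z_ctrl, future_free):
        # Attend from the current controllable state to the imagined future of
        # the noncontrollable branch; the result is fed to the actor/critic.
        context, _ = self.attn(z_ctrl.unsqueeze(1), future_free, future_free)
        return torch.cat([z_ctrl, context.squeeze(1)], dim=-1)


# Illustrative usage with random tensors standing in for encoded observations.
model = DecoupledWorldModel()
obs, next_obs = torch.randn(8, 64), torch.randn(8, 64)
action = torch.randn(8, 4)
z_ctrl, z_free = model.encode(obs)
z_ctrl_next, _ = model.encode(next_obs)
loss = model.inverse_dynamics_loss(z_ctrl, z_ctrl_next, action)
policy_input = model.associate(z_ctrl, model.rollout_noncontrollable(z_free, horizon=5))

Attending over the imagined action-free rollout is what lets the policy anticipate noncontrollable dynamics, such as other vehicles' motion, several steps ahead of the current state.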
Pages: 2788-2803
Page count: 16