Adversarial Active Exploration for Inverse Dynamics Model Learning

Citations: 0
Authors:
Hong, Zhang-Wei [1]
Fu, Tsu-Jui [1]
Shann, Tzu-Yun [1]
Chang, Yi-Hsiang [1]
Lee, Chun-Yi [1]
Affiliations:
[1] Natl Tsing Hua Univ, Dept Comp Sci, Elsa Lab, Hsinchu, Taiwan
DOI: not available
Chinese Library Classification: TP39 [Applications of computers]
Subject classification codes: 081203; 0835
Abstract:
We present adversarial active exploration for inverse dynamics model learning, a simple yet effective learning scheme that incentivizes exploration in an environment without any human intervention. Our framework consists of a deep reinforcement learning (DRL) agent and an inverse dynamics model contesting with each other. The former collects training samples for the latter, with the objective of maximizing the latter's prediction error. The latter is trained on the samples collected by the former, and generates rewards for the former whenever it fails to predict the action the former actually took. In this competitive setting, the DRL agent learns to generate samples that the inverse dynamics model fails to predict correctly, while the inverse dynamics model learns to adapt to these challenging samples. We further propose a reward structure that encourages the DRL agent to collect only moderately hard samples, rather than overly hard ones that prevent the inverse model from learning effectively. We evaluate our method on several robotic arm and hand manipulation tasks against multiple baseline models. Experimental results show that our method is comparable to methods trained directly on expert demonstrations, and superior to the other baselines, even without any human priors.
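The adversarial reward described above can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the exploration agent is rewarded by the inverse model's action-prediction error, and the reward is shaped so that errors far beyond a hypothetical threshold `delta` stop paying off, discouraging overly hard samples.

```python
import numpy as np

def adversarial_reward(action_taken, action_predicted, delta=0.5):
    """Reward for the exploration agent (illustrative sketch).

    `action_taken` is the action the DRL agent actually executed;
    `action_predicted` is the inverse dynamics model's prediction of it.
    The reward grows with the prediction error up to `delta`, then
    decays back toward zero, so only *moderately* hard samples pay off.
    `delta` and this shaping are assumptions for illustration.
    """
    error = float(np.linalg.norm(
        np.asarray(action_taken) - np.asarray(action_predicted)))
    if error <= delta:
        return error                       # harder samples earn more...
    return max(0.0, 2 * delta - error)     # ...until they get too hard

# A sample that fools the model slightly earns a small reward, a
# moderately hard sample earns the most, and an extremely hard sample
# (error >> delta) earns nothing.
```

In training, this reward would replace the environment's task reward for the exploration agent, while the inverse dynamics model is simultaneously fit by supervised regression on the (state, next state, action) tuples the agent collects.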
Pages: 14