An imitation learning framework for generating multi-modal trajectories from unstructured demonstrations

Cited by: 3
Authors
Peng, Jian-Wei [1 ]
Hu, Min-Chun [2 ]
Chu, Wei-Ta [1 ]
Affiliations
[1] Natl Cheng Kung Univ, Dept Comp Sci & Informat Engn, Tainan, Taiwan
[2] Natl Tsing Hua Univ, Dept Comp Sci, Hsinchu, Taiwan
Keywords
Trajectory generation; Motion synthesis; Imitation learning; Reinforcement learning; Generative adversarial networks; Human motion prediction
DOI
10.1016/j.neucom.2022.05.076
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The main challenge of the trajectory generation problem is to generate long-term as well as diverse trajectories. Generative Adversarial Imitation Learning (GAIL) is a well-known model-free imitation learning algorithm that can be utilized to generate trajectory data, while vanilla GAIL would fail to capture multi-modal demonstrations. Recent methods propose latent variable models to solve this problem; however, previous works may have a mode missing problem. In this work, we propose a novel method to generate long-term trajectories that are controllable by a continuous latent variable based on GAIL and a conditional Variational Autoencoder (cVAE). We further assume that subsequences of the same trajectory should be encoded to similar locations in the latent space. Therefore, we introduce a contrastive loss in the training of the encoder. In our motion synthesis task, we propose to first construct a low-dimensional motion manifold by using a VAE to reduce the burden of our imitation learning model. Our experimental results show that the proposed model outperforms the state-of-the-art methods and can be applied to motion synthesis. © 2022 Elsevier B.V. All rights reserved.
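The abstract states that subsequences of the same trajectory should be encoded near each other in latent space, enforced by a contrastive loss on the encoder. The paper's exact loss is not given here; the following is a minimal sketch of a generic margin-based contrastive loss (Hadsell-style) over pairs of subsequence embeddings, with all names and the margin value being illustrative assumptions.

```python
import numpy as np

def contrastive_loss(z_a, z_b, labels, margin=1.0):
    """Margin-based contrastive loss over pairs of subsequence embeddings.

    z_a, z_b : (N, D) arrays of latent codes, one pair per row.
    labels   : (N,) array, 1 if the pair comes from the same trajectory.
    Same-trajectory pairs are pulled together; pairs from different
    trajectories are pushed at least `margin` apart.
    """
    d = np.linalg.norm(z_a - z_b, axis=1)                 # pairwise Euclidean distances
    pos = labels * d**2                                   # attract same-trajectory pairs
    neg = (1 - labels) * np.maximum(margin - d, 0.0)**2   # repel other pairs up to the margin
    return float(np.mean(pos + neg))
```

In this formulation the loss is zero when same-trajectory subsequences coincide in latent space and when different-trajectory subsequences are separated by at least the margin, matching the clustering behavior the abstract describes.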
Pages: 712–723
Page count: 12
Related papers
50 records
  • [31] MultiJAF: Multi-modal joint entity alignment framework for multi-modal knowledge graph
    Cheng, Bo
    Zhu, Jia
    Guo, Meimei
    NEUROCOMPUTING, 2022, 500 : 581 - 591
  • [32] A NOVEL METHOD FOR AUTOMATICALLY GENERATING MULTI-MODAL DIALOGUE FROM TEXT
    Prendinger, Helmut
    Piwek, Paul
    Ishizuka, Mitsuru
    INTERNATIONAL JOURNAL OF SEMANTIC COMPUTING, 2007, 1 (03) : 319 - 334
  • [33] A Discriminant Information Theoretic Learning Framework for Multi-modal Feature Representation
    Gao, Lei
    Guan, Ling
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2023, 14 (03)
  • [34] MDNNSyn: A Multi-Modal Deep Learning Framework for Drug Synergy Prediction
    Li, Lei
    Li, Haitao
    Ishdorj, Tseren-Onolt
    Zheng, Chunhou
    Su, Yansen
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (10) : 6225 - 6236
  • [35] A Multi-modal Metric Learning Framework for Time Series kNN Classification
    Cao-Tri Do
    Douzal-Chouakria, Ahlame
    Marie, Sylvain
    Rombaut, Michele
    ADVANCED ANALYSIS AND LEARNING ON TEMPORAL DATA, AALTD 2015, 2016, 9785 : 131 - 143
  • [36] Adversarial Imitation Learning from Incomplete Demonstrations
    Sun, Mingfei
    Ma, Xiaojuan
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 3513 - 3519
  • [37] Multi-Modal Legged Locomotion Framework With Automated Residual Reinforcement Learning
    Yu, Chen
    Rosendo, Andre
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04) : 10312 - 10319
  • [38] Robust Imitation Learning from Noisy Demonstrations
    Tangkaratt, Voot
    Charoenphakdee, Nontawat
    Sugiyama, Masashi
    24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130 : 298 - +
  • [39] PUMICE: A Multi-Modal Agent that Learns Concepts and Conditionals from Natural Language and Demonstrations
    Li, Toby Jia-Jun
    Radensky, Marissa
    Jia, Justin
    Singarajah, Kirielle
    Mitchell, Tom M.
    Myers, Brad A.
    PROCEEDINGS OF THE 32ND ANNUAL ACM SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY (UIST 2019), 2019, : 577 - 589
  • [40] A Multi-Modal Vertical Federated Learning Framework Based on Homomorphic Encryption
    Gong, Maoguo
    Zhang, Yuanqiao
    Gao, Yuan
    Qin, A. K.
    Wu, Yue
    Wang, Shanfeng
    Zhang, Yihong
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 1826 - 1839