An All-Purpose Bidirectional Recurrent Autoencoder for Retargeting of Motion Data Represented by Joint Position

Cited: 0
Authors
Zhou Y. [1 ]
Li S. [1 ]
Zhu H. [1 ]
Liu X. [1 ]
Affiliations
[1] School of Computer Science and Information Engineering, Hefei University of Technology, Hefei
Source
Institute of Computing Technology, Vol. 32 (2020); corresponding author: Li, Shujie (lisjhfut@hfut.edu.cn)
Keywords
Bidirectional recurrent autoencoder; Joint positions; Motion retargeting
DOI
10.3724/SP.J.1089.2020.17925
Abstract
We present an all-purpose bidirectional recurrent autoencoder that addresses the lack of generality in existing retargeting networks for motion data represented by joint positions. The autoencoder can retarget motion data from a source character to any target character. It is trained on motion data represented by joint positions, with a loss function defined by the reconstruction error. After training, the hidden units and the reconstructed motion of a given source motion are computed by the autoencoder. We then impose bone-length, foot-trajectory, root-joint-position, and bone-to-bone-angle constraints on the reconstructed motion; the constraint cost is projected back into the hidden-unit space, and the hidden units are optimized iteratively. Experimental results on the CMU motion database show that the proposed autoencoder with these four constraints can retarget motion data represented by joint positions, and that the retargeted results achieve lower bone-length error, lower bone-to-bone-angle error, and better end-effector trajectories and smoothness. © 2020, Beijing China Science Journal Publishing Co. Ltd. All rights reserved.
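The abstract describes a two-stage pipeline: train a bidirectional recurrent autoencoder on joint-position sequences with a reconstruction loss, then retarget by iteratively optimizing the hidden units so that the decoded motion satisfies the target skeleton's constraints. The following is a minimal sketch of that idea in PyTorch; the class and function names, layer sizes, and the single bone-length constraint shown here are illustrative assumptions, not the authors' implementation, which also imposes foot-trajectory, root-joint-position, and bone-to-bone-angle terms.

```python
import torch
import torch.nn as nn

class BiRecurrentAE(nn.Module):
    """Bidirectional recurrent autoencoder over joint-position sequences
    (assumed architecture; trained with an MSE reconstruction loss)."""
    def __init__(self, n_joints, hidden=256, latent=128):
        super().__init__()
        self.encoder = nn.GRU(n_joints * 3, hidden, batch_first=True, bidirectional=True)
        self.to_latent = nn.Linear(2 * hidden, latent)   # per-frame hidden units
        self.from_latent = nn.Linear(latent, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_joints * 3)

    def encode(self, x):                                 # x: (batch, frames, n_joints*3)
        h, _ = self.encoder(x)
        return self.to_latent(h)                         # (batch, frames, latent)

    def decode(self, z):
        h, _ = self.decoder(self.from_latent(z))
        return self.out(h)                               # reconstructed joint positions

def bone_length_loss(pos, parents, target_lengths):
    """Deviation of decoded bone lengths from the target character's skeleton.
    `parents` is a LongTensor of parent joint indices (parents[0] is the root)."""
    pos = pos.view(pos.shape[0], pos.shape[1], -1, 3)    # (batch, frames, joints, 3)
    child = torch.arange(1, pos.shape[2])
    bones = pos[:, :, child] - pos[:, :, parents[1:]]    # parent-to-child vectors
    return (bones.norm(dim=-1) - target_lengths).pow(2).mean()

def retarget(ae, source_motion, parents, target_lengths, steps=200, lr=1e-2):
    """Encode the source motion, then iteratively update the hidden units so
    the decoded motion satisfies the constraints (only the bone-length term
    is sketched; the paper adds foot, root, and angle terms)."""
    z = ae.encode(source_motion).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        recon = ae.decode(z)
        loss = bone_length_loss(recon, parents, target_lengths) \
             + 0.1 * (recon - source_motion).pow(2).mean()   # stay close to the source
        opt.zero_grad()
        loss.backward()
        opt.step()
    return ae.decode(z).detach()
```

Training the autoencoder itself would simply minimize `(ae.decode(ae.encode(x)) - x).pow(2).mean()` over a motion corpus such as CMU before the retargeting stage is run.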
Pages: 315-324, 333