Pose and Skeleton-aware Neural IK for Pose and Motion Editing

Cited by: 1
Authors
Agrawal, Dhruv [1 ,2 ]
Guay, Martin [2 ]
Buhmann, Jakob [2 ]
Borer, Dominik [2 ]
Sumner, Robert W. [1 ,2 ]
Affiliations
[1] Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
[2] DisneyResearch|Studios, Zurich, Switzerland
Keywords
skeletal networks; pose authoring; learned inverse kinematics; 3D animation;
DOI
10.1145/3610548.3618217
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Posing a 3D character for film or games is an iterative and laborious process in which many control handles (e.g. joints) must be manipulated to achieve a compelling result. Neural Inverse Kinematics (IK) is a new type of IK that enables sparse control over a 3D character pose and leverages full-body correlations to complete the un-manipulated joints of the body. While neural IK is promising, current methods are not designed to preserve previous edits in posing workflows. Current models generate a single pose from the handles alone, regardless of what was there previously, making it difficult to preserve any variations and hindering tasks such as pose and motion editing. In this paper, we introduce SKEL-IK, a novel architecture and training scheme that is conditioned on a base pose and designed to flow information directly along the skeletal graph structure, such that hard constraints can be enforced by blocking information flow at certain joints. As a result, we are able to satisfy both hard and soft constraints, as well as preserve un-manipulated parts of the body when desired. Finally, by controlling the base pose in different ways, we demonstrate the ability of our model to perform tasks such as generating variations and quickly editing poses and motions, with less erosion of the base poses compared to the current state of the art.
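To make the constraint mechanism described above concrete, here is a minimal, hypothetical sketch (not the paper's actual network) of how information flow along a skeletal graph can be blocked at hard-constrained joints: each joint's feature is updated from its parent's, except at "pinned" joints, where the incoming message is blocked and the constraint value is kept. The function name `propagate` and the simple averaging update are illustrative assumptions, not SKEL-IK's learned layers.

```python
# Sketch of constraint-aware information flow on a skeletal graph.
# Joints are indexed in topological (root-to-leaf) order.

def propagate(parents, base, constraints):
    """Propagate per-joint features down the skeleton.

    parents[i]  -- index of joint i's parent (-1 for the root)
    base[i]     -- base-pose feature for joint i (a float, for simplicity)
    constraints -- dict mapping a joint index to a hard value that
                   must be preserved exactly (the "pinned" joints)
    """
    out = list(base)
    for j in range(len(parents)):
        if j in constraints:
            # Hard constraint: block the inflow from the parent
            # and keep the constrained value untouched.
            out[j] = constraints[j]
        elif parents[j] >= 0:
            # Soft update: blend the joint's base feature with its
            # (already updated) parent's feature, so edits flow
            # down the chain to un-manipulated joints.
            out[j] = 0.5 * (base[j] + out[parents[j]])
    return out

# Example: a 4-joint chain with joint 2 pinned to 1.0. Joints above
# the pin are unaffected; joint 3 inherits from the pinned joint.
chain = propagate([-1, 0, 1, 2], [0.0, 0.0, 0.0, 0.0], {2: 1.0})
```

In this toy version the pinned joint satisfies its constraint exactly while still passing its value on to descendants, which is the behavior the abstract attributes to blocking information flow at certain joints.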
Pages: 10