Real-time Pose and Shape Reconstruction of Two Interacting Hands With a Single Depth Camera

Cited by: 96
Authors
Mueller, Franziska [1 ]
Davis, Micah [1 ,2 ]
Bernard, Florian [1 ]
Sotnychenko, Oleksandr [1 ]
Verschoor, Mickeal [2 ]
Otaduy, Miguel A. [2 ]
Casas, Dan [2 ]
Theobalt, Christian [1 ]
Affiliations
[1] Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
[2] Universidad Rey Juan Carlos, Madrid, Spain
Source
ACM TRANSACTIONS ON GRAPHICS | 2019, Vol. 38, No. 4
Funding
European Research Council;
Keywords
hand tracking; hand pose estimation; two hands; depth camera; computer vision;
DOI
10.1145/3306346.3322958
Chinese Library Classification (CLC)
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
We present a novel method for real-time pose and shape reconstruction of two strongly interacting hands. Our approach is the first two-hand tracking solution that combines an extensive list of favorable properties: it is marker-less, uses a single consumer-level depth camera, runs in real time, handles inter- and intra-hand collisions, and automatically adjusts to the user's hand shape. To achieve this, we embed a recent parametric hand pose and shape model and a dense correspondence predictor based on a deep neural network into a suitable energy minimization framework. For training the correspondence prediction network, we synthesize a two-hand dataset based on physical simulations that includes both hand pose and shape annotations while at the same time avoiding inter-hand penetrations. To achieve real-time rates, we phrase the model fitting in terms of a nonlinear least-squares problem so that the energy can be optimized with a highly efficient GPU-based Gauss-Newton optimizer. We show state-of-the-art results in scenes that exceed the complexity level demonstrated by previous work, including tight two-hand grasps, significant inter-hand occlusions, and gesture interaction.
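
The model fitting described in the abstract reduces to a nonlinear least-squares problem solved by Gauss-Newton iterations. The sketch below illustrates that general technique on a toy 1-D curve-fitting residual; the damped normal-equations solve and all function names are illustrative assumptions, not the paper's GPU implementation or its hand-model energy.

import numpy as np

# A minimal Gauss-Newton sketch for a least-squares energy
# E(theta) = sum_i r_i(theta)^2. The residual below (fitting
# y = a * exp(b * x)) is a hypothetical stand-in for illustration.
def gauss_newton(residual, jacobian, theta0, iters=20, damping=1e-6):
    """Minimize sum(residual(theta)**2) by damped Gauss-Newton steps."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        r = residual(theta)                  # residual vector, shape (m,)
        J = jacobian(theta)                  # Jacobian, shape (m, n)
        # Solve the damped normal equations (J^T J + lambda I) d = -J^T r.
        JTJ = J.T @ J + damping * np.eye(theta.size)
        theta = theta + np.linalg.solve(JTJ, -J.T @ r)
    return theta

# Toy data: noisy samples of y = 2 * exp(-1.5 * x).
rng = np.random.default_rng(seed=0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * x) + 0.01 * rng.standard_normal(x.size)

def residual(theta):
    a, b = theta
    return a * np.exp(b * x) - y

def jacobian(theta):
    a, b = theta
    e = np.exp(b * x)
    return np.stack([e, a * x * e], axis=1)  # d r / d [a, b]

print(gauss_newton(residual, jacobian, theta0=[1.0, -1.0]))
# Converges to approximately [2.0, -1.5], the generating parameters.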
Pages: 13
Related Papers
50 records in total
  • [21] Real-time 3D Pose Estimation from Single Depth Images
    Schnuerer, Thomas
    Fuchs, Stefan
    Eisenbach, Markus
    Gross, Horst-Michael
PROCEEDINGS OF THE 14TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 5, 2019: 716-724
  • [22] 3D Pose Estimation of Two Interacting Hands from a Monocular Event Camera
    Millerdurai, Christen
    Luvizon, Diogo
    Rudnev, Viktor
    Jonas, Andre
    Wang, Jiayi
    Theobalt, Christian
    Golyanik, Vladislav
2024 INTERNATIONAL CONFERENCE ON 3D VISION, 3DV 2024, 2024: 291-301
  • [23] Real-time plasma boundary shape reconstruction using visible camera on EAST tokamak
    Chen, Ming
    Zhang, Qirui
    Guo, Bihao
    Yang, Jianhua
    Chen, Dalong
    Huang, Yao
    Shen, Biao
    NUCLEAR FUSION, 2025, 65 (01)
  • [24] Real-Time Dynamic 3D Shape Reconstruction with SWIR InGaAs Camera
    Fei, Cheng
    Ma, Yanyang
    Jiang, Shan
    Liu, Junliang
    Sun, Baoqing
    Li, Yongfu
    Gu, Yi
    Zhao, Xian
    Fang, Jiaxiong
    SENSORS, 2020, 20 (02)
  • [25] Real-time camera pose estimation via line tracking
    Liu, Yanli
    Chen, Xianghui
    Gu, Tianlun
    Zhang, Yanci
    Xing, Guanyu
    VISUAL COMPUTER, 2018, 34 (6-8): 899-909
  • [27] Multi-camera system for real-time pose estimation
    Savakis, Andreas
    Erhard, Matthew
    Schimmel, James
    Hnatow, Justin
    INTELLIGENT COMPUTING: THEORY AND APPLICATIONS V, 2007, 6560
  • [28] 3D real-time human reconstruction with a single RGBD camera
    Lu, Yang
    Yu, Han
    Ni, Wei
    Song, Liang
    APPLIED INTELLIGENCE, 2023, 53 (08): 8735-8745
  • [30] Robust Real-Time Human Perception with Depth Camera
    Zhang, Guyue
    Tian, Luchao
    Liu, Ye
    Liu, Jun
    Liu, Xiang An
    Liu, Yang
    Chen, Yan Qiu
ECAI 2016: 22ND EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, 285: 304-310