Playing for 3D Human Recovery

Cited by: 1
Authors
Cai, Zhongang [1 ,2 ]
Zhang, Mingyuan [1 ]
Ren, Jiawei [1 ]
Wei, Chen [2 ]
Ren, Daxuan [1 ]
Lin, Zhengyu [2 ]
Zhao, Haiyu [2 ]
Yang, Lei [2 ]
Loy, Chen Change [1 ]
Liu, Ziwei [1 ]
Affiliations
[1] Nanyang Technol Univ, S Lab, Singapore 639798, Singapore
[2] Shanghai AI Lab, Shanghai 200240, Peoples R China
Keywords
Three-dimensional displays; Annotations; Synthetic data; Shape; Training; Parametric statistics; Solid modeling; Human pose and shape estimation; 3D human recovery; parametric humans; synthetic data; dataset;
DOI
10.1109/TPAMI.2024.3450537
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image- and video-based 3D human recovery (i.e., pose and shape estimation) has achieved substantial progress. However, due to the prohibitive cost of motion capture, existing datasets are often limited in scale and diversity. In this work, we obtain massive human sequences by playing the video game, with automatically annotated 3D ground truths. Specifically, we contribute GTA-Human, a large-scale 3D human dataset generated with the GTA-V game engine, featuring a highly diverse set of subjects, actions, and scenarios. More importantly, we study the use of game-playing data and obtain five major insights. First, game-playing data is surprisingly effective. A simple frame-based baseline trained on GTA-Human outperforms more sophisticated methods by a large margin. For video-based methods, GTA-Human is even on par with the in-domain training set. Second, we discover that synthetic data provides critical complements to the real data that is typically collected indoors. We highlight that our investigation into the domain gap provides explanations for our data mixture strategies, which are simple yet useful and offer new insights to the research community. Third, the scale of the dataset matters. The performance boost is closely related to the amount of additional data available. A systematic study of multiple key factors (such as camera angle and body pose) reveals that model performance is sensitive to data density. Fourth, the effectiveness of GTA-Human is also attributed to its rich collection of strong supervision labels (SMPL parameters), which are otherwise expensive to acquire in real datasets. Fifth, the benefits of synthetic data extend to larger models such as deeper convolutional neural networks (CNNs) and Transformers, for which a significant impact is also observed. We hope our work could pave the way for scaling up 3D human recovery to the real world.
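The real-synthetic data mixture finding in the abstract can be illustrated with a minimal, hypothetical sampling sketch. The function name, ratio, and dataset placeholders below are illustrative assumptions, not the paper's actual training pipeline:

```python
import random

def mix_datasets(real, synthetic, synth_ratio=0.5, n_samples=8, seed=0):
    """Draw a training batch that blends real and synthetic samples.

    Each draw picks a synthetic sample with probability `synth_ratio`,
    otherwise a real one -- a simple stand-in for the mixed-data
    training strategy the abstract describes.
    """
    rng = random.Random(seed)
    batch = []
    for _ in range(n_samples):
        pool = synthetic if rng.random() < synth_ratio else real
        batch.append(rng.choice(pool))
    return batch

# Placeholder samples: tuples of (source tag, frame index).
real_frames = [("real", i) for i in range(100)]       # e.g., indoor mocap data
synth_frames = [("synthetic", i) for i in range(100)] # e.g., GTA-Human frames

batch = mix_datasets(real_frames, synth_frames)
print(len(batch))  # 8
```

In practice a frameworks-native sampler (e.g., a weighted sampler over concatenated datasets) would play this role; the sketch only shows the per-draw mixture idea.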
Pages: 10533-10545
Page count: 13
Related Papers
50 records in total
  • [1] Playing with 3D Imaging
    Waurzyniak, Patrick
    Kehoe, Ellen
    MANUFACTURING ENGINEERING, 2014, 152 (05): : 41 - +
  • [2] Playing Around with 3D modeling
    Mahoney, DP
    COMPUTER GRAPHICS WORLD, 2000, 23 (10) : 17 - 18
  • [3] Learning Human Mesh Recovery in 3D Scenes
    Shen, Zehong
    Cen, Zhi
    Peng, Sida
    Shuai, Qing
    Bao, Hujun
    Zhou, Xiaowei
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 17038 - 17047
  • [4] Observable Subspaces for 3D Human Motion Recovery
    Fossati, Andrea
    Salzmann, Mathieu
    Fua, Pascal
    CVPR: 2009 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-4, 2009, : 1137 - +
  • [5] 3D recovery of human gaze in natural environments
    Paletta, Lucas
    Santner, Katrin
    Fritz, Gerald
    Mayer, Heinz
    INTELLIGENT ROBOTS AND COMPUTER VISION XXX: ALGORITHMS AND TECHNIQUES, 2013, 8662
  • [6] A 3D shape descriptor for human pose recovery
    Gond, Laetitia
    Sayd, Patrick
    Chateau, Thierry
    Dhome, Michel
    ARTICULATED MOTION AND DEFORMABLE OBJECTS, PROCEEDINGS, 2008, 5098 : 370 - +
  • [7] Survey on 2D and 3D Human Pose Recovery
    Perez-Sala, Xavier
    Escalera, Sergio
    Angulo, Cecilio
    ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT, 2012, 248 : 101 - +
  • [8] Hypergraph Regularized Autoencoder for 3D Human Pose Recovery
    Hong, Chaoqun
    Yu, Jun
You, Jane
    Chen, Xuhui
    COMPUTER VISION, CCCV 2015, PT I, 2015, 546 : 66 - 75
  • [9] PostureHMR: Posture Transformation for 3D Human Mesh Recovery
    Song, Yu-Pei
    Wu, Xiao
    Yuan, Zhaoquan
    Qiao, Jian-Jun
    Peng, Qiang
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 9732 - 9741
  • [10] ReFit: Recurrent Fitting Network for 3D Human Recovery
    Wang, Yufu
    Daniilidis, Kostas
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 14598 - 14608