GAPS: Geometry-Aware, Physics-Based, Self-Supervised Neural Garment Draping

Cited by: 0
Authors
Chen, Ruochen [1 ]
Chen, Liming [1 ]
Parashar, Shaifali [1 ]
Affiliation
[1] Univ Lumiere Lyon 2, INSA Lyon, CNRS, LIRIS, UMR5205, Ecole Cent Lyon, Univ Claude Bernard Lyon 1, Lyon, France
Source
2024 INTERNATIONAL CONFERENCE ON 3D VISION, 3DV 2024 | 2024
DOI
10.1109/3DV62453.2024.00059
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Recent neural, physics-based modeling of garment deformations produces faster and more visually pleasing results than existing methods. The formulation, however, relies on material-specific parameters to control garment inextensibility, which leads to unrealistic results with physically implausible stretching. Oftentimes, the draped garment is pushed inside the body; this is corrected either by an expensive post-processing step, which adds further inconsistent stretching, or by deploying a separate training regime for each body type, which restricts scalability. Additionally, the flawed skinning process used by existing methods produces incorrect results on loose garments. In this paper, we introduce a geometrical constraint to the existing formulation that is collision-aware and imposes garment inextensibility wherever possible. We thus obtain realistic results in which draped clothes stretch only while covering bigger body regions. Furthermore, we propose a geometry-aware garment skinning method, based on a body-garment closeness measure, that works for all garment types, especially loose ones. Our code is publicly available at https://github.com/Simonhfls/GAPS.
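
To make the two geometric ideas in the abstract concrete, the PyTorch sketch below illustrates (a) an edge-length inextensibility penalty that is relaxed on colliding regions and (b) a distance-based body-garment closeness measure used to blend body skinning weights onto garment vertices. This is a minimal illustration only: the function names, the k-nearest-neighbour weighting, the sigma temperature, and the collision mask are assumptions and are not taken from the GAPS paper or its repository.

```python
import torch

def inextensibility_loss(verts, rest_verts, edges, collision_free):
    """Edge-length preservation penalty, relaxed on colliding regions.

    verts, rest_verts: (V, 3) draped / rest-pose garment vertices
    edges:             (E, 2) long tensor of garment edge indices
    collision_free:    (V,)   bool mask, True where the vertex lies outside the body
    """
    cur = (verts[edges[:, 0]] - verts[edges[:, 1]]).norm(dim=-1)
    rest = (rest_verts[edges[:, 0]] - rest_verts[edges[:, 1]]).norm(dim=-1)
    stretch = torch.relu(cur - rest) / rest            # penalize elongation only
    # Relax the constraint on edges touching collisions, so the garment is
    # allowed to stretch only while it has to cover a larger body region.
    mask = collision_free[edges[:, 0]] & collision_free[edges[:, 1]]
    return (stretch * mask.float()).mean()

def closeness_skinning_weights(garment_verts, body_verts, body_skin_weights,
                               k=16, sigma=0.05):
    """Blend body skinning weights onto garment vertices by closeness.

    garment_verts:     (G, 3) garment vertices in rest pose
    body_verts:        (B, 3) body vertices in rest pose
    body_skin_weights: (B, J) body linear-blend-skinning weights
    """
    d = torch.cdist(garment_verts, body_verts)          # (G, B) pairwise distances
    knn_d, knn_idx = d.topk(k, dim=-1, largest=False)   # k closest body vertices
    w = torch.softmax(-knn_d / sigma, dim=-1)           # closeness -> blend weights
    # Weighted average of the corresponding body skinning weights -> (G, J)
    return torch.einsum('gk,gkj->gj', w, body_skin_weights[knn_idx])
```

Because the softmax weights form a convex combination, each garment vertex receives valid blend-skinning weights from nearby body vertices, which is the kind of closeness-driven skinning the abstract describes for loose garments; the paper's actual formulation may differ.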
Pages: 116-125
Page count: 10