Dynamic Domain Adaptation for Single-view 3D Reconstruction

Cited by: 2
Authors:
Yang, Cong [1]
Xie, Housen [1]
Tian, Haihong [1]
Yu, Yuanlong [1]
Affiliation:
[1] AI Inst, Ecovacs Robot, Nanjing, Peoples R China
Keywords:
single-view 3D reconstruction; dynamic domain adaptation; GCN
DOI:
10.1109/IROS51168.2021.9636343
CLC classification:
TP [Automation Technology, Computer Technology]
Discipline code:
0812
Abstract:
Learning 3D object reconstruction from a single RGB image is a fundamental and extremely challenging problem for robots. Because acquiring labeled 3D shape representations for real-world data is time-consuming and expensive, synthetic image-shape pairs are widely used for 3D reconstruction. However, models trained on synthetic data sets do not perform equally well on real-world images. Existing methods use domain adaptation to bridge the gap between data sets, but they typically align only the global distribution. In contrast, this paper presents a dynamic domain adaptation (DDA) network that extracts domain-invariant image features for 3D reconstruction: the relative importance of the global and local distributions is weighted dynamically to reduce the discrepancy between synthetic and real-world data. In addition, although graph convolutional network (GCN) based mesh generation methods have achieved more impressive results than voxel-based and point cloud-based methods, the global context in a graph is not effectively exploited because of the limited receptive field of a GCN. This paper therefore proposes a multi-scale processing method for GCNs to further improve the performance of GCN-based 3D reconstruction. Experiments on both synthetic and real-world data sets demonstrate the effectiveness of the proposed methods.
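The core idea of weighting global versus local distribution alignment can be sketched as follows. This is a minimal illustrative example, not the paper's actual network: the function names (`mmd`, `dynamic_domain_loss`), the balance weight `omega`, and the simplified linear-kernel discrepancy are all assumptions standing in for the adversarial losses a DDA-style method would actually use. The global term aligns the marginal feature distributions of the two domains; the local terms align class-conditional distributions.

```python
import numpy as np

def mmd(x, y):
    """Simplified discrepancy: squared distance between feature means
    (a linear-kernel maximum mean discrepancy)."""
    return float(np.sum((x.mean(axis=0) - y.mean(axis=0)) ** 2))

def dynamic_domain_loss(src_feats, tgt_feats, src_labels, tgt_pseudo,
                        num_classes, omega):
    """Combine the global (marginal) discrepancy with the mean of the
    local (class-conditional) discrepancies; omega in [0, 1] dynamically
    balances the two terms."""
    global_d = mmd(src_feats, tgt_feats)
    local_ds = []
    for c in range(num_classes):
        s = src_feats[src_labels == c]          # source samples of class c
        t = tgt_feats[tgt_pseudo == c]          # target samples pseudo-labeled c
        if len(s) and len(t):
            local_ds.append(mmd(s, t))
    local_d = float(np.mean(local_ds)) if local_ds else 0.0
    return (1.0 - omega) * global_d + omega * local_d
```

In practice, `omega` would itself be estimated from the data during training (e.g., from the relative magnitudes of the global and local discrepancies) rather than fixed by hand.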
Pages: 3563-3570 (8 pages)
Related Papers (50 records; items [31]-[40] shown)
  • [31] Sym3DNet: Symmetric 3D Prior Network for Single-View 3D Reconstruction
    Siddique, Ashraf
    Lee, Seungkyu
    SENSORS, 2022, 22 (02)
  • [32] Single-View 3D Face Reconstruction via Cross-View Consistency Constraints
    Zhong Y.
    Pei Y.
    Li P.
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2024, 36 (04): 543-551
  • [33] Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision
    Yan, Xinchen
    Yang, Jimei
    Yumer, Ersin
    Guo, Yijie
    Lee, Honglak
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [34] Enhancing single-view 3D mesh reconstruction with the aid of implicit surface learning
    Fahim, George
    Amin, Khalid
    Zarif, Sameh
    IMAGE AND VISION COMPUTING, 2022, 119
  • [35] Single-view 3D Scene Reconstruction with High-fidelity Shape and Texture
    Chen, Yixin
    Ni, Junfeng
    Jiang, Nan
    Zhang, Yaowei
    Zhu, Yixin
    Huang, Siyuan
    2024 INTERNATIONAL CONFERENCE IN 3D VISION, 3DV 2024, 2024, : 1456 - 1467
  • [36] LIST: Learning Implicitly from Spatial Transformers for Single-View 3D Reconstruction
    Arshad, Mohammad Samiul
    Beksi, William J.
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 9287 - 9296
  • [37] Single-View 3D Reconstruction via Differentiable Rendering and Inverse Procedural Modeling
    Garifullin, Albert
    Maiorov, Nikolay
    Frolov, Vladimir
    Voloboy, Alexey
    SYMMETRY-BASEL, 2024, 16 (02)
  • [38] A new single-view 3D pantograph reconstruction aided by prior CAD model
    Sun, Tiecheng
    Liu, Guanghui
    Peng, Jianping
    Meng, Fanman
    Liu, Shuaicheng
    Zhu, Shuyuan
    MEASUREMENT, 2021, 181
  • [39] Single-View 3D Reconstruction Based on Gradient-Applied Weighted Loss
    Kim, Taehyeon
    Lee, Jiho
    Lee, Kyung-Taek
    Choe, Yoonsik
    JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY, 2024, 19 (07): 4523-4535
  • [40] A Dataset-Dispersion Perspective on Reconstruction Versus Recognition in Single-View 3D Reconstruction Networks
    Zhou, Yefan
    Shen, Yiru
    Yan, Yujun
    Feng, Chen
    Yang, Yaoqing
    2021 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2021), 2021, : 1331 - 1340