A point contextual transformer network for point cloud completion

Cited by: 1
Authors
Leng, Siyi [1 ,2 ,3 ]
Zhang, Zhenxin [1 ,2 ]
Zhang, Liqiang [4 ]
Affiliations
[1] Capital Normal Univ, Key Lab Informat Acquisit & Applicat 3D, MOE, Beijing 100048, Peoples R China
[2] Capital Normal Univ, Coll Resource Environm & Tourism, Beijing 100048, Peoples R China
[3] Xinjiang Normal Univ, Coll Geosci & Tourism, Urumqi 830054, Peoples R China
[4] Beijing Normal Univ, State Key Lab Remote Sensing Sci, Beijing 100875, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
Point cloud completion; Feature extraction; Point contextual transformer; Attention mechanism;
DOI
10.1016/j.eswa.2024.123672
CLC number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Point cloud completion is an essential task that recovers a complete point cloud from a partial observation to support downstream applications such as object detection and reconstruction. Existing point cloud completion networks primarily rely on large-scale datasets to learn the mapping between partial and complete shapes, and they often adopt a multi-stage strategy to progressively generate complete point clouds with finer details. However, these networks still suffer from under-utilization of shape priors and complex modelling frameworks. To address these issues, we propose a point contextual transformer (PCoT) for point cloud completion (PCoT-Net). The PCoT adaptively fuses static and dynamic point contextual information, allowing fine-grained local contextual features to be captured effectively. We then propose a one-stage network with a feature completion module that directly generates credible and detailed complete point clouds. Furthermore, we incorporate External Attention (EA) into the feature completion module; EA is lightweight and further improves the learning of complete features and the reconstruction of the complete point cloud. Extensive experiments on various datasets validate the effectiveness of our PCoT-based approach and the EA-enhanced feature completion module, which achieves superior quantitative performance in Chamfer Distance (CD) and F1-Score. Compared with PMP-Net++ (Wen et al., 2022), our method improves the F1-Score by 0.010, 0.022, and 0.026 and reduces the CD by 0.16, 0.95, and 1.74 on the MVP, CRN, and ScanNet datasets, respectively, while producing visually superior results with more fine-grained details and smoother reconstructed surfaces.
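For context, the abstract names two generic building blocks: External Attention (a lightweight attention that replaces pairwise self-attention with two small learnable linear memories; Guo et al., 2021) and the Chamfer Distance used as the evaluation metric. The sketch below is a minimal PyTorch illustration of both under assumed shapes; the names (ExternalAttention, memory_size, chamfer_distance) are illustrative and do not reproduce the authors' PCoT-Net implementation, and published CD scores may additionally use squared distances or dataset-specific scaling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalAttention(nn.Module):
    """External Attention: attend to a small learnable external memory via two
    linear maps and double normalization, so cost is linear in the point count."""
    def __init__(self, dim: int, memory_size: int = 64):
        super().__init__()
        self.mk = nn.Linear(dim, memory_size, bias=False)  # external key memory
        self.mv = nn.Linear(memory_size, dim, bias=False)  # external value memory

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim) per-point features
        attn = self.mk(x)                                      # (B, N, S)
        attn = F.softmax(attn, dim=1)                          # normalize over the N points
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)  # L1-normalize over memory slots
        return self.mv(attn)                                   # (B, N, dim)

def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Symmetric L2 Chamfer Distance between point sets p: (B, N, 3) and q: (B, M, 3)."""
    d = torch.cdist(p, q)                                      # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)

if __name__ == "__main__":
    feats = torch.randn(2, 2048, 256)              # per-point features from an encoder
    print(ExternalAttention(256)(feats).shape)     # torch.Size([2, 2048, 256])
    partial, complete = torch.rand(2, 2048, 3), torch.rand(2, 16384, 3)
    print(chamfer_distance(partial, complete))     # per-sample CD values
```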
Pages: 13
Related papers
50 records in total
  • [21] Cyclic Global Guiding Network for Point Cloud Completion
    Wei, Ming
    Zhu, Ming
    Zhang, Yaoyuan
    Sun, Jiaqi
    Wang, Jiarong
    REMOTE SENSING, 2022, 14 (14)
  • [22] A cascaded graph convolutional network for point cloud completion
    Wang, Luhan
    Li, Jun
    Guo, Shangwei
    Han, Shaokun
    VISUAL COMPUTER, 2025, 41 (01) : 659 - 674
  • [23] Are All Point Clouds Suitable for Completion? Weakly Supervised Quality Evaluation Network for Point Cloud Completion
    Shi, Jieqi
    Li, Peiliang
    Chen, Xiaozhi
    Shen, Shaojie
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA, 2023, : 2796 - 2802
  • [24] TopologyFormer: structure transformer assisted topology reconstruction for point cloud completion
    Jiang, Zhenwei
    Gao, Chenqiang
    Li, Pengcheng
    Liu, Chuandong
    Liu, Fangcen
    Zhu, Lijie
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (26) : 68743 - 68771
  • [25] SeedFormer: Patch Seeds Based Point Cloud Completion with Upsample Transformer
    Zhou, Haoran
    Cao, Yun
    Chu, Wenqing
    Zhu, Junwei
    Lu, Tong
    Tai, Ying
    Wang, Chengjie
    COMPUTER VISION - ECCV 2022, PT III, 2022, 13663 : 416 - 432
  • [26] Point cloud completion by dynamic transformer with adaptive neighbourhood feature fusion
    Liu, Xinpu
    Xu, Guoquan
    Xu, Ke
    Wan, Jianwei
    Ma, Yanxin
    IET COMPUTER VISION, 2022, 16 (07) : 619 - 631
  • [27] Dynamic clustering transformer network for point cloud segmentation
    Lu, Dening
    Zhou, Jun
    Gao, Kyle
    Du, Jing
    Xu, Linlin
    Li, Jonathan
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2024, 128
  • [28] MPCT: Multiscale Point Cloud Transformer With a Residual Network
    Wu, Yue
    Liu, Jiaming
    Gong, Maoguo
    Liu, Zhixiao
    Miao, Qiguang
    Ma, Wenping
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 3505 - 3516
  • [29] SparseFormer: Sparse transformer network for point cloud classification
    Wang, Yong
    Liu, Yangyang
    Zhou, Pengbo
    Geng, Guohua
    Zhang, Qi
    COMPUTERS & GRAPHICS-UK, 2023, 116 : 24 - 32
  • [30] Transformer-based Point Cloud Generation Network
    Xu, Rui
    Hui, Le
    Han, Yuehui
    Qian, Jianjun
    Xie, Jin
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 4169 - 4177