Weakly supervised 3D point cloud semantic segmentation for architectural heritage using teacher-guided consistency and contrast learning

Cited by: 0
Authors
Huang, Shuowen [1 ]
Hu, Qingwu [1 ]
Ai, Mingyao [1 ]
Zhao, Pengcheng [1 ]
Li, Jian [2 ]
Cui, Hao [2 ]
Wang, Shaohua [1 ]
Affiliations
[1] Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan 430079, Peoples R China
[2] Zhengzhou Univ, Sch Geosci & Technol, Zhengzhou 450001, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Point cloud; Architectural heritage; 3D semantic segmentation; Weakly supervised;
DOI
10.1016/j.autcon.2024.105831
Chinese Library Classification
TU [Building Science];
Discipline Code
0813;
Abstract
Point cloud semantic segmentation is significant for managing and protecting architectural heritage. Currently, fully supervised methods require large amounts of annotated data, while weakly supervised methods are difficult to transfer directly to architectural heritage. This paper proposes an end-to-end teacher-guided consistency and contrastive learning weakly supervised (TCCWS) framework for architectural heritage point cloud semantic segmentation, which fully exploits limited labeled points to train the network. Specifically, a teacher-student framework is designed to generate pseudo labels, and a pseudo-label dividing module is proposed to distinguish reliable from ambiguous point sets. Based on this division, a consistency and contrastive learning strategy is designed to fully utilize supervision signals to learn the features of point clouds. The framework is tested on the ArCH dataset and a self-collected point cloud, demonstrating that the proposed method achieves effective semantic segmentation of architectural heritage using only 0.1% of annotated points.
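The teacher-student pseudo-labeling loop described in the abstract can be sketched in greatly simplified NumPy form. This is an illustrative assumption, not the paper's implementation: the function names, the 0.9 confidence threshold, and the EMA momentum of 0.99 are all hypothetical choices; the paper's actual dividing criterion and loss design may differ.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ema_update(teacher_w, student_w, momentum=0.99):
    """Teacher weights track an exponential moving average of the student's."""
    return momentum * teacher_w + (1.0 - momentum) * student_w

def divide_pseudo_labels(teacher_probs, threshold=0.9):
    """Split teacher predictions into reliable and ambiguous point sets.

    Points whose top class probability clears the threshold are 'reliable';
    the rest are 'ambiguous' and excluded from the hard pseudo-label loss.
    """
    confidence = teacher_probs.max(axis=1)
    reliable = confidence >= threshold
    pseudo = teacher_probs.argmax(axis=1)
    return pseudo, reliable

def consistency_loss(student_probs, teacher_probs, reliable_mask):
    """Mean squared difference between student and teacher distributions,
    computed only on the reliable point set."""
    diff = (student_probs - teacher_probs) ** 2
    return diff[reliable_mask].mean()

# Toy example: 4 points, 3 classes; row 2 is a deliberately ambiguous point.
teacher_logits = np.array([[5.0, 0.0, 0.0],
                           [0.0, 4.0, 0.0],
                           [1.0, 1.0, 1.1],
                           [0.0, 0.0, 6.0]])
student_logits = teacher_logits + 0.1 * np.random.default_rng(0).normal(size=(4, 3))

t_probs = softmax(teacher_logits)
s_probs = softmax(student_logits)
pseudo, reliable = divide_pseudo_labels(t_probs, threshold=0.9)
loss = consistency_loss(s_probs, t_probs, reliable)
```

In this toy run, three of the four points are confidently labeled by the teacher, while the near-uniform point is flagged as ambiguous and withheld from the consistency term, mirroring the reliable/ambiguous split the abstract describes.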
Pages: 14
Related Papers
50 records in total
  • [41] Semantic Context Encoding for Accurate 3D Point Cloud Segmentation
    Liu, Hao
    Guo, Yulan
    Ma, Yanni
    Lei, Yinjie
    Wen, Gongjian
    IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23 : 2045 - 2055
  • [42] Semantic and Geometric Labeling for Enhanced 3D Point Cloud Segmentation
    Perez-Perez, Yeritza
    Golparvar-Fard, Mani
    El-Rayes, Khaled
    CONSTRUCTION RESEARCH CONGRESS 2016: OLD AND NEW CONSTRUCTION TECHNOLOGIES CONVERGE IN HISTORIC SAN JUAN, 2016, : 2542 - 2552
  • [43] Transformer Enhanced Hierarchical 3D Point Cloud Semantic Segmentation
    Liu, Yaohua
    Ma, Yue
    Xu, Min
    2ND INTERNATIONAL CONFERENCE ON APPLIED MATHEMATICS, MODELLING, AND INTELLIGENT COMPUTING (CAMMIC 2022), 2022, 12259
  • [44] Investigate Indistinguishable Points in Semantic Segmentation of 3D Point Cloud
    Xu, Mingye
    Zhou, Zhipeng
    Zhang, Junhao
    Qiao, Yu
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 3047 - 3055
  • [45] Novel Class Discovery for 3D Point Cloud Semantic Segmentation
    Riz, Luigi
    Saltori, Cristiano
    Ricci, Elisa
    Poiesi, Fabio
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 9393 - 9402
  • [46] Subdivision of Adjacent Areas for 3D Point Cloud Semantic Segmentation
    Xu, Haixia
    Hu, Kaiyu
    Xu, Yuting
    Zhu, Jiang
    SIGNAL IMAGE AND VIDEO PROCESSING, 2025, 19 (01)
  • [47] Local Transformer Network on 3D Point Cloud Semantic Segmentation
    Wang, Zijun
    Wang, Yun
    An, Lifeng
    Liu, Jian
    Liu, Haiyang
    INFORMATION, 2022, 13 (04)
  • [48] Large-Scale Supervised Learning for 3D Point Cloud Labeling: Semantic3D.net
    Hackel, Timo
    Wegner, Jan D.
    Savinov, Nikolay
    Ladicky, Lubor
    Schindler, Konrad
    Pollefeys, Marc
    PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING, 2018, 84 (05): : 297 - 308
  • [49] Active self-training for weakly supervised 3D scene semantic segmentation
    Liu, Gengxin
    van Kaick, Oliver
    Huang, Hui
    Hu, Ruizhen
    COMPUTATIONAL VISUAL MEDIA, 2024, 10 (06) : 1063 - 1078