Self-Supervised Pre-Training for 3-D Roof Reconstruction on LiDAR Data

Cited by: 1
Authors
Yang, Hongxin [1 ]
Huang, Shangfeng [1 ]
Wang, Ruisheng [1 ,2 ]
Wang, Xin [1 ]
Affiliations
[1] Univ Calgary, Dept Geomatics Engn, Calgary, AB T2N 1N4, Canada
[2] Shenzhen Univ, Sch Architecture & Urban Planning, Shenzhen 518060, Peoples R China
Keywords
Corner detection; Training; Task analysis; edge prediction; roof reconstruction; self-supervised learning
DOI
10.1109/LGRS.2024.3362733
CLC (Chinese Library Classification)
P3 [Geophysics]; P59 [Geochemistry]
Discipline codes
0708; 070902
Abstract
Reconstructing building roofs from airborne light detection and ranging (LiDAR) point clouds is of significant importance in photogrammetry. This letter proposes a novel approach for 3-D reconstruction of real-world building roofs in Estonia, employing a two-stage self-supervised pre-training architecture that transforms 3-D roof point clouds into wireframe models. We use a self-supervised pre-training framework incorporating a purpose-designed, efficient self-attention mechanism to generate point-wise features. We then develop a corner-detection module, which classifies corner points and regresses their coordinates, and an edge-prediction module, which determines the optimal edge selections, to construct the final wireframe model. Evaluated on real-world roof datasets, our approach achieves corner and edge precision of 83% and 78%, respectively. In addition, fine-tuning the self-supervised pre-trained model with varying ratios of labeled data, in particular with only 50% of the data labeled, attains superior performance of 84% corner and 85% edge precision.
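Purely as an illustrative sketch of the two-stage pipeline described in the abstract (not the authors' implementation), the flow from point-wise self-attention features to corner detection and edge prediction might look as follows; all function names, thresholds, and the single-head NumPy attention are assumptions:

```python
import numpy as np

def self_attention(feats):
    """Single-head scaled dot-product self-attention over per-point features.

    feats: (N, d) array; returns attended (N, d) point-wise features.
    """
    d = feats.shape[1]
    scores = feats @ feats.T / np.sqrt(d)        # (N, N) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # row-wise softmax
    return attn @ feats

def detect_corners(points, corner_scores, offsets, thresh=0.5):
    """Corner detection: keep points classified as corners (score > thresh)
    and refine their coordinates with regressed offsets."""
    mask = corner_scores > thresh
    return points[mask] + offsets[mask]

def predict_edges(corners, edge_scores, thresh=0.5):
    """Edge prediction: one score per candidate corner pair (i < j);
    retain pairs whose score exceeds the threshold."""
    edges, k, n = [], 0, len(corners)
    for i in range(n):
        for j in range(i + 1, n):
            if edge_scores[k] > thresh:
                edges.append((i, j))
            k += 1
    return edges
```

In a learned system the corner scores, offsets, and edge scores would come from trained heads on top of the pre-trained features; here they are stand-in arrays, and the selected corner indices together with the retained pairs form the wireframe.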
Pages: 1-5
Page count: 5
Related Papers
50 items total
  • [21] Self-supervised VICReg pre-training for Brugada ECG detection
    Ronan, Robert
    Tarabanis, Constantine
    Chinitz, Larry
    Jankelson, Lior
    SCIENTIFIC REPORTS, 2025, 15 (01)
  • [22] A Self-Supervised Pre-Training Method for Chinese Spelling Correction
    Su J.
    Yu S.
    Hong X.
    Huanan Ligong Daxue Xuebao/Journal of South China University of Technology (Natural Science), 2023, 51 (09): 90-98
  • [23] Self-supervised pre-training on industrial time-series
    Biggio, Luca
    Kastanis, Iason
    2021 8TH SWISS CONFERENCE ON DATA SCIENCE, SDS, 2021: 56-57
  • [24] Self-supervised Pre-training for Semantic Segmentation in an Indoor Scene
    Shrestha, Sulabh
    Li, Yimeng
    Kosecka, Jana
    2024 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS, WACVW 2024, 2024: 625-635
  • [25] DiT: Self-supervised Pre-training for Document Image Transformer
    Li, Junlong
    Xu, Yiheng
    Lv, Tengchao
    Cui, Lei
    Zhang, Cha
    Wei, Furu
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022: 3530-3539
  • [26] Masked Feature Prediction for Self-Supervised Visual Pre-Training
    Wei, Chen
    Fan, Haoqi
    Xie, Saining
    Wu, Chao-Yuan
    Yuille, Alan
    Feichtenhofer, Christoph
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022: 14648-14658
  • [27] FALL DETECTION USING SELF-SUPERVISED PRE-TRAINING MODEL
    Yhdego, Haben
    Audette, Michel
    Paolini, Christopher
    PROCEEDINGS OF THE 2022 ANNUAL MODELING AND SIMULATION CONFERENCE (ANNSIM'22), 2022: 361-371
  • [28] CDS: Cross-Domain Self-supervised Pre-training
    Kim, Donghyun
    Saito, Kuniaki
    Oh, Tae-Hyun
    Plummer, Bryan A.
    Sclaroff, Stan
    Saenko, Kate
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021: 9103-9112
  • [29] SPAKT: A Self-Supervised Pre-TrAining Method for Knowledge Tracing
    Ma, Yuling
    Han, Peng
    Qiao, Huiyan
    Cui, Chaoran
    Yin, Yilong
    Yu, Dehu
    IEEE ACCESS, 2022, 10: 72145-72154
  • [30] Correlational Image Modeling for Self-Supervised Visual Pre-Training
    Li, Wei
    Xie, Jiahao
    Loy, Chen Change
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 15105-15115