SG-LPR: Semantic-Guided LiDAR-Based Place Recognition

Times Cited: 1
Authors
Jiang, Weizhong [1]
Xue, Hanzhang [1,2]
Si, Shubin [1,3]
Min, Chen [4]
Xiao, Liang [1]
Nie, Yiming [1]
Dai, Bin [1]
Affiliations
[1] Def Innovat Inst, Unmanned Syst Technol Res Ctr, Beijing 100071, Peoples R China
[2] Natl Univ Def Technol, Test Ctr, Xian 710106, Peoples R China
[3] Harbin Engn Univ, Coll Intelligent Syst Sci & Engn, Harbin 150001, Peoples R China
[4] Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
Keywords
LiDAR-based place recognition; semantic-guided; auxiliary task; swin transformer; U-Net; SCAN CONTEXT;
DOI
10.3390/electronics13224532
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline Classification Code
0812;
Abstract
Place recognition plays a crucial role in tasks such as loop closure detection and re-localization in robotic navigation. As a high-level representation of a scene, semantics enables models to effectively distinguish geometrically similar places, thereby enhancing their robustness to environmental changes. Unlike most existing semantic-based LiDAR place recognition (LPR) methods, which adopt a multi-stage and relatively segregated data-processing and storage pipeline, we propose a novel end-to-end LPR model guided by semantic information: SG-LPR. This model introduces a semantic segmentation auxiliary task that guides the model to autonomously capture high-level semantic information from the scene and implicitly integrates these features into the main LPR task, providing a unified "segmentation-while-describing" framework and avoiding additional intermediate data-processing and storage steps. Moreover, the semantic segmentation auxiliary task operates only during model training and therefore adds no time overhead during the testing phase. The model also combines the advantages of Swin Transformer and U-Net to address the shortcomings of current semantic-based LPR methods in capturing global contextual information and extracting fine-grained features. Extensive experiments conducted on multiple sequences from the KITTI and NCLT datasets validate the effectiveness, robustness, and generalization ability of our proposed method. Our approach achieves notable performance improvements over state-of-the-art methods.
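The "segmentation-while-describing" idea in the abstract, where an auxiliary segmentation loss shapes the features only at training time, can be sketched as a standard multi-task loss. This is an illustrative sketch, not the authors' code; the weight `lambda_seg` and both loss values are hypothetical placeholders.

```python
def joint_loss(loss_place, loss_seg, lambda_seg=0.5, training=True):
    """Weighted multi-task loss for auxiliary-task training.

    During training, the place-recognition loss is combined with an
    auxiliary semantic-segmentation loss. At test time only the
    descriptor branch runs, so the auxiliary head adds no inference cost.
    """
    if training:
        return loss_place + lambda_seg * loss_seg
    return loss_place  # inference: the segmentation head is inactive


# Hypothetical loss values, for illustration only:
train_loss = joint_loss(1.2, 0.8)                  # combines both terms
test_loss = joint_loss(1.2, 0.8, training=False)   # place loss alone
```

The key design point from the abstract is that the gradient of the segmentation term flows through the shared backbone during training, while the segmentation head itself is simply dropped at inference.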
Pages: 21
Related Papers
(50 records)
  • [31] Comparison of camera-based and 3D LiDAR-based place recognition across weather conditions
    Zywanowski, Kamil
    Banaszczyk, Adam
    Nowicki, Michal R.
    16TH IEEE INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV 2020), 2020, : 886 - 891
  • [32] Semantic-Guided Relation Propagation Network for Few-shot Action Recognition
    Wang, Xiao
    Ye, Weirong
    Qi, Zhongang
    Zhao, Xun
    Wang, Guangge
    Shan, Ying
    Wang, Hanzi
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 816 - 825
  • [33] Lidar-Based Tree Recognition and Platform Localization in Orchards
    Underwood, James P.
    Jagbrant, Gustav
    Nieto, Juan I.
    Sukkarieh, Salah
    JOURNAL OF FIELD ROBOTICS, 2015, 32 (08) : 1056 - 1074
  • [34] FreSCo: Frequency-Domain Scan Context for LiDAR-based Place Recognition with Translation and Rotation Invariance
    Fan, Yongzhi
    Du, Xin
    Luo, Lun
    Shen, Jizhong
    2022 17TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV), 2022, : 576 - 583
  • [35] SuMa++: Efficient LiDAR-based Semantic SLAM
    Chen, Xieyuanli
    Milioto, Andres
    Palazzolo, Emanuele
    Giguere, Philippe
    Behley, Jens
    Stachniss, Cyrill
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 4530 - 4537
  • [36] LiDAR-Based SLAM under Semantic Constraints in Dynamic Environments
    Wang, Weiqi
    You, Xiong
    Zhang, Xin
    Chen, Lingyu
    Zhang, Lantian
    Liu, Xu
    REMOTE SENSING, 2021, 13 (18)
  • [37] MixedSCNet: LiDAR-Based Place Recognition Using Multi-Channel Scan Context Neural Network
    Si, Yan
    Han, Wenyi
    Yu, Die
    Bao, Baizhong
    Duan, Jian
    Zhan, Xiaobin
    Shi, Tielin
    ELECTRONICS, 2024, 13 (02)
  • [38] IS-CAT: Intensity-Spatial Cross-Attention Transformer for LiDAR-Based Place Recognition
    Joo, Hyeong-Jun
    Kim, Jaeho
    SENSORS, 2024, 24 (02)
  • [39] CCTNet: A Circular Convolutional Transformer Network for LiDAR-Based Place Recognition Handling Movable Objects Occlusion
    Wang, Gang
    Zhu, Chaoran
    Xu, Qian
    Zhang, Tongzhou
    Zhang, Hai
    Fan, Xiaopeng
    Hu, Jue
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (04) : 3276 - 3289
  • [40] A Semantic-Guided LiDAR-Vision Fusion Approach for Moving Objects Segmentation and State Estimation
    Chen, Songming
    Sun, Haixin
    Fremont, Vincent
    2022 IEEE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2022, : 4308 - 4313