DFD-SLAM: Visual SLAM with Deep Features in Dynamic Environment

Cited by: 1
Authors
Qian, Wei [1 ]
Peng, Jiansheng [1 ,2 ,3 ,4 ]
Zhang, Hongyu [1 ]
Affiliations
[1] Guangxi Univ Sci & Technol, Coll Automat, Liuzhou 545000, Peoples R China
[2] Hechi Univ, Dept Artificial Intelligence & Mfg, Hechi 547000, Peoples R China
[3] Hechi Univ, Educ Dept Guangxi Zhuang Autonomous Reg, Key Lab AI & Informat Proc, Hechi 547000, Peoples R China
[4] Hechi Univ, Sch Chem & Bioengn, Guangxi Key Lab Sericulture Ecol & Appl Intelligen, Hechi 546300, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, No. 11
Funding
National Natural Science Foundation of China
Keywords
visual SLAM; deep features; dynamic SLAM; YOLOv8; HFNet; VERSATILE;
DOI
10.3390/app14114949
Chinese Library Classification
O6 [Chemistry]
Discipline Code
0703
Abstract
Visual SLAM is a key technology for mobile robots. Existing feature-based visual SLAM techniques suffer from degraded tracking and loop-closure performance in complex environments. We propose the DFD-SLAM system to ensure outstanding accuracy and robustness across diverse environments. Building on the ORB-SLAM3 system, we first replace the original feature extraction component with the HFNet network and introduce a frame rotation estimation method, which determines the rotation angle between consecutive frames to select superior local descriptors. We further replace the bag-of-words approach with CNN-extracted global descriptors. We then develop a precise removal strategy that combines semantic information from YOLOv8 to accurately eliminate dynamic feature points. On the TUM-VI dataset, DFD-SLAM improves over ORB-SLAM3 by 29.24% on the corridor sequences, 40.07% on the magistrale sequences, 28.75% on the room sequences, and 35.26% on the slides sequences. On the TUM-RGBD dataset, DFD-SLAM achieves a 91.57% improvement over ORB-SLAM3 in highly dynamic scenarios. This demonstrates the effectiveness of our approach.
Pages: 21
Related Papers
50 records
  • [21] DOE-SLAM: Dynamic Object Enhanced Visual SLAM
    Hu, Xiao
    Lang, Jochen
    SENSORS, 2021, 21 (09)
  • [22] Dynamic-SLAM: Semantic monocular visual localization and mapping based on deep learning in dynamic environment
    Xiao, Linhui
    Wang, Jinge
    Qiu, Xiaosong
    Rong, Zheng
    Zou, Xudong
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2019, 117 : 1 - 16
  • [23] OVD-SLAM: An Online Visual SLAM for Dynamic Environments
    He, Jiaming
    Li, Mingrui
    Wang, Yangyang
    Wang, Hongyu
    IEEE SENSORS JOURNAL, 2023, 23 (12) : 13210 - 13219
  • [24] SOF-SLAM: A Semantic Visual SLAM for Dynamic Environments
    Cui, Linyan
    Ma, Chaowei
    IEEE ACCESS, 2019, 7 : 166528 - 166539
  • [25] PS-SLAM: A Visual SLAM for Semantic Mapping in Dynamic Outdoor Environment Using Panoptic Segmentation
    Li, Gang
    Cai, Jinxiang
    Huang, Chen
    Luo, Hao
    Yu, Jian
    IEEE ACCESS, 2025, 13 : 46534 - 46545
  • [26] Visual SLAM Algorithm Based on Weighted Static in Dynamic Environment
    Li Yong
    Wu Haibo
    Li Wan
    Li Dongze
    LASER & OPTOELECTRONICS PROGRESS, 2024, 61 (04)
  • [27] Radar SLAM using visual features
    Callmer, Jonas
    Tornqvist, David
    Gustafsson, Fredrik
    Svensson, Henrik
    Carlbom, Pelle
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING, 2011
  • [28] Good Features to Track for Visual SLAM
    Zhang, Guangcong
    Vela, Patricio A.
    2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 1373 - 1382
  • [29] Visual SLAM with line and corner features
    Jeong, Woo Yeon
    Lee, Kyoung Mu
    2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-12, 2006, : 2570 - 2575
  • [30] PPS-SLAM: Dynamic Visual SLAM with a Precise Pruning Strategy
    Peng, Jiansheng
    Qian, Wei
    Zhang, Hongyu
    CMC-COMPUTERS MATERIALS & CONTINUA, 2025, 82 (02): : 2849 - 2868