Improving RGB-D SLAM in dynamic environments using semantic aided segmentation

Cited by: 9
Authors
Kenye, Lhilo [1,2]
Kala, Rahul [1]
Affiliations
[1] Indian Inst Informat Technol, Ctr Intelligent Robot, Allahabad, Prayagraj, India
[2] NavAjna Technol Pvt Ltd, Hyderabad, India
Keywords
simultaneous localization and mapping; object recognition; dynamic SLAM; background detection; dynamic object filtering; computer vision; SIMULTANEOUS LOCALIZATION; MOTION REMOVAL; VISUAL SLAM;
DOI
10.1017/S0263574721001521
Chinese Library Classification
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
Most conventional simultaneous localization and mapping (SLAM) approaches assume a static working environment. In a highly dynamic environment, this assumption exposes the limitations of SLAM algorithms that lack modules dedicated to handling dynamic objects, even when optimization techniques are included. This work addresses such environments and reduces the effect of dynamic objects on a SLAM algorithm by separating features belonging to dynamic objects from those on the static background using a generated binary mask image. While the features in the static region are used for performing SLAM, the features in non-static segments are reused rather than discarded. The approach employs a deep neural network (DNN)-based object detection module to obtain bounding boxes and then generates a lower-resolution binary mask image by running a depth-first search over the detected semantics, segmenting the foreground from the static background. In addition, features belonging to dynamic objects are tracked across consecutive frames to obtain better masking consistency. The proposed approach is tested on both a publicly available dataset and a self-collected dataset, covering indoor and outdoor environments. The experimental results show that removing features belonging to dynamic objects can significantly improve the overall output of a SLAM algorithm in a dynamic scene.
Pages: 2065-2090
Page count: 26
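The abstract outlines a pipeline: a DNN detector yields bounding boxes, a depth-first search over a downsampled depth image grows a binary foreground mask from each box, and the mask splits image features into a static set (used for SLAM) and a dynamic set (tracked across frames). The following Python code is a minimal sketch of how such a pipeline might look; the detector output format, dynamic-class list, depth tolerance, and downsampling factor are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of semantic-aided mask generation: DNN bounding boxes
# seed a depth-first search (flood fill) over a downsampled depth image,
# producing a binary mask that separates dynamic foreground from static
# background. All thresholds and class names below are assumptions.
import numpy as np
import cv2

DYNAMIC_CLASSES = {"person", "car", "bicycle"}  # assumed movable classes
DEPTH_TOL = 0.15   # assumed: max depth gap (metres) to the seed pixel
SCALE = 4          # assumed: mask built at 1/4 resolution for speed


def build_dynamic_mask(depth_m, detections, shape):
    """Return a full-resolution binary mask (1 = dynamic foreground).

    depth_m    : HxW float32 depth image in metres
    detections : list of (class_name, (x1, y1, x2, y2)) boxes from any
                 off-the-shelf DNN detector (format assumed)
    shape      : (H, W) of the original image
    """
    small = cv2.resize(depth_m, (shape[1] // SCALE, shape[0] // SCALE),
                       interpolation=cv2.INTER_NEAREST)
    mask = np.zeros_like(small, dtype=np.uint8)
    h, w = small.shape

    for cls, (x1, y1, x2, y2) in detections:
        if cls not in DYNAMIC_CLASSES:
            continue
        # Seed the DFS at the box centre, scaled and clamped to the small image.
        sx = min(((x1 + x2) // 2) // SCALE, w - 1)
        sy = min(((y1 + y2) // 2) // SCALE, h - 1)
        seed_d = small[sy, sx]
        if seed_d <= 0:          # invalid depth reading, skip this box
            continue
        stack = [(sy, sx)]
        while stack:             # iterative depth-first search (flood fill)
            y, x = stack.pop()
            if not (0 <= y < h and 0 <= x < w) or mask[y, x]:
                continue
            d = small[y, x]
            if d <= 0 or abs(d - seed_d) > DEPTH_TOL:
                continue         # depth jump: background boundary reached
            mask[y, x] = 1
            stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])

    # Upsample the low-resolution mask back to full image size.
    return cv2.resize(mask, (shape[1], shape[0]),
                      interpolation=cv2.INTER_NEAREST)


def split_features(keypoints, mask):
    """Partition keypoints into static (fed to SLAM) and dynamic sets."""
    static, dynamic = [], []
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        (dynamic if mask[y, x] else static).append(kp)
    return static, dynamic
```

Running the DFS on a downsampled depth image keeps the flood fill cheap, and the mask is upsampled afterwards. As the abstract describes, the static keypoints would feed the SLAM front end, while the dynamic keypoints can be tracked into consecutive frames to keep the mask consistent over time rather than being discarded.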
Related papers (50 in total)
  • [21] Robust RGB-D SLAM in Dynamic Environments for Autonomous Vehicles
    Ji, Tete
    Yuan, Shenghai
    Xie, Lihua
    2022 17TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV), 2022: 665-671
  • [22] RGB-D SLAM with moving object tracking in dynamic environments
    Dai, Weichen
    Zhang, Yu
    Zheng, Yuxin
    Sun, Donglei
    Li, Ping
    IET CYBER-SYSTEMS AND ROBOTICS, 2021, 3(4): 281-291
  • [23] Motion removal for reliable RGB-D SLAM in dynamic environments
    Sun, Yuxiang
    Liu, Ming
    Meng, Max Q.-H.
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2018, 108: 115-128
  • [24] PoseFusion: Dense RGB-D SLAM in Dynamic Human Environments
    Zhang, Tianwei
    Nakamura, Yoshihiko
    PROCEEDINGS OF THE 2018 INTERNATIONAL SYMPOSIUM ON EXPERIMENTAL ROBOTICS, 2020, 11: 772-780
  • [25] DRG-SLAM: A Semantic RGB-D SLAM using Geometric Features for Indoor Dynamic Scene
    Wang, Yanan
    Xu, Kun
    Tian, Yaobin
    Ding, Xilun
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022: 1352-1359
  • [26] StaticFusion: Background Reconstruction for Dense RGB-D SLAM in Dynamic Environments
    Scona, Raluca
    Jaimez, Mariano
    Petillot, Yvan R.
    Fallon, Maurice
    Cremers, Daniel
    2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2018: 3849-3856
  • [27] RGB-D SLAM Method Based on Enhanced Segmentation in Dynamic Environment
    Wang H.
    Lu D.
    Fang B.
    Jiqiren/Robot, 2022, 44(4): 418-430
  • [28] Linear RGB-D SLAM for Structured Environments
    Joo, Kyungdon
    Kim, Pyojin
    Hebert, Martial
    Kweon, In So
    Kim, Hyoun Jin
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44(11): 8403-8419
  • [29] RGB-D Object SLAM Using Quadrics for Indoor Environments
    Liao, Ziwei
    Wang, Wei
    Qi, Xianyu
    Zhang, Xiaoyu
    SENSORS, 2020, 20(18): 1-34
  • [30] RGB-D SLAM Algorithm in Indoor Dynamic Environments Based on Gridding Segmentation and Dual Map Coupling
    Ai Q.
    Wang W.
    Liu G.
    Jiqiren/Robot, 2022, 44(4): 431-442