RGB-D Based Visual SLAM Algorithm for Indoor Crowd Environment

Cited by: 3
Authors
Li, Jianfeng [1 ,2 ,3 ]
Dai, Juan [1 ,2 ,3 ]
Su, Zhong [1 ,2 ,3 ]
Zhu, Cui [4 ]
Affiliations
[1] Beijing Informat Sci & Technol Univ, Beijing Key Lab High Dynam Nav Technol, Beijing 100192, Peoples R China
[2] Minist Educ, Key Lab Modern Measurement & Control Technol, Beijing 100192, Peoples R China
[3] Beijing Informat Sci & Technol Univ, Sch Automat, Beijing 100192, Peoples R China
[4] Beijing Informat Sci & Technol Univ, Sch Informat & Commun Engn, Beijing 100101, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Visual SLAM; Indoor environment; Object detection; Dynamic environment;
DOI
10.1007/s10846-023-02046-3
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Most current research on dynamic visual Simultaneous Localization and Mapping (SLAM) systems focuses on scenes where static objects occupy most of the environment. In densely populated indoor environments, however, the movement of the crowd can lead to the loss of feature information, diminishing the system's robustness and accuracy. This paper proposes a visual SLAM algorithm for dense crowd environments that combines the ORB-SLAM2 framework with an RGB-D camera. First, we introduce a dedicated object detection thread and improve the detection network's coverage of crowded environments, yielding a 41.5% increase in average accuracy. We also find that feature points inside a detection box that do not belong to the person, such as visible background, are mistakenly deleted, so we propose an algorithm based on standard-deviation fitting that filters out only the dynamic features. Finally, the system is evaluated on the TUM and Bonn RGB-D dynamic datasets and compared with ORB-SLAM2 and other state-of-the-art dynamic visual SLAM methods. The results show that, relative to ORB-SLAM2, our pose estimation error is reduced by at least 93.60% in highly dynamic environments and by at least 97.11% on the Bonn RGB-D dynamic dataset, and that our method performs comparably to other recent dynamic visual SLAM methods.
Pages: 14
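
The abstract describes the standard-deviation fitting step only at a high level: feature points inside a person detection box should be culled only if they actually lie on the person, not when they belong to background that happens to fall inside the box. The Python sketch below illustrates one plausible reading, under the assumption that the RGB-D depth values inside the box are fitted with a mean and standard deviation so that points within a k-sigma band of the (person-dominated) depth are treated as dynamic. The function name filter_box_features, the k threshold, and the use of depth as the fitted quantity are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def filter_box_features(keypoints, depths, box, k=1.0):
        """Split features inside a person detection box into dynamic
        (on the person) and static (background visible through the box).
        Hypothetical sketch of a standard-deviation fit on depth; not
        the paper's code.

        keypoints : (N, 2) float array of pixel coordinates (u, v)
        depths    : (N,) float array of RGB-D depths in metres
        box       : (u_min, v_min, u_max, v_max) detection box
        k         : sigma multiplier for the acceptance band (assumed)
        """
        u, v = keypoints[:, 0], keypoints[:, 1]
        in_box = (u >= box[0]) & (v >= box[1]) & (u <= box[2]) & (v <= box[3])
        dynamic = np.zeros(len(depths), dtype=bool)
        if in_box.any():
            d = depths[in_box]
            # The person dominates the box, so the fitted mean
            # approximates the person's depth.
            mu, sigma = d.mean(), d.std()
            # Points whose depth lies within k*sigma of the fit are
            # treated as lying on the person and marked dynamic;
            # farther (background) points stay static.
            dynamic[in_box] = np.abs(d - mu) <= k * sigma
        keep = ~dynamic  # static features retained for tracking
        return keep, dynamic

    # Example: three features in the box (person at ~2 m, background at
    # 5 m) plus one feature outside the box. Only the two person points
    # are marked dynamic; keep == [False, False, True, True].
    kps = np.array([[120., 80.], [130., 90.], [125., 85.], [300., 200.]])
    dep = np.array([2.0, 2.1, 5.0, 4.8])
    keep, dyn = filter_box_features(kps, dep, box=(100, 60, 200, 180))

Under this reading, only depth-consistent points on the detected person are discarded, which matches the abstract's observation that features other than the human inside the detection box were previously deleted by mistake.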