Review of Research on Multi-robot Visual Simultaneous Localization and Mapping

Cited by: 0
Authors
Yin H. [1 ]
Pei S. [1 ]
Xu L. [1 ]
Huang B. [1 ]
Affiliation
[1] State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin
Source
Jixie Gongcheng Xuebao/Journal of Mechanical Engineering | 2022 / Vol. 58 / No. 11
Keywords
active SLAM; collaborative mapping; global association; multi-robot systems; visual SLAM;
DOI
10.3901/JME.2022.11.011
Abstract
Simultaneous localization and mapping (SLAM) is the foundation and key enabling technology for the collaborative operation of multi-robot systems (MRS) in complex, dynamic, and GPS-denied environments, and it is of great significance for improving the autonomy and intelligence of robots. Visual sensors are widely used in SLAM owing to their high resolution, rich information, and low cost. After a brief review of visual SLAM and the application requirements of this research field, the essence and advantages of multi-robot visual SLAM (MR-VSLAM) are first summarized, and the corresponding scientific problems are organized into three aspects: how to achieve global association in visual SLAM, how to allocate robot resources to execute SLAM-driven collaborative mapping strategies, and how to implement robust active SLAM. Second, for each core problem, existing methods are comprehensively reviewed, their respective advantages and disadvantages are discussed, and the open problems in current MR-VSLAM key technologies are analyzed. Finally, the hot issues and development trends of MR-VSLAM technology are summarized and discussed. © 2022 Editorial Office of Chinese Journal of Mechanical Engineering. All rights reserved.
Pages: 11-36 (25 pages)