Pose Invariant Topological Memory for Visual Navigation

Cited by: 2
Authors: Taniguchi, Asuto [1]; Sasaki, Fumihiro [1]; Yamashina, Ryota [1]
Affiliation: [1] Ricoh Co., Ltd., Tokyo, Japan
DOI: 10.1109/ICCV48922.2021.01510
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Planning for visual navigation using topological memory, a memory graph consisting of nodes and edges, has recently been studied extensively. The nodes correspond to past observations of a robot, and the edges represent the reachability predicted by a neural network (NN). Most prior methods, however, often fail to predict the reachability when the robot takes different poses, i.e., the direction the robot faces, at nearby positions. This is because these methods observe first-person-view images, which change significantly when the robot changes its pose, so it is fundamentally difficult to predict reachability correctly from them. In this paper, we propose pose invariant topological memory (POINT) to address the problem. POINT observes omnidirectional images and predicts reachability using a spherical convolutional NN, whose rotation-invariance property enables planning regardless of the robot's pose. Additionally, we train the NN by contrastive learning with data augmentation to enable POINT to plan with robustness to changes in environmental conditions, such as lighting conditions and the presence of unseen objects. Our experimental results show that POINT outperforms conventional methods under both the same and different environmental conditions. In addition, the results with the KITTI-360 dataset show that POINT is more applicable to real-world environments than conventional methods.
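The abstract describes planning over a topological memory whose nodes are past observations and whose edges are added when a learned predictor judges one observation reachable from another. Below is a minimal Python sketch of that graph-building and planning step, assuming a trained reachability predictor; it is not the authors' implementation. The names `build_topological_memory`, `plan`, the `reachability` callable, and the 0.5 threshold are illustrative assumptions, whereas the paper's actual predictor is a spherical convolutional NN over omnidirectional images.

```python
# Minimal sketch (not the authors' code) of planning over a topological memory:
# nodes are past observations, edges connect pairs whose predicted reachability
# exceeds a threshold. `reachability` is a hypothetical stand-in for the
# spherical-CNN predictor described in the abstract.
from collections import deque
from typing import Callable, List, Sequence


def build_topological_memory(
    observations: Sequence,                             # past (omnidirectional) observations
    reachability: Callable[[object, object], float],    # assumed to return a score in [0, 1]
    threshold: float = 0.5,                              # illustrative cutoff, not from the paper
) -> dict:
    """Return an adjacency list: node index -> list of reachable node indices."""
    graph = {i: [] for i in range(len(observations))}
    for i in range(len(observations)):
        for j in range(i + 1, len(observations)):
            if reachability(observations[i], observations[j]) >= threshold:
                graph[i].append(j)
                graph[j].append(i)
    return graph


def plan(graph: dict, start: int, goal: int) -> List[int]:
    """Breadth-first search for the shortest node sequence from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return []  # no path found


if __name__ == "__main__":
    # Toy usage: 1-D positions stand in for observations so the sketch runs
    # without a learned model; two positions are "reachable" when close enough.
    obs = [0.0, 1.0, 2.0, 5.0, 6.0]
    reach = lambda a, b: 1.0 if abs(a - b) <= 1.5 else 0.0
    g = build_topological_memory(obs, reach)
    print(plan(g, 0, 2))  # -> [0, 1, 2]
```

The sketch treats edges as unweighted and plans with BFS; a practical system might instead weight each edge by the predicted reachability score and run a shortest-path search over those weights.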
Pages: 15364 - 15373 (10 pages)