MonoNav: MAV Navigation via Monocular Depth Estimation and Reconstruction

Citations: 0
Authors:
Simon, Nathaniel [1]
Majumdar, Anirudha [1]
Affiliation:
[1] Princeton Univ, Dept Mech & Aerosp Engn, Princeton, NJ 08544 USA
Keywords:
MAV; monocular depth estimation; 3D reconstruction; collision avoidance
DOI:
10.1007/978-3-031-63596-0_37
Chinese Library Classification (CLC):
TP [Automation Technology, Computer Technology]
Discipline code:
0812
Abstract:
A major challenge in deploying the smallest of Micro Aerial Vehicle (MAV) platforms (≤ 100 g) is their inability to carry sensors that provide high-resolution metric depth information (e.g., LiDAR or stereo cameras). Current systems rely on end-to-end learning or heuristic approaches that directly map images to control inputs, and struggle to fly fast in unknown environments. In this work, we ask the following question: using only a monocular camera, optical odometry, and off-board computation, can we create metrically accurate maps to leverage the powerful path planning and navigation approaches employed by larger state-of-the-art robotic systems to achieve robust autonomy in unknown environments? We present MonoNav: a fast 3D reconstruction and navigation stack for MAVs that leverages recent advances in depth prediction neural networks to enable metrically accurate 3D scene reconstruction from a stream of monocular images and poses. MonoNav uses off-the-shelf pre-trained monocular depth estimation and fusion techniques to construct a map, then searches over motion primitives to plan a collision-free trajectory to the goal. In extensive hardware experiments, we demonstrate how MonoNav enables the Crazyflie (a 37 g MAV) to navigate fast (0.5 m/s) in cluttered indoor environments. We evaluate MonoNav against a state-of-the-art end-to-end approach, and find that the collision rate in navigation is significantly reduced (by a factor of 4). This increased safety comes at the cost of conservatism in terms of a 22% reduction in goal completion.
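Illustrative sketch (not from the paper): the pipeline the abstract describes, i.e., predicting metric depth from single monocular images and fusing the predictions into a 3D map given camera poses, can be approximated with an off-the-shelf metric depth network and TSDF fusion. The depth model (ZoeDepth), the fusion library (Open3D), and the intrinsics and voxel parameters below are assumptions chosen for illustration, not necessarily the authors' exact implementation.

    # Minimal sketch: monocular metric depth -> TSDF fusion into a metric map.
    # ZoeDepth and Open3D are assumed stand-ins; intrinsics and voxel sizes are placeholders.
    import numpy as np
    import open3d as o3d
    import torch
    from PIL import Image

    # Pre-trained metric monocular depth estimator (loaded via torch.hub).
    zoe = torch.hub.load("isl-org/ZoeDepth", "ZoeD_NK", pretrained=True).eval()

    # Placeholder pinhole intrinsics for a 640x480 image stream.
    intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 460.0, 460.0, 320.0, 240.0)

    # Scalable TSDF volume that accumulates per-frame depth into one metric reconstruction.
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.05,   # 5 cm voxels
        sdf_trunc=0.15,      # truncation distance in meters
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8,
    )

    def integrate_frame(rgb_path, cam_to_world):
        """Predict metric depth for one image and fuse it, given a 4x4 camera-to-world pose."""
        img = Image.open(rgb_path).convert("RGB")
        depth_m = zoe.infer_pil(img)  # HxW numpy array, depth in meters

        color = o3d.geometry.Image(np.asarray(img))
        depth = o3d.geometry.Image((depth_m * 1000.0).astype(np.uint16))  # millimeters
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, depth_scale=1000.0, depth_trunc=5.0,
            convert_rgb_to_intensity=False)
        # Open3D's integrate() expects the world-to-camera extrinsic.
        volume.integrate(rgbd, intrinsic, np.linalg.inv(cam_to_world))

    # After fusing frames, extract geometry that a planner can query for collisions:
    # cloud = volume.extract_point_cloud()

Under these assumptions, planning reduces to scoring a small library of motion primitives against the fused reconstruction and executing the collision-free primitive that makes the most progress toward the goal, matching the motion-primitive search described in the abstract.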
Pages: 415-426
Number of pages: 12
Related papers (50 in total; first 10 shown):
  • [1] Pan, Yongzhou; Wang, Jingjing; Chen, Fengnan; Lin, Zheng; Zhang, Siyao; Yang, Tao. How Does Monocular Depth Estimation Work for MAV Navigation in the Real World? Proceedings of 2022 International Conference on Autonomous Unmanned Systems (ICAUS 2022), 2023, 1010: 3763-3771.
  • [2] Chen, Tian; Ding, Wendong; Zhang, Dapeng; Liu, Xilong. Monocular Dense Reconstruction by Depth Estimation Fusion. Proceedings of the 30th Chinese Control and Decision Conference (2018 CCDC), 2018: 4460-4465.
  • [3] Jun, Jinyoung; Lee, Jae-Han; Lee, Chul; Kim, Chang-Su. Monocular Human Depth Estimation via Pose Estimation. IEEE Access, 2021, 9: 151444-151457.
  • [4] Ye, Xinchen; Ou, Yuxiang; Wu, Biao; Xu, Rui; Li, Haojie. Self-Supervised Monocular Depth Estimation From Videos via Adaptive Reconstruction Constraints. IEEE Transactions on Circuits and Systems for Video Technology, 2025, 35 (03): 2161-2172.
  • [5] Caldas, Kenny A. Q.; Benevides, Joao R. S.; Inoue, Roberto S.; Terra, Marco H. Autonomous Robust Navigation System for MAV Based on Monocular Cameras. 2022 International Conference on Unmanned Aircraft Systems (ICUAS), 2022: 1343-1349.
  • [6] Mendes, Raul de Queiroz; Ribeiro, Eduardo Godinho; Rosa, Nicolas dos Santos; Grassi, Valdir, Jr. On Deep Learning Techniques to Boost Monocular Depth Estimation for Autonomous Navigation. Robotics and Autonomous Systems, 2021, 136.
  • [7] Miangoleh, S. Mahdi H.; Reddy, Mahesh; Aksoy, Yagiz. Scale-Invariant Monocular Depth Estimation via SSI Depth. Proceedings of SIGGRAPH 2024 Conference Papers, 2024.
  • [8] Jiang, Guolai; Jin, Shaokun; Ou, Yongsheng; Zhou, Shoujun. Depth Estimation of a Deformable Object via a Monocular Camera. Applied Sciences-Basel, 2019, 9 (07).
  • [9] Liu, Yunbiao; Chen, Chunyi. MODE: Monocular Omnidirectional Depth Estimation via Consistent Depth Fusion. Image and Vision Computing, 2023, 136.
  • [10] Kim, Ju Ho; Ko, Kwang-Lim; Ha, Le Thanh Le; Jung, Seung-Won. Monocular Depth Estimation of Old Photos via Collaboration of Monocular and Stereo Networks. IEEE Access, 2023, 11: 11675-11684.