Capturing Uncertainty in Monocular Depth Estimation: Towards Fuzzy Voxel Maps

Cited: 0
Authors:
Buck, Andrew R. [1 ]
Anderson, Derek T. [1 ]
Camaioni, Raub [2 ]
Akers, Jack [1 ]
Luke, Robert H., III [2 ]
Keller, James M. [1 ]
Affiliations:
[1] Univ Missouri, Elect Engn & Comp Sci EECS Dept, Columbia, MO 65212 USA
[2] US Army DEVCOM C5ISR Ctr, Ft Belvoir, VA USA
Keywords:
fuzzy voxel map; structure from motion; monocular depth estimation;
DOI:
10.1109/FUZZ52849.2023.10309749
CLC number:
TP18 [Artificial Intelligence Theory];
Subject classification codes:
081104 ; 0812 ; 0835 ; 1405 ;
Abstract:
Monocular depth estimation methods using structure from motion (SfM) have become increasingly capable of reconstructing 3D representations from a sequence of 2D images. In the context of unmanned aerial vehicles (UAVs), these techniques can be used to create an occupancy map of an environment, which is useful for planning and navigation. OctoMap and a recent improvement, UFOMap, are commonly used hierarchical representations that represent the value of a voxel cell as the probability of being occupied. Although this captures some uncertainty in the map and allows for dynamic updates, it does not fully utilize the known characteristics of the sensor and SfM algorithm, and it can lead to unnecessarily noisy results. In this paper, we propose an approach to assign a weight to each point in a point cloud update based on camera extrinsics and SfM confidence. The weighted points are then added to the voxel map in a way that more closely resembles a degree of confidence rather than a probability. In this way, we take the first steps toward designing a fuzzy voxel map that is more robust in noisy situations and captures useful uncertainty to help with UAV applications. We demonstrate our approach on simulated scenarios using Unreal Engine and AirSim.
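The update scheme described in the abstract can be sketched in code. The following is an illustrative outline, not the authors' implementation: the weighting function, range falloff, and the use of an algebraic (fuzzy) sum in place of a log-odds occupancy update are all assumptions made for the example, and all names (`point_weight`, `insert_cloud`, etc.) are hypothetical.

```python
# Hedged sketch: weight each SfM point by its confidence and a simple
# camera-distance falloff, then fold it into a voxel grid with a fuzzy
# union (algebraic sum) so cell values act as degrees of confidence.
import numpy as np


def point_weight(point, cam_pos, sfm_conf, max_range=30.0):
    """Combine SfM confidence with a linear range falloff (illustrative)."""
    dist = np.linalg.norm(point - cam_pos)
    range_term = max(0.0, 1.0 - dist / max_range)
    return sfm_conf * range_term


def update_voxel(voxels, key, w):
    """Fuzzy union (algebraic sum): c' = c + w - c*w, stays in [0, 1]."""
    c = voxels.get(key, 0.0)
    voxels[key] = c + w - c * w


def insert_cloud(voxels, points, confs, cam_pos, res=0.5):
    """Add one weighted point-cloud update to a sparse voxel map."""
    for p, s in zip(points, confs):
        p = np.asarray(p, dtype=float)
        w = point_weight(p, cam_pos, s)
        if w > 0.0:
            key = tuple(np.floor(p / res).astype(int))
            update_voxel(voxels, key, w)
    return voxels
```

With this aggregation, repeated low-confidence observations raise a cell's value gradually toward 1 without it ever overshooting, which is one way a confidence-style map can stay robust to noisy SfM points.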
Pages: 8
Related papers (50 total):
  • [41] Towards Better Data Exploitation in Self-Supervised Monocular Depth Estimation
    Liu, Jinfeng
    Kong, Lingtong
    Yang, Jie
    Liu, Wei
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (01) : 763 - 770
  • [42] TSUDepth: Exploring temporal symmetry-based uncertainty for unsupervised monocular depth estimation
    Zhu, Yufan
    Ren, Rui
    Dong, Weisheng
    Li, Xin
    Shi, Guangming
    NEUROCOMPUTING, 2024, 600
  • [43] EVALUATING MONOCULAR DEPTH ESTIMATION METHODS
    Padkan, N.
    Trybala, P.
    Battisti, R.
    Remondino, F.
    Bergeret, C.
    2ND GEOBENCH WORKSHOP ON EVALUATION AND BENCHMARKING OF SENSORS, SYSTEMS AND GEOSPATIAL DATA IN PHOTOGRAMMETRY AND REMOTE SENSING, VOL. 48-1, 2023, : 137 - 144
  • [44] MONOCULAR DEPTH ESTIMATION IN FOREST ENVIRONMENTS
    Hristova, H.
    Abegg, M.
    Fischer, C.
    Rehush, N.
    XXIV ISPRS CONGRESS IMAGING TODAY, FORESEEING TOMORROW, COMMISSION II, 2022, 43-B2 : 1017 - 1023
  • [45] DepthNet: A Monocular Depth Estimation Framework
    Anunay
    Pankaj
    Dhiman, Chhavi
    2021 7TH INTERNATIONAL CONFERENCE ON ENGINEERING AND EMERGING TECHNOLOGIES (ICEET 2021), 2021, : 495 - 500
  • [46] Monocular depth estimation with enhanced edge
    Wang Q.
    Wang Q.
    Cheng K.
    Liu Z.
    Huazhong Keji Daxue Xuebao (Ziran Kexue Ban)/Journal of Huazhong University of Science and Technology (Natural Science Edition), 2022, 50 (03): : 36 - 42
  • [47] Monocular Depth Estimation for Equirectangular Videos
    Fraser, Helmi
    Wang, Sen
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 5293 - 5299
  • [48] Monocular Depth Estimation with Sharp Boundary
    Yang, Xin
    Chang, Qingling
    Xu, Shiting
    Liu, Xinlin
    Cui, Yan
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 136 (01): : 573 - 592
  • [49] Aperture Supervision for Monocular Depth Estimation
    Srinivasan, Pratul P.
    Garg, Rahul
    Wadhwa, Neal
    Ng, Ren
    Barron, Jonathan T.
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 6393 - 6401
  • [50] Monocular Depth Estimation for Mobile Device
    Lee, Yongsik
    Lee, Seungjae
    Ko, Jong Gook
    2021 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS-ASIA (ICCE-ASIA), 2021,