Capturing Uncertainty in Monocular Depth Estimation: Towards Fuzzy Voxel Maps

Cited by: 0
Authors
Buck, Andrew R. [1 ]
Anderson, Derek T. [1 ]
Camaioni, Raub [2 ]
Akers, Jack [1 ]
Luke, Robert H., III [2 ]
Keller, James M. [1 ]
Affiliations
[1] Univ Missouri, Elect Engn & Comp Sci EECS Dept, Columbia, MO 65212 USA
[2] US Army DEVCOM C5ISR Ctr, Ft Belvoir, VA USA
Keywords
fuzzy voxel map; structure from motion; monocular depth estimation;
DOI
10.1109/FUZZ52849.2023.10309749
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Monocular depth estimation methods using structure from motion (SfM) have become increasingly capable of reconstructing 3D representations from a sequence of 2D images. In the context of unmanned aerial vehicles (UAVs), these techniques can be used to create an occupancy map of an environment, which is useful for planning and navigation. OctoMap and a recent improvement, UFOMap, are commonly used hierarchical representations that encode the value of each voxel cell as its probability of being occupied. Although this captures some uncertainty in the map and allows for dynamic updates, it does not fully utilize the known characteristics of the sensor and SfM algorithm, and it can lead to unnecessarily noisy results. In this paper, we propose an approach that assigns a weight to each point in a point cloud update based on the camera extrinsics and SfM confidence. The weighted points are then added to the voxel map in a way that more closely resembles a degree of confidence rather than a probability. In this way, we take the first steps toward designing a fuzzy voxel map that is more robust in noisy situations and captures useful uncertainty to help with UAV applications. We demonstrate our approach on simulated scenarios using Unreal Engine and AirSim.
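
As a rough illustration of the idea summarized in the abstract, the Python sketch below shows one plausible way to weight point-cloud updates by camera distance and SfM confidence and to fuse them into a voxel map as bounded degrees of confidence rather than log-odds occupancy probabilities. The weighting function, the parameter names (e.g. max_range), and the probabilistic-sum fusion rule are assumptions made for illustration only; they are not taken from the paper.

    # Illustrative sketch, not the authors' implementation.
    import numpy as np

    def point_weight(point, cam_position, sfm_confidence, max_range=30.0):
        """Weight a 3D point by its distance from the camera and its SfM confidence.

        Assumption: confidence decays linearly with range and is scaled by the
        per-point SfM confidence in [0, 1].
        """
        dist = np.linalg.norm(np.asarray(point) - np.asarray(cam_position))
        range_factor = max(0.0, 1.0 - dist / max_range)
        return float(np.clip(sfm_confidence * range_factor, 0.0, 1.0))

    class FuzzyVoxelMap:
        """Minimal voxel grid storing a degree of confidence per cell.

        Unlike a log-odds occupancy update, a new observation is fused with a
        bounded t-conorm-style rule so the cell value stays a confidence in [0, 1].
        """
        def __init__(self, resolution=0.5):
            self.resolution = resolution
            self.cells = {}  # voxel index (tuple of ints) -> confidence in [0, 1]

        def _index(self, point):
            return tuple(np.floor(np.asarray(point) / self.resolution).astype(int))

        def update(self, point, weight):
            idx = self._index(point)
            prev = self.cells.get(idx, 0.0)
            # Probabilistic sum: repeated weak evidence raises confidence
            # gradually but never exceeds 1.
            self.cells[idx] = prev + weight - prev * weight

    # Example: fuse one weighted point-cloud update from a simulated camera pose.
    if __name__ == "__main__":
        fmap = FuzzyVoxelMap(resolution=0.5)
        cam_pos = np.array([0.0, 0.0, 10.0])
        points = np.array([[2.0, 1.0, 0.0], [15.0, -3.0, 0.5]])
        confidences = [0.9, 0.4]  # e.g. reprojection-error-based SfM confidences
        for p, c in zip(points, confidences):
            fmap.update(p, point_weight(p, cam_pos, c))
        print(fmap.cells)

The bounded fusion rule is one way to realize the contrast the abstract draws between a degree of confidence and an occupancy probability: low-weight, far-range observations nudge a cell only slightly, while repeated confident observations saturate it toward 1.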
Pages: 8