Learning Feature Decomposition for Domain Adaptive Monocular Depth Estimation

Cited by: 8
Authors
Lo, Shao-Yuan [1 ]
Wang, Wei [2 ]
Thomas, Jim [2 ]
Zheng, Jingjing [2 ]
Patel, Vishal M. [1 ]
Kuo, Cheng-Hao [2 ]
Affiliations
[1] Johns Hopkins Univ, Dept Elect & Comp Engn, Baltimore, MD 21218 USA
[2] Amazon Lab126, Sunnyvale, CA USA
DOI: 10.1109/IROS47612.2022.9981342
Chinese Library Classification (CLC): TP [Automation Technology; Computer Technology]
Subject Classification Code: 0812
Abstract
Monocular depth estimation (MDE) has attracted intense study due to its low cost and critical functions for robotic tasks such as localization, mapping and obstacle detection. Supervised approaches have achieved great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations that are expensive to acquire. Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, relaxing the constraint of supervised learning. However, existing UDA approaches may not fully bridge the domain gap between datasets because of domain shift. We believe better domain alignment can be achieved via well-designed feature decomposition. In this paper, we propose a novel UDA method for MDE, referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components. LFDA attempts to align only the content component, since it exhibits a smaller domain gap. Meanwhile, it excludes the source-specific style component from training the primary task. Furthermore, LFDA uses separate feature distribution estimations to further bridge the domain gap. Extensive experiments on three domain-adaptive MDE scenarios show that the proposed method achieves superior accuracy and lower computational cost compared to state-of-the-art approaches.
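The abstract sketches LFDA's core mechanism: decompose features into a content component (shared across domains, hence aligned) and a style component (source-specific, hence kept away from the depth task). The following minimal PyTorch sketch illustrates that decompose-then-align idea under stated assumptions; every name here (FeatureDecomposer, content_alignment_loss, depth_head) is illustrative rather than taken from the paper, and a simple moment-matching loss stands in for whatever alignment objective LFDA actually uses.

```python
# Illustrative sketch of content/style feature decomposition for UDA depth
# estimation. NOT the authors' code: module names and losses are assumptions
# inferred from the abstract.
import torch
import torch.nn as nn

class FeatureDecomposer(nn.Module):
    """Splits an image into a spatial content code (assumed to transfer
    across domains) and a global style code (assumed domain-specific)."""
    def __init__(self, in_ch: int = 3, feat_ch: int = 64):
        super().__init__()
        self.content = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.style = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse style to a global vector
        )

    def forward(self, x):
        return self.content(x), self.style(x).flatten(1)

def content_alignment_loss(feat_src, feat_tgt):
    """Aligns only the content component across domains. A first/second
    moment match is used here as a stand-in for a learned critic."""
    mean_gap = (feat_src.mean(dim=(0, 2, 3)) - feat_tgt.mean(dim=(0, 2, 3))).pow(2).sum()
    std_gap = (feat_src.std(dim=(0, 2, 3)) - feat_tgt.std(dim=(0, 2, 3))).pow(2).sum()
    return mean_gap + std_gap

decomposer = FeatureDecomposer()
depth_head = nn.Conv2d(64, 1, 3, padding=1)  # depth head sees content only

x_src = torch.randn(2, 3, 64, 64)  # labeled source batch
x_tgt = torch.randn(2, 3, 64, 64)  # unlabeled target batch

c_src, s_src = decomposer(x_src)
c_tgt, _ = decomposer(x_tgt)

align_loss = content_alignment_loss(c_src, c_tgt)  # align content component
depth_pred = depth_head(c_src)  # primary task trained without style features
```

Note that the style code s_src is deliberately unused by depth_head, mirroring the abstract's point that the source-specific style component is excluded from the primary task; the paper's "separate feature distribution estimations" (for example, per-domain normalization statistics, an assumption) are omitted here for brevity.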
Pages: 8376-8382 (7 pages)
Related Papers (10 of 50 shown)
  • [1] Yang, Delong; Zhong, Xunyu; Lin, Lixiong; Peng, Xiafu. An Adaptive Unsupervised Learning Framework for Monocular Depth Estimation. IEEE ACCESS, 2019, 7: 148142-148151.
  • [2] Jun, Jinyoung; Lee, Jae-Han; Lee, Chul; Kim, Chang-Su. Depth Map Decomposition for Monocular Depth Estimation. COMPUTER VISION - ECCV 2022, PT II, 2022, 13662: 18-34.
  • [3] Mancini, Michele; Costante, Gabriele; Valigi, Paolo; Ciarfuglia, Thomas A.; Delmerico, Jeffrey; Scaramuzza, Davide. Toward Domain Independence for Learning-Based Monocular Depth Estimation. IEEE ROBOTICS AND AUTOMATION LETTERS, 2017, 2(3): 1778-1785.
  • [4] Zhan, Huangying; Garg, Ravi; Weerasekera, Chamara Saroj; Li, Kejie; Agarwal, Harsh; Reid, Ian. Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018: 340-349.
  • [5] Xu, Huihui; Li, Fei. Multilevel Pyramid Network for Monocular Depth Estimation Based on Feature Refinement and Adaptive Fusion. ELECTRONICS, 2022, 11(16).
  • [6] Keshavan, Jishnu; Escobar-Alvarez, Hector; Humbert, J. Sean. An adaptive observer framework for accurate feature depth estimation using an uncalibrated monocular camera. CONTROL ENGINEERING PRACTICE, 2016, 46: 59-65.
  • [7] Choi, Hyesong; Lee, Hunsang; Kim, Sunkyung; Kim, Sunok; Kim, Seungryong; Sohn, Kwanghoon; Min, Dongbo. Adaptive confidence thresholding for monocular depth estimation. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021: 12788-12798.
  • [8] Naderi, Taher; Sadovnik, Amir; Hayward, Jason; Qi, Hairong. Monocular Depth Estimation with Adaptive Geometric Attention. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022: 617-627.
  • [9] Song, Jihun; Hyun, Yoonsuk. Contrastive Feature Bin Loss for Monocular Depth Estimation. IEEE ACCESS, 2025, 13: 49584-49596.
  • [10] Yin, M. Efficient monocular depth estimation with transfer feature enhancement. International Journal of Circuits, Systems and Signal Processing, 2021, 15: 1165-1173.