Test-Time Synthetic-to-Real Adaptive Depth Estimation

Cited by: 0
Authors
Yi, Eojindl [1 ]
Kim, Junmo [1 ]
Affiliations
[1] Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
Funding
National Research Foundation, Singapore;
Keywords
DOI
10.1109/ICRA48891.2023.10160773
CLC Classification
TP [Automation and computer technology];
Discipline Code
0812;
Abstract
Is it possible for a neural network adapted from the synthetic to the realistic domain for single-image depth estimation to truly generalize to real-world data? The resulting adapted model will generalize only to the realistic-domain dataset, which reflects only a small portion of the true, real world. As a result, the network must still cope with the potential domain shift between the realistic-domain dataset and real-world data. A viable alternative is to design the model to continuously adapt to the distribution of the data it receives at test time. In this paper, we propose a depth estimation method that is capable of adapting to domain shift at test time. Our method adapts to the unseen test-time domain by updating the network with our proposed objective functions. Following prior work, we reduce the entropy of the current prediction for refinement and adaptation. We further propose a Logit Order Enforcement loss that prevents the network from drifting toward wrong solutions, which can result from reducing that entropy alone. Qualitative and quantitative results show the effectiveness of our method. Our method reduces the dependency on training data by 5.8x on average, while achieving performance comparable to state-of-the-art unsupervised domain adaptation (UDA) and domain generalization (DG) methods on the KITTI dataset.
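The entropy-reduction objective the abstract mentions can be illustrated with a minimal sketch. This is not the authors' implementation: the discretized depth bins, learning rate, and hand-derived gradient below are illustrative assumptions, and the paper's Logit Order Enforcement loss is omitted. The sketch only shows how a gradient step on the prediction entropy of a softmax over logits sharpens the prediction at test time.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return -np.sum(p * np.log(p + 1e-12))

def entropy_min_step(z, lr=0.5):
    """One gradient-descent step on H(softmax(z)) w.r.t. the logits.

    For H = -sum_j p_j log p_j with p = softmax(z), the gradient is
    dH/dz_i = -p_i * (log p_i + H), so subtracting lr * grad lowers H.
    """
    p = softmax(z)
    H = entropy(p)
    grad = -p * (np.log(p + 1e-12) + H)
    return z - lr * grad

# Hypothetical logits over discretized depth bins for one pixel.
z = np.array([1.0, 1.2, 0.8, 0.5])
h_before = entropy(softmax(z))
for _ in range(20):
    z = entropy_min_step(z)
h_after = entropy(softmax(z))
# h_after < h_before: the prediction over depth bins becomes sharper.
```

Applied alone, such entropy minimization can collapse toward degenerate, overconfident predictions, which is the failure mode the paper's additional Logit Order Enforcement loss is designed to prevent.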
Pages: 4938-4944
Page count: 7