MAMo: Leveraging Memory and Attention for Monocular Video Depth Estimation

Cited by: 4
Authors
Yasarla, Rajeev [1 ]
Cai, Hong [1 ]
Jeong, Jisoo [1 ]
Shi, Yunxiao [1 ]
Garrepalli, Risheek [1 ]
Porikli, Fatih [1 ]
Affiliations
[1] Qualcomm AI Res, San Diego, CA 92121 USA
DOI
10.1109/ICCV51070.2023.00804
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose MAMo, a novel memory and attention framework for monocular video depth estimation. MAMo can augment any single-image depth estimation network into a video depth estimation model, enabling it to take advantage of temporal information to predict more accurate depth. In MAMo, we augment the model with a memory that aids depth prediction as the model streams through the video. Specifically, the memory stores learned visual and displacement tokens from previous time instances, which allows the depth network to cross-reference relevant features from the past when predicting depth on the current frame. We introduce a novel scheme to continuously update the memory, optimizing it to keep tokens that correspond to both past and present visual information. We adopt an attention-based approach to process the memory features: we first learn the spatio-temporal relations among the visual and displacement memory tokens using a self-attention module, and then aggregate the output of self-attention with the current visual features through cross-attention. The cross-attended features are finally passed to a decoder to predict depth on the current frame. Through extensive experiments on several benchmarks, including KITTI, NYU-Depth V2, and DDAD, we show that MAMo consistently improves monocular depth estimation networks and sets new state-of-the-art (SOTA) accuracy. Notably, our MAMo video depth estimation achieves higher accuracy with lower latency compared to SOTA cost-volume-based video depth models.
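As a rough illustration of the pipeline the abstract describes, the sketch below stores visual and displacement tokens from past frames, runs self-attention over the stored tokens, and cross-attends them with the current frame's features before they would reach a depth decoder. This is a minimal sketch, not the authors' implementation: the class name MemoryAttentionBlock, the FIFO memory update (the paper's update scheme is learned), and all dimensions and hyperparameters are assumptions.

```python
# Hedged sketch of MAMo's memory-and-attention flow, per the abstract.
# All names, shapes, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class MemoryAttentionBlock(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8, memory_len: int = 4):
        super().__init__()
        self.memory_len = memory_len
        # Self-attention over the stored visual + displacement memory tokens,
        # learning spatio-temporal relations among them.
        self.memory_self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Cross-attention: current visual features query the attended memory.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.memory = []  # list of (visual_tokens, displacement_tokens) per past frame

    def update_memory(self, visual_tokens, displacement_tokens):
        # Simplification: a FIFO queue of the last `memory_len` frames stands in
        # for the paper's learned update scheme, which keeps tokens consistent
        # with both past and present visual information.
        self.memory.append((visual_tokens.detach(), displacement_tokens.detach()))
        self.memory = self.memory[-self.memory_len:]

    def forward(self, curr_visual_tokens):
        # curr_visual_tokens: (B, N, dim) features of the current frame.
        if not self.memory:
            return curr_visual_tokens  # first frame: no past context yet
        # Concatenate all stored visual and displacement tokens along the
        # token dimension: (B, T * 2N, dim) for T remembered frames.
        mem = torch.cat([torch.cat(pair, dim=1) for pair in self.memory], dim=1)
        # Learn relations among memory tokens via self-attention.
        mem_attended, _ = self.memory_self_attn(mem, mem, mem)
        # Aggregate attended memory into the current visual features; in a
        # full model, `fused` would be passed to the depth decoder.
        fused, _ = self.cross_attn(curr_visual_tokens, mem_attended, mem_attended)
        return fused


# Usage example with hypothetical shapes, streaming two frames:
block = MemoryAttentionBlock(dim=256, num_heads=8)
v0 = torch.randn(1, 100, 256)  # frame-0 visual tokens
d0 = torch.randn(1, 100, 256)  # frame-0 displacement tokens
block.update_memory(v0, d0)
v1 = torch.randn(1, 100, 256)  # frame-1 visual tokens
fused = block(v1)              # (1, 100, 256) memory-enhanced features
```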
Pages: 8720-8730
Number of pages: 11