CNNapsule: A Lightweight Network with Fusion Features for Monocular Depth Estimation

Cited by: 1
Authors
Wang, Yinchu [1 ]
Zhu, Haijiang [1 ]
Liu, Mengze [2 ]
Affiliations
[1] Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing, Peoples R China
[2] PetroChina Jidong Oilfield Co, Tangshan, Hebei, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Monocular depth estimation; Matrix capsule; Fusion block;
DOI
10.1007/978-3-030-86362-3_41
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Depth estimation from 2D images is a fundamental task for many applications, such as robotics and 3D reconstruction. Because of their weak ability to model perspective transformations, existing CNN methods suffer from limited generalization performance and large numbers of parameters. To address these problems, we propose the CNNapsule network for monocular depth estimation. First, we extract CNN and Matrix Capsule features. Next, we propose a Fusion Block to combine the CNN features with the Matrix Capsule features. Skip connections then transmit the extracted and fused features. Moreover, we design a loss function that accounts for the long-tailed depth distribution, gradients, and structural similarity. Finally, we compare our method with existing methods on the NYU Depth V2 dataset. The experiments show that our method achieves higher accuracy than traditional methods and similar networks trained without pre-training. Compared with the state of the art, our method reduces the number of trainable parameters by 65%. Tests on images collected from the Internet and real images captured with a mobile phone further verify the generalization performance of our method.
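The abstract describes a loss combining a term for the long-tailed depth distribution, a gradient term, and a structural-similarity term, but does not give the exact formulation. A minimal NumPy sketch of one plausible composition follows; the log-scale L1 term, the single-window SSIM, and the weights `w_grad`/`w_ssim` are assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def depth_loss(pred, gt, w_grad=1.0, w_ssim=1.0, eps=1e-6):
    """Composite depth loss sketch: log-L1 + gradient + (1 - SSIM)/2."""
    # Log-scale L1 term: compresses the long-tailed depth distribution
    l_depth = np.mean(np.abs(np.log(pred + eps) - np.log(gt + eps)))

    # Gradient term: match horizontal/vertical depth gradients
    dy_p, dx_p = np.gradient(pred)
    dy_g, dx_g = np.gradient(gt)
    l_grad = np.mean(np.abs(dx_p - dx_g) + np.abs(dy_p - dy_g))

    # Simplified SSIM over the whole map (real SSIM uses local windows)
    mu_p, mu_g = pred.mean(), gt.mean()
    cov = ((pred - mu_p) * (gt - mu_g)).mean()
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_p * mu_g + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_g ** 2 + c1) * (pred.var() + gt.var() + c2))
    l_ssim = (1.0 - ssim) / 2.0

    return l_depth + w_grad * l_grad + w_ssim * l_ssim
```

With identical prediction and ground truth, every term vanishes and the loss is zero, which is a quick sanity check for any implementation of such a composite loss.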
Pages: 507 - 518
Page count: 12
Related Papers
50 records total
  • [21] Dynamic Guided Network for Monocular Depth Estimation
    Xing, Xiaoxia
    Cai, Yinghao
    Wang, Yanqing
    Lu, Tao
    Yang, Yiping
    Wen, Dayong
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 5459 - 5465
  • [22] Bidirectional Attention Network for Monocular Depth Estimation
    Aich, Shubhra
    Vianney, Jean Marie Uwabeza
    Islam, Md Amirul
    Kaur, Mannat
    Liu, Bingbing
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 11746 - 11752
  • [23] Nano Quadcopter Obstacle Avoidance with a Lightweight Monocular Depth Network
    Liu, Cheng
    Xu, Yingfu
    van Kampen, Erik-Jan
    de Croon, Guido
    IFAC PAPERSONLINE, 2023, 56 (02): 9312 - 9317
  • [24] MODE: Monocular omnidirectional depth estimation via consistent depth fusion
    Liu, Yunbiao
    Chen, Chunyi
    IMAGE AND VISION COMPUTING, 2023, 136
  • [25] DTTNet: Depth Transverse Transformer Network for Monocular Depth Estimation
    Kamath, Shreyas K. M.
    Rajeev, Srijith
    Panetta, Karen
    Agaian, Sos S.
    MULTIMODAL IMAGE EXPLOITATION AND LEARNING 2022, 2022, 12100
  • [26] BRNet: Exploring Comprehensive Features for Monocular Depth Estimation
    Han, Wencheng
    Yin, Junbo
    Jin, Xiaogang
    Dai, Xiangdong
    Shen, Jianbing
    COMPUTER VISION, ECCV 2022, PT XXXVIII, 2022, 13698 : 586 - 602
  • [27] MiniNet: An extremely lightweight convolutional neural network for real-time unsupervised monocular depth estimation
    Liu, Jun
    Li, Qing
    Cao, Rui
    Tang, Wenming
    Qiu, Guoping
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2020, 166 : 255 - 267
  • [28] LDA-Mono: A lightweight dual aggregation network for self-supervised monocular depth estimation
    Zhao, Bowen
    He, Hongdou
    Xu, Hang
    Shi, Peng
    Hao, Xiaobing
    Huang, Guoyan
    KNOWLEDGE-BASED SYSTEMS, 2024, 304
  • [29] Lightweight Monocular Depth Estimation via Token-Sharing Transformer
    Lee, Dong-Jae
    Lee, Jae Young
    Shon, Hyunguk
    Yi, Eojindl
    Park, Yeong-Hun
    Cho, Sung-Sik
    Kim, Junmo
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA, 2023, : 4895 - 4901
  • [30] LightDepthNet: Lightweight CNN Architecture for Monocular Depth Estimation on Edge Devices
    Liu, Qingliang
    Zhou, Shuai
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2024, 71 (04) : 2389 - 2393