Robust Scale-Aware Stereo Matching Network

Cited by: 5
Authors:
Okae J. [1]
Li B. [1]
Du J. [1]
Hu Y. [1]
Affiliations:
[1] School of Automation Science and Engineering, South China University of Technology, Guangzhou
Keywords: Computer stereo vision; deep learning; disparity maps fusion; multiscale processing; stereo matching
DOI: 10.1109/TAI.2021.3115401
Abstract
Recently, deep convolutional neural networks (CNNs) have emerged as powerful tools for the correspondence problem in the stereo matching task. However, multiscale objects and inevitable ill-conditioned regions, such as textureless regions, in real-world scene images continue to challenge current CNN architectures. In this article, we present a robust scale-aware stereo matching network that predicts multiscale disparity maps and fuses them into a more accurate final disparity map. To this end, powerful feature representations are extracted from the stereo images and concatenated into a 4-D feature volume. The feature volume is then fed into a series of connected encoder-decoder cost aggregation structures to construct multiscale cost volumes. We then regress multiscale disparity maps from these cost volumes and feed them into a fusion module to predict the final disparity map. However, uncertainty in the estimate at each scale and complex disparity relationships among neighboring pixels make this fusion challenging. To overcome this, we design a robust learning-based scale-aware disparity map fusion model, which maps the multiscale disparity maps onto the ground-truth disparity map by leveraging their complementary strengths. Experimental results show that the proposed network is more robust and outperforms recent methods on standard stereo evaluation benchmarks. © 2020 IEEE.
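The fusion step described in the abstract can be sketched in a minimal form. This is not the paper's learned fusion model; it is a hedged illustration, in NumPy, of the two mechanics involved: a coarse disparity map must have its values rescaled when upsampled (disparity is measured in pixels, so it grows with resolution), and full-resolution maps can then be blended with per-pixel softmax weights. The names `upsample_disparity` and `fuse_disparities` are illustrative, and the confidence logits are plain inputs here, where the paper would produce them with its learned scale-aware model.

```python
import numpy as np

def upsample_disparity(disp, factor):
    """Nearest-neighbour upsample a disparity map and rescale its values.

    Enlarging a map by `factor` also multiplies the pixel-unit
    disparities by `factor`, so the values stay geometrically consistent.
    """
    return np.kron(disp, np.ones((factor, factor))) * factor

def fuse_disparities(disps, logits):
    """Fuse full-resolution disparity maps with per-pixel softmax weights.

    disps:  list of S disparity maps, each (H, W).
    logits: (S, H, W) unnormalised per-pixel confidence scores.
    """
    stack = np.stack(disps)                  # (S, H, W)
    w = np.exp(logits - logits.max(axis=0))  # numerically stable softmax
    w /= w.sum(axis=0)
    return (w * stack).sum(axis=0)           # (H, W) fused disparity

# Toy example: three scales of a 4x4 scene that agree after rescaling.
d_full = np.full((4, 4), 8.0)                            # full resolution
d_half = upsample_disparity(np.full((2, 2), 4.0), 2)     # from 1/2 scale
d_quarter = upsample_disparity(np.full((1, 1), 2.0), 4)  # from 1/4 scale

logits = np.zeros((3, 4, 4))                             # uniform confidence
fused = fuse_disparities([d_full, d_half, d_quarter], logits)
```

With uniform logits the fusion reduces to a per-pixel average; in the paper the learned weights instead favor whichever scale is most reliable at each pixel, e.g. coarse scales in textureless regions.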
Pages: 244-253
Number of pages: 9
Related Papers
50 records in total
  • [21] Scale-aware shape manipulation
    Liu, Zheng
    Wang, Wei-ming
    Liu, Xiu-ping
    Liu, Li-gang
    JOURNAL OF ZHEJIANG UNIVERSITY-SCIENCE C-COMPUTERS & ELECTRONICS, 2014, 15 (09): 764 - 775
  • [22] Scale-aware stereo direct visual odometry with online photometric calibration for agricultural environment
    Yu, Tao
    Yu, Xiaohan
    Liu, WenLi
    Xiong, Shengwu
    ADVANCED ROBOTICS, 2023, 37 (06) : 433 - 446
  • [23] Scale-Aware Face Detection
    Hao, Zekun
    Liu, Yu
    Qin, Hongwei
    Yan, Junjie
    Li, Xiu
    Hu, Xiaolin
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 1913 - 1922
  • [24] Scale-aware token-matching for transformer-based object detector
    Jung, Aecheon
    Hong, Sungeun
    Hyun, Yoonsuk
    PATTERN RECOGNITION LETTERS, 2024, 185 : 197 - 202
  • [25] Scale-aware and Anti-interference Convolutional Network for Crowd Counting
    Yang, Qianqian
    Hao, Xiaoliang
    Xia, Yinfeng
    Peng, Sifan
    Yin, Baoqun
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [26] SaHAN: Scale-aware hierarchical attention network for scene text recognition
    Zhang, Jiaxin
    Luo, Canjie
    Jin, Lianwen
    Wang, Tianwei
    Li, Ziyan
    Zhou, Weiying
    PATTERN RECOGNITION LETTERS, 2020, 136 : 205 - 211
  • [27] Global to Local: A Scale-Aware Network for Remote Sensing Object Detection
    Gao, Tao
    Niu, Qianqian
    Zhang, Jing
    Chen, Ting
    Mei, Shaohui
    Jubair, Ahmad
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [28] PROGRESSIVE SCALE-AWARE NETWORK FOR REMOTE SENSING IMAGE CHANGE CAPTIONING
    Liu, Chenyang
    Yang, Jiajun
    Qi, Zipeng
    Zou, Zhengxia
    Shi, Zhenwei
    IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2023, : 6668 - 6671
  • [29] Instance Semantic Segmentation via Scale-Aware Patch Fusion Network
    Yang, Jinfu
    Zhang, Jingling
    Li, Mingai
    Wang, Meijie
    COMPUTER VISION, PT II, 2017, 772 : 521 - 532
  • [30] SCALE-AWARE DEEP NETWORK WITH HOLE CONVOLUTION FOR BLIND MOTION DEBLURRING
    Li, Jichun
    Li, Ke
    Yan, Bo
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 658 - 663