Robust Scale-Aware Stereo Matching Network

Cited by: 5
Authors
Okae J. [1 ]
Li B. [1 ]
Du J. [1 ]
Hu Y. [1 ]
Affiliations
[1] School of Automation Science and Engineering, South China University of Technology, Guangzhou
Keywords
Computer stereo vision; deep learning; disparity maps fusion; multiscale processing; stereo matching
DOI
10.1109/TAI.2021.3115401
Abstract
Recently, deep convolutional neural networks (CNNs) have emerged as powerful tools for the correspondence problem in the stereo matching task. However, the existence of multiscale objects and of inevitable ill-conditioned regions, such as textureless regions, in real-world scene images continues to challenge current CNN architectures. In this article, we present a robust scale-aware stereo matching network, which predicts multiscale disparity maps and fuses them into a more accurate disparity map. To this end, powerful feature representations are extracted from the stereo images and concatenated into a 4-D feature volume. The feature volume is then fed into a series of connected encoder-decoder cost aggregation structures to construct multiscale cost volumes. We then regress multiscale disparity maps from these cost volumes and feed them into a fusion module to predict the final disparity map. However, the uncertainty of the estimate at each scale and the complex disparity relationships among neighboring pixels make this fusion challenging. To overcome this challenge, we design a robust learning-based scale-aware disparity map fusion model, which maps the multiscale disparity maps onto the ground-truth disparity map by leveraging their complementary strengths. Experimental results show that the proposed network is more robust and outperforms recent methods on standard stereo evaluation benchmarks. © 2020 IEEE.
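To make the fusion step in the abstract concrete, the following is a minimal PyTorch sketch of a learned scale-aware fusion module: disparity maps predicted at several scales are upsampled to full resolution and merged with per-pixel weights produced by a small convolutional network. The class name ScaleAwareDisparityFusion, the layer sizes, and the softmax weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of scale-aware disparity fusion; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleAwareDisparityFusion(nn.Module):
    def __init__(self, num_scales: int = 3):
        super().__init__()
        # Predict one confidence weight per scale from the stacked disparities.
        self.weight_net = nn.Sequential(
            nn.Conv2d(num_scales, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, num_scales, kernel_size=3, padding=1),
        )

    def forward(self, disparities):
        # disparities: list of (B, 1, H_s, W_s) maps, coarsest to finest.
        full_size = disparities[-1].shape[-2:]
        upsampled = []
        for d in disparities:
            # Disparity values must be rescaled when resolution changes.
            scale = full_size[-1] / d.shape[-1]
            up = F.interpolate(d, size=full_size, mode="bilinear",
                               align_corners=False) * scale
            upsampled.append(up)
        stacked = torch.cat(upsampled, dim=1)            # (B, S, H, W)
        weights = torch.softmax(self.weight_net(stacked), dim=1)
        fused = (weights * stacked).sum(dim=1, keepdim=True)
        return fused                                      # (B, 1, H, W)


if __name__ == "__main__":
    fusion = ScaleAwareDisparityFusion(num_scales=3)
    maps = [torch.rand(1, 1, 64, 128), torch.rand(1, 1, 128, 256),
            torch.rand(1, 1, 256, 512)]
    print(fusion(maps).shape)  # torch.Size([1, 1, 256, 512])
```

The softmax weighting stands in for the paper's learned fusion model; the key idea it illustrates is that the final disparity is a per-pixel, confidence-weighted combination of the multiscale predictions rather than a fixed average.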
Pages: 244-253
Number of pages: 9
Related Papers
50 records in total
  • [31] SRNet: Scale-Aware Representation Learning Network for Dense Crowd Counting
    Huang, Liangjun
    Zhu, Luning
    Shen, Shihui
    Zhang, Qing
    Zhang, Jianwei
    IEEE ACCESS, 2021, 9 : 136032 - 136044
  • [32] Scale-Aware Distillation Network for Lightweight Image Super-Resolution
    Lu, Haowei
    Lu, Yao
    Li, Gongping
    Sun, Yanbei
    Wang, Shunzhou
    Li, Yugang
    PATTERN RECOGNITION AND COMPUTER VISION, PT III, 2021, 13021 : 128 - 139
  • [33] A Scale-Aware Pyramid Network for Multi-Scale Object Detection in SAR Images
    Tang, Linbo
    Tang, Wei
    Qu, Xin
    Han, Yuqi
    Wang, Wenzheng
    Zhao, Baojun
    REMOTE SENSING, 2022, 14 (04)
  • [34] Scale-aware direct monocular odometry
    Campos, Carlos
    Tardos, Juan D.
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 1360 - 1366
  • [35] Scale-Aware Modulation Meet Transformer
    Lin, Weifeng
    Wu, Ziheng
    Chen, Jiayu
    Huang, Jun
    Jin, Lianwen
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 5992 - 6003
  • [36] A Novel Scale-Aware Pansharpening Method
    Li X.
    Gao Y.-N.
    Yue S.
    Yuhang Xuebao/Journal of Astronautics, 2017, 38 (12): 1348 - 1353
  • [37] Scale-Aware Detailed Matching for Few-Shot Aerial Image Semantic Segmentation
    Yao, Xiwen
    Cao, Qinglong
    Feng, Xiaoxu
    Cheng, Gong
    Han, Junwei
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [38] DEEP SCALE-AWARE IMAGE SMOOTHING
    Li, Jiachun
    Qin, Kunkun
    Xu, Ruotao
    Ji, Hui
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2105 - 2109
  • [39] ScopeViT: Scale-Aware Vision Transformer
    Nie, Xuesong
    Jin, Haoyuan
    Yan, Yunfeng
    Chen, Xi
    Zhu, Zhihang
    Qi, Donglian
    PATTERN RECOGNITION, 2024, 153
  • [40] Scale-Aware Spatially Guided Mapping
    Hao, Shijie
    Guo, Yanrong
    Hong, Richang
    Wang, Meng
    IEEE MULTIMEDIA, 2016, 23 (03) : 34 - 42