DMRA: Depth-Induced Multi-Scale Recurrent Attention Network for RGB-D Saliency Detection

Cited by: 48
Authors
Ji, Wei [1,2]
Yan, Ge [2]
Li, Jingjing [1,2]
Piao, Yongri [3]
Yao, Shunyu [2]
Zhang, Miao [4]
Cheng, Li [1]
Lu, Huchuan [3]
Affiliations
[1] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB T5V 1A4, Canada
[2] Dalian Univ Technol, Sch Software Technol, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Sch Informat & Commun Engn, Fac Elect Informat & Elect Engn, Dalian 116024, Peoples R China
[4] Dalian Univ Technol, DUT RU Int Sch Informat & Software Engn, Key Lab Ubiquitous Network & Serv Software Liaoning Prov, Dalian 116024, Peoples R China
Funding
National Natural Science Foundation of China; Natural Sciences and Engineering Research Council of Canada;
Keywords
Feature extraction; Saliency detection; Semantics; Random access memory; Cameras; Analytical models; Visualization; RGB-D saliency detection; salient object detection; convolutional neural networks; cross-modal fusion; OBJECT DETECTION; FUSION; SEGMENTATION;
DOI
10.1109/TIP.2022.3154931
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work, we propose a novel depth-induced multi-scale recurrent attention network for RGB-D saliency detection, named DMRA. It achieves strong performance, especially in complex scenarios. The network makes four main contributions that are experimentally demonstrated to have significant practical merit. First, we design an effective depth refinement block that uses residual connections to fully extract and fuse cross-modal complementary cues from the RGB and depth streams. Second, depth cues with abundant spatial information are combined with multi-scale contextual features to accurately locate salient objects. Third, a novel recurrent attention module, inspired by the Internal Generative Mechanism of the human brain, is designed to generate more accurate saliency results by comprehensively learning the internal semantic relations of the fused features and progressively refining local details with memory-oriented scene understanding. Finally, a cascaded hierarchical feature fusion strategy is designed to promote efficient interaction among multi-level contextual features and further improve the contextual representability of the model. In addition, we introduce a new real-life RGB-D saliency dataset containing a variety of complex scenarios, which has been widely used as a benchmark in recent RGB-D saliency detection research. Extensive experiments demonstrate that our method accurately identifies salient objects and achieves appealing performance against 18 state-of-the-art RGB-D saliency models on nine benchmark datasets.
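Below is a minimal PyTorch-style sketch of the residual cross-modal refinement idea summarized in the abstract: an RGB feature map is refined by the corresponding depth feature map through a residual connection. The class name, channel width, and layer choices are illustrative assumptions for exposition, not the authors' DMRA implementation.

# Minimal sketch (illustrative assumptions only), in the spirit of the
# "depth refinement block" described in the abstract; not the authors' code.
import torch
import torch.nn as nn

class DepthRefinementBlock(nn.Module):
    """Refine an RGB feature map with the matching depth feature map via a
    residual connection, so complementary depth cues sharpen the RGB stream
    without discarding its original content."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Project the concatenated RGB+depth features back to `channels`.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(torch.cat([rgb_feat, depth_feat], dim=1))
        # Residual connection: depth-guided refinement is added on top of
        # the original RGB features.
        return self.relu(rgb_feat + fused)

if __name__ == "__main__":
    block = DepthRefinementBlock(channels=64)
    rgb = torch.randn(1, 64, 56, 56)    # RGB-stream feature map
    depth = torch.randn(1, 64, 56, 56)  # depth-stream feature map
    print(block(rgb, depth).shape)      # torch.Size([1, 64, 56, 56])

The residual form lets depth act as a corrective signal added onto the RGB representation rather than replacing it, which is one common way to keep cross-modal fusion stable; DMRA's actual block and its multi-scale and recurrent attention components are more elaborate than this sketch.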
Pages: 2321-2336
Number of pages: 16