DMRA: Depth-Induced Multi-Scale Recurrent Attention Network for RGB-D Saliency Detection

Cited by: 48
Authors
Ji, Wei [1 ,2 ]
Yan, Ge [2 ]
Li, Jingjing [1 ,2 ]
Piao, Yongri [3 ]
Yao, Shunyu [2 ]
Zhang, Miao [4 ]
Cheng, Li [1 ]
Lu, Huchuan [3 ]
Affiliations
[1] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB T5V 1A4, Canada
[2] Dalian Univ Technol, Sch Software Technol, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Sch Informat & Commun Engn, Fac Elect Informat & Elect Engn, Dalian 116024, Peoples R China
[4] Dalian Univ Technol, DUT RU Int Sch Informat & Software Engn, Key Lab Ubiquitous Network & Serv Software Liaoning, Dalian 116024, Peoples R China
Funding
National Natural Science Foundation of China; Natural Sciences and Engineering Research Council of Canada;
Keywords
Feature extraction; Saliency detection; Semantics; Random access memory; Cameras; Analytical models; Visualization; RGB-D saliency detection; salient object detection; convolutional neural networks; cross-modal fusion; OBJECT DETECTION; FUSION; SEGMENTATION;
DOI
10.1109/TIP.2022.3154931
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this work, we propose a novel depth-induced multi-scale recurrent attention network for RGB-D saliency detection, named DMRA. It achieves strong performance, especially in complex scenarios. Our network makes four main contributions that are experimentally demonstrated to have significant practical merit. First, we design an effective depth refinement block that uses residual connections to fully extract and fuse cross-modal complementary cues from the RGB and depth streams. Second, depth cues with abundant spatial information are combined with multi-scale contextual features to accurately locate salient objects. Third, a novel recurrent attention module, inspired by the Internal Generative Mechanism of the human brain, generates more accurate saliency results by comprehensively learning the internal semantic relations of the fused features and progressively optimizing local details with memory-oriented scene understanding. Finally, a cascaded hierarchical feature fusion strategy promotes efficient information interaction among multi-level contextual features and further improves the contextual representability of the model. In addition, we introduce a new real-life RGB-D saliency dataset containing a variety of complex scenarios, which has been widely used as a benchmark in recent RGB-D saliency detection research. Extensive experiments demonstrate that our method accurately identifies salient objects and achieves appealing performance against 18 state-of-the-art RGB-D saliency models on nine benchmark datasets.
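To make the residual cross-modal fusion idea in the abstract concrete, the sketch below shows a minimal PyTorch-style depth refinement block that fuses a depth feature map into an RGB feature map through a residual connection. This is not the authors' implementation: the class name, layer sizes, and exact fusion order are illustrative assumptions based only on the abstract's description.

```python
# Minimal sketch (assumption, not the DMRA source code) of a depth refinement
# block: refine each modality, fuse them additively, and feed the result back
# into the RGB stream via a residual connection.
import torch
import torch.nn as nn


class DepthRefinementBlock(nn.Module):
    """Fuse a depth feature map into an RGB feature map of the same shape."""

    def __init__(self, channels: int):
        super().__init__()
        # 3x3 convolutions refine each modality before and after fusion.
        self.rgb_conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.depth_conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse_conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        r = self.rgb_conv(rgb_feat)
        d = self.depth_conv(depth_feat)
        # Depth cues act as complementary information to the RGB features.
        fused = self.fuse_conv(r + d)
        # Residual connection back to the RGB stream.
        return fused + rgb_feat


if __name__ == "__main__":
    block = DepthRefinementBlock(channels=64)
    rgb = torch.randn(1, 64, 56, 56)    # RGB feature map from one backbone stage
    depth = torch.randn(1, 64, 56, 56)  # depth feature map at the same scale
    print(block(rgb, depth).shape)      # torch.Size([1, 64, 56, 56])
```

In the paper's design such a block would be applied at multiple backbone stages, so that depth cues accompany multi-scale contextual features before the recurrent attention module; the single-scale example above only illustrates the fusion pattern.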
Pages: 2321 - 2336
Number of pages: 16
Related papers
50 records in total
  • [41] Bilateral Attention Network for RGB-D Salient Object Detection
    Zhang, Zhao
    Lin, Zheng
    Xu, Jun
    Jin, Wen-Da
    Lu, Shao-Ping
    Fan, Deng-Ping
    IEEE Transactions on Image Processing, 2021, 30 : 1949 - 1961
  • [42] Attention-based contextual interaction asymmetric network for RGB-D saliency prediction
    Zhang, Xinyue
    Jin, Ting
    Zhou, Wujie
    Lei, Jingsheng
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2021, 74 (74)
  • [43] M2RNet: Multi-modal and multi-scale refined network for RGB-D salient object detection
    Fang, Xian
    Jiang, Mingfeng
    Zhu, Jinchao
    Shao, Xiuli
    Wang, Hongpeng
    PATTERN RECOGNITION, 2023, 135
  • [44] Multi-Scale Attention and Encoder-Decoder Network for Video Saliency Object Detection
    Bi, Hongbo
    Zhu, Huihui
    Yang, Lina
    Wu, Ranwan
    PATTERN RECOGNITION AND IMAGE ANALYSIS, 2022, 32 (02) : 340 - 350
  • [46] RGB-D Saliency Detection Based on Multi-Level Feature Fusion
    Shi, Yue
    Yu, Wanjun
    Chen, Ying
Computer Engineering and Applications, 2023, 59 (07): 207 - 213
  • [47] Cross-Modal Adaptive Interaction Network for RGB-D Saliency Detection
    Du, Qinsheng
    Bian, Yingxu
    Wu, Jianyu
    Zhang, Shiyan
    Zhao, Jian
APPLIED SCIENCES-BASEL, 2024, 14 (17)
  • [48] Attentive Cross-Modal Fusion Network for RGB-D Saliency Detection
    Liu, Di
    Zhang, Kao
    Chen, Zhenzhong
    IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23 : 967 - 981
  • [49] PointMBF: A Multi-scale Bidirectional Fusion Network for Unsupervised RGB-D Point Cloud Registration
    Yuan, Mingzhi
    Fu, Kexue
    Li, Zhihao
    Meng, Yucong
    Wang, Manning
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 17648 - 17659
  • [50] DPANet: Depth Potentiality-Aware Gated Attention Network for RGB-D Salient Object Detection
    Chen, Zuyao
    Cong, Runmin
    Xu, Qianqian
    Huang, Qingming
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 7012 - 7024