Pyramidal Attention for Saliency Detection

Cited by: 12
Authors
Hussain, Tanveer [1 ]
Anwar, Abbas [2 ]
Anwar, Saeed [3 ,4 ,5 ,6 ]
Petersson, Lars [4 ]
Baik, Sung Wook [1 ]
Affiliations
[1] Sejong Univ, Seoul, South Korea
[2] Abdul Wali Khan Univ, Mardan, Khyber Pakhtunkhwa, Pakistan
[3] Australian Natl Univ, Canberra, ACT, Australia
[4] Data61 CSIRO, Canberra, ACT, Australia
[5] Univ Technol Sydney, Sydney, NSW, Australia
[6] Univ Canberra, Canberra, ACT, Australia
DOI
10.1109/CVPRW56347.2022.00325
Chinese Library Classification (CLC): TP301 [Theory, Methods]
Discipline Classification Code: 081202
Abstract
Salient object detection (SOD) extracts the most meaningful content from an input image. RGB-based SOD methods lack complementary depth clues and hence deliver limited performance in complex scenarios. RGB-D models process both RGB and depth inputs, but the requirement that depth data be available at test time hinders their practical applicability. This paper exploits only RGB images: it estimates depth from RGB and leverages the intermediate depth features. We employ a pyramidal attention structure to extract multi-level convolutional-transformer features, processing the initial-stage representations and further enhancing the subsequent ones. At each stage, the backbone transformer produces global receptive fields and computes in parallel to attain fine-grained global predictions, which our residual convolutional attention decoder refines for optimal saliency prediction. We report significantly improved performance against 21 and 40 state-of-the-art SOD methods on eight RGB and RGB-D datasets, respectively. Consequently, we present a new SOD perspective: generating RGB-D saliency without acquiring depth data during training or testing, and assisting RGB methods with depth clues for improved performance. The code and trained models are available at https://github.com/tanveer-hussain/EfficientSOD2
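For orientation, the following is a minimal PyTorch sketch of the pipeline the abstract describes: depth is estimated from the RGB input, its intermediate features assist the RGB stream, a pyramidal attention module aggregates multi-scale features, and a residual convolutional attention decoder refines the prediction. All module names, channel widths, and the toy depth estimator are illustrative assumptions, and the transformer backbone is replaced by a plain convolutional encoder for brevity; the authors' actual implementation is in the linked repository.

```python
# Minimal sketch of the pipeline described in the abstract, in PyTorch.
# All names, channel widths, and the toy depth estimator are illustrative
# assumptions, NOT the authors' implementation (see the linked repository).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthEstimator(nn.Module):
    """Hypothetical stand-in: predicts a 1-channel depth map from RGB and
    exposes its intermediate features as the 'depth clues'."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(ch, 1, 1)
    def forward(self, rgb):
        feats = self.enc(rgb)
        return self.head(feats), feats  # depth map + intermediate features

class PyramidalAttention(nn.Module):
    """Attention gates applied over multi-scale pooled copies of a feature map."""
    def __init__(self, ch, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.att = nn.ModuleList(nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())
                                 for _ in scales)
        self.fuse = nn.Conv2d(ch * len(scales), ch, 1)
    def forward(self, x):
        h, w = x.shape[-2:]
        outs = []
        for s, att in zip(self.scales, self.att):
            xs = F.avg_pool2d(x, s) if s > 1 else x
            xs = xs * att(xs)  # per-scale attention gate
            outs.append(F.interpolate(xs, (h, w), mode='bilinear',
                                      align_corners=False))
        return self.fuse(torch.cat(outs, 1))

class ResidualConvAttentionDecoder(nn.Module):
    """Residual convolutional attention block refining coarse predictions."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.out = nn.Conv2d(ch, 1, 1)
    def forward(self, x):
        x = x + self.body(x) * self.gate(x)  # attention-weighted residual
        return self.out(x)

class RGBOnlySOD(nn.Module):
    """RGB-only saliency: estimated-depth features assist the RGB stream,
    so no depth input is needed at train or test time."""
    def __init__(self, ch=32):
        super().__init__()
        self.depth = DepthEstimator(ch)
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.pyramid = PyramidalAttention(ch)
        self.decoder = ResidualConvAttentionDecoder(ch)
    def forward(self, rgb):
        _, depth_feats = self.depth(rgb)     # depth clues estimated from RGB
        x = self.rgb_enc(rgb) + depth_feats  # inject estimated-depth features
        x = self.pyramid(x)
        return torch.sigmoid(self.decoder(x))

if __name__ == "__main__":
    model = RGBOnlySOD()
    saliency = model(torch.randn(1, 3, 224, 224))
    print(saliency.shape)  # torch.Size([1, 1, 224, 224])
```

A real model would substitute a pretrained transformer backbone (for the global receptive fields the abstract mentions) and a stronger monocular depth estimator; the sketch only illustrates how estimated-depth features can be injected so that no depth data is acquired at training or test time.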
Pages: 2877-2887 (11 pages)
Related Papers (50 total)
  • [21] Multiscale Cascaded Attention Network for Saliency Detection Based on ResNet
    Jian, Muwei
    Jin, Haodong
    Liu, Xiangyu
    Zhang, Linsong
    SENSORS, 2022, 22 (24)
  • [22] Paying More Attention to Saliency: Image Captioning with Saliency and Context Attention
    Cornia, Marcella
    Baraldi, Lorenzo
    Serra, Giuseppe
    Cucchiara, Rita
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2018, 14 (02)
  • [23] Recurrent reverse attention guided residual learning for saliency object detection
    Li, Tengpeng
    Song, Huihui
    Zhang, Kaihua
    Liu, Qingshan
NEUROCOMPUTING, 2020, 389 : 170 - 178
  • [24] A Fully Convolutional Network based on Spatial Attention for Saliency Object Detection
    Chen, Kai
    Wang, Yongxiong
    Hu, Chuanfei
2019 CHINESE AUTOMATION CONGRESS (CAC2019), 2019 : 5707 - 5711
  • [25] Infrared Target Detection Using Intensity Saliency and Self-Attention
    Zhang, Ruiheng
    Xu, Min
    Shi, Yaxin
    Fan, Jian
    Mu, Chengpo
    Xu, Lixin
2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020 : 1991 - 1995
  • [26] Visual Saliency Detection Based on Global Guidance Map and Background Attention
    Peng, Yanbin
    Feng, Mingkun
    Zhai, Zhinian
    IEEE ACCESS, 2024, 12 : 95434 - 95446
  • [27] Attention-guided RGBD saliency detection using appearance information
    Zhou, Xiaofei
    Li, Gongyang
    Gong, Chen
    Liu, Zhi
    Zhang, Jiyong
IMAGE AND VISION COMPUTING, 2020, 95
  • [28] Assessment of feature fusion strategies in visual attention mechanism for saliency detection
    Jian, Muwei
    Zhou, Quan
    Cui, Chaoran
    Nie, Xiushan
    Luo, Hanjiang
    Zhao, Jianli
    Yin, Yilong
    PATTERN RECOGNITION LETTERS, 2019, 127 : 37 - 47
  • [29] PiCANet: Learning Pixel-wise Contextual Attention for Saliency Detection
    Liu, Nian
    Han, Junwei
    Yang, Ming-Hsuan
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018 : 3089 - 3098
  • [30] Cascade Saliency Attention Network for Object Detection in Remote Sensing Images
    Yu, Dayang
    Zhang, Rong
    Qin, Shan
2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021 : 217 - 223