Spatial-aware global contrast representation for saliency detection

Times Cited: 3
Authors
Xu, Dan [1 ]
Huang, Shucheng [1 ]
Zuo, Xin [1 ]
Affiliations
[1] Jiangsu Univ Sci & Technol, Sch Comp Sci, Zhenjiang, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Saliency detection; convolutional neural networks; spatial-aware; global contrast cube; REGION DETECTION; MODEL;
DOI
10.3906/elk-1808-208
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning networks have proven effective for salient object detection and outperform methods based on low-level hand-crafted features. In this paper, we propose a novel spatial-aware contrast cube-based convolutional neural network (CNN) that further improves detection performance. The contrast of each superpixel is extracted from this cube data structure, while spatial information is preserved during the transformation. The proposed method has two advantages over existing deep learning-based saliency methods. First, instead of feeding the network raw image patches or pixels, we use the spatial-aware contrast cubes of superpixels as CNN training samples; this is beneficial because the saliency of a region depends more on its contrast with other regions than on its own appearance. Second, to cope with the diversity of real scenes, both color and textural cues are considered: two CNNs, a color CNN and a textural CNN, are constructed to extract the corresponding features, and the saliency maps generated from the two cues are concatenated in a dynamic way to achieve optimal results. The proposed method achieves maximum precision of 0.9856, 0.9250, and 0.8949 on the MSRA1000, ECSSD, and PASCAL-S benchmark datasets, respectively, improving on state-of-the-art saliency detection methods.
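The abstract describes a two-stream design: per-superpixel color and texture contrast cubes are scored by two separate CNNs, and the two resulting saliency estimates are combined. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation; the cube size (32x32), channel counts, network layout, and the learned fusion weight standing in for the paper's "dynamic" concatenation are all assumptions made for illustration.

```python
# Illustrative sketch (assumed architecture, not the paper's code) of a dual-cue
# saliency scorer: one CNN for color contrast cubes, one for texture contrast cubes,
# fused per superpixel with a learned weight.
import torch
import torch.nn as nn


class ContrastCubeCNN(nn.Module):
    """Scores one spatial-aware contrast cube (C x H x W) as a saliency value in [0, 1]."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, cube: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(cube))


class DualCueSaliency(nn.Module):
    """Color CNN + textural CNN with a learned fusion weight (a stand-in for the
    dynamic combination described in the abstract, whose exact form is not given)."""

    def __init__(self, color_channels: int = 3, texture_channels: int = 1):
        super().__init__()
        self.color_cnn = ContrastCubeCNN(color_channels)
        self.texture_cnn = ContrastCubeCNN(texture_channels)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # fusion weight, learned jointly

    def forward(self, color_cube: torch.Tensor, texture_cube: torch.Tensor) -> torch.Tensor:
        s_color = self.color_cnn(color_cube)
        s_texture = self.texture_cnn(texture_cube)
        w = torch.sigmoid(self.alpha)               # keep the weight in (0, 1)
        return w * s_color + (1.0 - w) * s_texture  # per-superpixel saliency score


if __name__ == "__main__":
    model = DualCueSaliency()
    # Batch of 4 superpixels: 3-channel color cubes and 1-channel texture cubes.
    color = torch.randn(4, 3, 32, 32)
    texture = torch.randn(4, 1, 32, 32)
    print(model(color, texture).shape)  # torch.Size([4, 1])
```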
Pages: 2412-2429
Page count: 18
Related Articles
50 records in total
  • [41] Exploiting local and global characteristics for contrast based visual saliency detection
    Xu X.
    Wang Y.-L.
    Zhang X.-L.
    Journal of Shanghai Jiaotong University (Science), 2015, 20 (01) : 14 - 20
  • [42] Exploiting Local and Global Characteristics for Contrast Based Visual Saliency Detection
    Xu, Xin
    Wang, Ying-Lin
    Zhang, Xiao-Long
    Journal of Shanghai Jiaotong University (Science), 2015, 20 (01) : 14 - 20
  • [43] Domain Transform Filter and Spatial-Aware Collaborative Representation for Hyperspectral Image Classification Using Few Labeled Samples
    Karaca, Ali Can
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2021, 18 (07) : 1264 - 1268
  • [44] Spatial-Aware Remote Sensing Image Generation From Spatial Relationship Descriptions
    Lei, Yaxian
    Tong, Xiaochong
    Qiu, Chunping
    Song, Haoshuai
    Guo, Congzhou
    Li, He
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2025, 22
  • [45] Saliency Detection using Boundary Aware Regional Contrast Based Seam-map
    Islam, Aminul
    Ahsan, Sk. Md. Masudul
    Tan, Joo Kooi
    2018 INTERNATIONAL CONFERENCE ON INNOVATION IN ENGINEERING AND TECHNOLOGY (ICIET), 2018,
  • [46] Spatial-aware Iterative Integration of Crisis Management Information Systems
    Sojeva, Betim
    Xie, Jingquan
    PROCEEDINGS OF THE 2016 3RD INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGIES FOR DISASTER MANAGEMENT (ICT-DM), 2016, : 216 - 218
  • [47] Spatial-Aware Deep Reinforcement Learning for the Traveling Officer Problem
    Strauss, Niklas
    Schubert, Matthias
    PROCEEDINGS OF THE 2024 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM, 2024, : 869 - 877
  • [48] Spatial-aware collaborative region mining for fine-grained recognition
    Weiwei Yang
    Jian Yin
    Multimedia Tools and Applications, 2024, 83 (9) : 25741 - 25767
  • [49] Spatial-Aware Multi-Task Learning Based Speech Separation
    Sun, Wei
    Wang, Mei
    Qiu, Lili
    2024 IEEE 21ST INTERNATIONAL CONFERENCE ON MOBILE AD-HOC AND SMART SYSTEMS, MASS 2024, 2024, : 100 - 108
  • [50] Crowd Counting using Deep Recurrent Spatial-Aware Network
    Liu, Lingbo
    Wang, Hongjun
    Li, Guanbin
    Ouyang, Wanli
    Lin, Liang
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 849 - 855