Saliency Tree: A Novel Saliency Detection Framework

Cited by: 217
Authors
Liu, Zhi [1 ,2 ]
Zou, Wenbin [3 ,4 ]
Le Meur, Olivier [5 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[2] Inst Rech Informat & Syst Aleatoires, F-35042 Rennes, France
[3] Shenzhen Univ, Coll Informat Engn, Shenzhen 518060, Peoples R China
[4] European Univ Brittany, Natl Inst Appl Sci Rennes, F-35708 Rennes, France
[5] Univ Rennes 1, F-35042 Rennes, France
Funding
National Natural Science Foundation of China
Keywords
Saliency tree; saliency detection; saliency model; saliency map; regional saliency measure; region merging; salient node selection; VISUAL-ATTENTION; OBJECT DETECTION; MODEL; IMAGE; SEGMENTATION; MAXIMIZATION; COLOR; VIDEO;
DOI
10.1109/TIP.2014.2307434
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper proposes a novel saliency detection framework termed the saliency tree. For effective saliency measurement, the original image is first simplified by adaptive color quantization and region segmentation, which partition the image into a set of primitive regions. Three measures, i.e., global contrast, spatial sparsity, and object prior, are then integrated with regional similarities to generate an initial saliency value for each primitive region. Next, a saliency-directed region merging approach with a dynamic scale control scheme is proposed to generate the saliency tree, in which each leaf node represents a primitive region and each non-leaf node represents a non-primitive region produced during the region merging process. Finally, using a node selection criterion based on a regional center-surround scheme, a systematic saliency tree analysis comprising salient node selection, regional saliency adjustment, and selection is performed to obtain the final regional saliency measures and derive a high-quality pixel-wise saliency map. Extensive experimental results on five datasets with pixel-wise ground truths demonstrate that the proposed saliency tree model consistently outperforms state-of-the-art saliency models.
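To make the pipeline above concrete, the following is a minimal Python sketch of the tree-building step, i.e., greedy saliency-directed region merging over primitive regions. It is an illustration under simplifying assumptions, not the authors' implementation: the merging cost (color distance inflated by the saliency gap) stands in for the paper's regional similarity measure with dynamic scale control, region adjacency is ignored, and the names Region, merge_cost, and build_saliency_tree are hypothetical.

```python
# Illustrative sketch only: a simplified saliency-directed region merging
# that builds a binary tree whose leaves are primitive regions.
from dataclasses import dataclass
from itertools import count
from typing import Optional

@dataclass
class Region:
    node_id: int
    color: tuple              # mean color of the region (e.g., quantized Lab/RGB)
    size: int                 # pixel count
    saliency: float           # initial regional saliency (contrast/sparsity/object prior)
    left: Optional["Region"] = None   # children are None for primitive (leaf) regions
    right: Optional["Region"] = None

def merge_cost(a: Region, b: Region) -> float:
    """Lower cost = merge earlier. Color distance, inflated when the two regions
    differ strongly in saliency, so salient regions tend to merge among
    themselves before being absorbed into the background (a simplified stand-in
    for the paper's saliency-directed criterion)."""
    color_dist = sum((ca - cb) ** 2 for ca, cb in zip(a.color, b.color)) ** 0.5
    return color_dist * (1.0 + abs(a.saliency - b.saliency))

def build_saliency_tree(primitives: list) -> Region:
    """Greedily merge the cheapest pair until one root remains; every merge
    creates a non-leaf node covering the union of its two children."""
    ids = count(start=max(r.node_id for r in primitives) + 1)
    regions = list(primitives)
    while len(regions) > 1:
        # pick the cheapest pair (adjacency constraints omitted in this toy version)
        i, j = min(
            ((i, j) for i in range(len(regions)) for j in range(i + 1, len(regions))),
            key=lambda ij: merge_cost(regions[ij[0]], regions[ij[1]]),
        )
        a, b = regions[j], regions[i]
        regions.pop(j); regions.pop(i)          # pop the larger index first
        total = a.size + b.size
        merged = Region(
            node_id=next(ids),
            color=tuple((ca * a.size + cb * b.size) / total
                        for ca, cb in zip(a.color, b.color)),
            size=total,
            saliency=(a.saliency * a.size + b.saliency * b.size) / total,  # size-weighted
            left=a, right=b,
        )
        regions.append(merged)
    return regions[0]

if __name__ == "__main__":
    leaves = [
        Region(0, (200, 40, 40), 120, 0.90),   # reddish, salient
        Region(1, (190, 50, 45), 100, 0.80),
        Region(2, (30, 30, 30), 400, 0.10),    # dark background
        Region(3, (35, 28, 32), 380, 0.05),
    ]
    root = build_saliency_tree(leaves)
    print("root covers", root.size, "pixels; root saliency =", round(root.saliency, 3))
```

In the full framework, the resulting tree would then be analyzed with the regional center-surround node selection criterion to pick salient nodes, adjust regional saliency, and assemble the pixel-wise saliency map.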
Pages: 1937 - 1952
Number of pages: 16
Related papers
50 records in total
  • [31] Delving into the Impact of Saliency Detector: A GeminiNet for Accurate Saliency Detection
    Zheng, Tao
    Li, Bo
    Zeng, Delu
    Zhou, Zhiheng
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: IMAGE PROCESSING, PT III, 2019, 11729 : 347 - 359
  • [32] A novel approach based on saliency edges to contour detection
    Dou, Yan
    Kong, Lingfu
    2008 INTERNATIONAL CONFERENCE ON AUDIO, LANGUAGE AND IMAGE PROCESSING, VOLS 1 AND 2, PROCEEDINGS, 2008, : 552 - 556
  • [33] A novel deep network and aggregation model for saliency detection
    Liang, Ye
    Liu, Hongzhe
    Ma, Nan
    VISUAL COMPUTER, 2020, 36 (09): 1883 - 1895
  • [34] Inaccurate Supervised Saliency Detection Based on Iterative Feedback Framework
    Pang, Yu
    Wu, Yunhe
    Wu, Chengdong
    Yu, Xiaosheng
    Gao, Yuan
    IEEE ACCESS, 2020, 8 : 111482 - 111493
  • [35] A flexible framework of adaptive method selection for image saliency detection
    Zhang, Changqing
    Tao, Zhiqiang
    Wei, Xingxing
    Cao, Xiaochun
    PATTERN RECOGNITION LETTERS, 2015, 63 : 66 - 70
  • [36] A New Framework for Multiscale Saliency Detection Based on Image Patches
    Zhou, Jingbo
    Jin, Zhong
    NEURAL PROCESSING LETTERS, 2013, 38 (03) : 361 - 374
  • [37] Saliency detection via Boolean and foreground in a dynamic Bayesian framework
    Qi, Wei
    Han, Jing
    Zhang, Yi
    Bai, Lianfa
    VISUAL COMPUTER, 2017, 33 (02): 209 - 220
  • [38] A novel deep network and aggregation model for saliency detection
    Liang, Ye
    Liu, Hongzhe
    Ma, Nan
    VISUAL COMPUTER, 2020, 36 (09): 1883 - 1895
  • [39] A Novel Saliency Detection Model Based on Curvelet Transform
    Bai, Peiqing
    Cui, Ziguan
    Gan, Zongliang
    Tang, Guijin
    Liu, Feng
    2016 8TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS & SIGNAL PROCESSING (WCSP), 2016,
  • [40] RGB-D Saliency Detection under Bayesian Framework
    Wang, Song-Tao
    Zhou, Zhen
    Qu, Han-Bing
    Li, Bin
    2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, : 1881 - 1886