Saliency Tree: A Novel Saliency Detection Framework

Cited by: 217
Authors
Liu, Zhi [1 ,2 ]
Zou, Wenbin [3 ,4 ]
Le Meur, Olivier [5 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[2] Inst Rech Informat & Syst Aleatoires, F-35042 Rennes, France
[3] Shenzhen Univ, Coll Informat Engn, Shenzhen 518060, Peoples R China
[4] European Univ Brittany, Natl Inst Appl Sci Rennes, F-35708 Rennes, France
[5] Univ Rennes 1, F-35042 Rennes, France
Funding
National Natural Science Foundation of China;
Keywords
Saliency tree; saliency detection; saliency model; saliency map; regional saliency measure; region merging; salient node selection; VISUAL-ATTENTION; OBJECT DETECTION; MODEL; IMAGE; SEGMENTATION; MAXIMIZATION; COLOR; VIDEO;
DOI
10.1109/TIP.2014.2307434
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper proposes a novel saliency detection framework termed the saliency tree. For effective saliency measurement, the original image is first simplified via adaptive color quantization and region segmentation, which partition the image into a set of primitive regions. Three measures, i.e., global contrast, spatial sparsity, and object prior, are then integrated with regional similarities to generate an initial regional saliency for each primitive region. Next, a saliency-directed region merging approach with a dynamic scale control scheme is proposed to generate the saliency tree, in which each leaf node represents a primitive region and each non-leaf node represents a non-primitive region generated during the region merging process. Finally, using a node selection criterion based on a regional center-surround scheme, a systematic saliency tree analysis, including salient node selection, regional saliency adjustment, and selection, is performed to obtain the final regional saliency measures and to derive a high-quality pixel-wise saliency map. Extensive experimental results on five datasets with pixel-wise ground truths demonstrate that the proposed saliency tree model consistently outperforms state-of-the-art saliency models.
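The tree-building step described in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm: the merge order (always fuse the two currently least-salient regions) and the rule that a merged node inherits the maximum child saliency are simplified assumptions standing in for the paper's saliency-directed region merging with dynamic scale control. It only shows the data structure: leaf nodes are primitive regions, non-leaf nodes are regions produced by merging.

```python
# Illustrative sketch (assumed simplification, not the authors' method):
# greedily merge the two least-salient regions to build a binary
# "saliency tree" whose leaves are the primitive regions.

class Node:
    def __init__(self, region_id, saliency, children=()):
        self.region_id = region_id    # primitive or merged region id
        self.saliency = saliency      # regional saliency measure
        self.children = list(children)

    def is_leaf(self):
        return not self.children

def build_saliency_tree(initial_saliency):
    """initial_saliency: one saliency value per primitive region."""
    nodes = [Node(i, s) for i, s in enumerate(initial_saliency)]
    next_id = len(nodes)
    while len(nodes) > 1:
        nodes.sort(key=lambda n: n.saliency)
        a, b = nodes.pop(0), nodes.pop(0)   # two least-salient regions
        # merged (non-primitive) region inherits the max child saliency,
        # so salient regions survive as distinct nodes near the root
        nodes.append(Node(next_id, max(a.saliency, b.saliency), (a, b)))
        next_id += 1
    return nodes[0]

def leaves(node):
    """Collect the primitive-region ids under a node."""
    if node.is_leaf():
        return [node.region_id]
    return [r for c in node.children for r in leaves(c)]

root = build_saliency_tree([0.9, 0.1, 0.2, 0.8])
print(sorted(leaves(root)))  # every primitive region remains a leaf
```

In the paper, the final saliency map comes from analyzing this tree (salient node selection and regional saliency adjustment), rather than from the leaf values alone.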
Pages: 1937 - 1952
Page count: 16
Related Papers
50 records
  • [1] An adaptive framework for saliency detection
    Jia, Ning
    Liu, Xianhui
    Zhao, Weidong
    Zhang, Haotian
    Zhuo, Keqiang
    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, 2019, 29 (03) : 382 - 393
  • [2] Saliency bagging: a novel framework for robust salient object detection
    Singh, Vivek Kumar
    Kumar, Nitin
    THE VISUAL COMPUTER, 2020, 36 (07) : 1423 - 1441
  • [3] Saliency Boosting: a novel framework to refine salient object detection
    Singh, Vivek Kumar
    Kumar, Nitin
    Madhavan, Suresh
    ARTIFICIAL INTELLIGENCE REVIEW, 2020, 53 (05) : 3731 - 3772
  • [4] A novel edge-oriented framework for saliency detection enhancement
    Xu, Qingzhen
    Wang, Fengyun
    Gong, Yongyi
    Wang, Zhoutao
    Zeng, Kun
    Li, Qi
    Luo, Xiaonan
    IMAGE AND VISION COMPUTING, 2019, 87 : 1 - 12
  • [5] A Novel Method for Saliency Detection
    Zhang, Qiaorong
    Lv, Junya
    Xiao, Huimin
    2009 INTERNATIONAL SYMPOSIUM ON INTELLIGENT INFORMATION SYSTEMS AND APPLICATIONS, PROCEEDINGS, 2009 : 55 - 58
  • [6] Object-level saliency: Fusing objectness estimation and saliency detection into a uniform framework
    Zhang, Jianhua
    Zhao, Yanzhu
    Chen, Shengyong
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 53 : 102 - 112
  • [7] A Weighted Sparse Coding Framework for Saliency Detection
    Li, Nianyi
    Sun, Bilin
    Yu, Jingyi
    2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015 : 5216 - 5223
  • [8] An Edge-oriented Framework for Saliency Detection
    Xu, Qingzhen
    Wang, Fengyun
    Wang, Zhoutao
    Gong, Yongyi
    2017 IEEE 17TH INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOENGINEERING (BIBE), 2017 : 388 - 393