Low-level and high-level prior learning for visual saliency estimation

Cited by: 24
Authors
Song, Mingli [1 ]
Chen, Chun [1 ]
Wang, Senlin [1 ]
Yang, Yezhou [2 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci, Hangzhou 310027, Zhejiang, Peoples R China
[2] Univ Maryland, Dept Comp Sci, College Pk, MD 20742 USA
Keywords
Visual saliency estimation; Low-level prior learning; High-level prior learning; SUPPORT VECTOR MACHINES; SCENE; ATTENTION; FEATURES; PREDICT;
DOI
10.1016/j.ins.2013.09.036
Chinese Library Classification
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Visual saliency estimation is an important problem in multimedia modeling and computer vision, and constitutes a research field that has been studied for decades. Many approaches have been proposed to solve this problem. In this study, we consider the visual attention problem with respect to two aspects: low-level prior learning and high-level prior learning. On the one hand, inspired by the concept of chance of happening, the low-level priors, i.e., Color Statistics-based Priors (CSP) and Spatial Correlation-based Priors (SCP), are learned to describe the color distribution and contrast distribution in natural images. On the other hand, the high-level priors, i.e., the relative relationships between objects, are learned to describe the conditional priority between different objects in the images. In particular, we first learn the low-level priors statistically from a large set of natural images. Then, the high-level priors are learned to construct a conditional probability matrix that reflects the relative relationship between different objects. Subsequently, a saliency model is presented by integrating the low-level priors, the high-level priors and the Center Bias Prior (CBP), in which the weights corresponding to the low-level and high-level priors are learned from an eye-tracking data set. The experimental results demonstrate that our approach outperforms existing techniques. (C) 2013 Elsevier Inc. All rights reserved.
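The abstract describes a saliency model built as a weighted combination of prior maps (CSP, SCP, high-level object priors, and the CBP), with the weights learned against eye-tracking data. The sketch below is a minimal illustration of that combination step only, not the authors' implementation: it assumes each prior has already been computed as a per-pixel map, and fits non-negative combination weights to a fixation-density map by least squares. All function names, variables, and the synthetic data are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares


def combine_priors(prior_maps, weights):
    """Weighted sum of K per-pixel prior maps, each of shape (H, W)."""
    stack = np.stack(prior_maps, axis=0)            # (K, H, W)
    return np.tensordot(weights, stack, axes=1)     # (H, W) saliency map


def fit_weights(prior_maps, fixation_density):
    """Fit non-negative combination weights against a fixation-density map
    (a hypothetical stand-in for the paper's eye-tracking-based learning step)."""
    A = np.stack([p.ravel() for p in prior_maps], axis=1)  # (H*W, K)
    b = fixation_density.ravel()
    w, _ = nnls(A, b)
    return w


if __name__ == "__main__":
    H, W = 48, 64
    rng = np.random.default_rng(0)

    # Hypothetical prior maps: color-statistics prior, spatial-correlation prior,
    # high-level object prior, and a Gaussian center-bias prior.
    yy, xx = np.mgrid[0:H, 0:W]
    cbp = np.exp(-(((yy - H / 2) / (0.3 * H)) ** 2
                   + ((xx - W / 2) / (0.3 * W)) ** 2))
    csp, scp, hlp = rng.random((H, W)), rng.random((H, W)), rng.random((H, W))
    priors = [csp, scp, hlp, cbp]

    # Synthetic "fixation density" used only to exercise the fitting step.
    target = 0.2 * csp + 0.3 * scp + 0.1 * hlp + 0.4 * cbp

    w = fit_weights(priors, target)
    saliency = combine_priors(priors, w)
    print("learned weights:", np.round(w, 3))
```

As a usage note, the same `fit_weights` call would be made once over training images with recorded fixations, and `combine_priors` then applied to new images with the learned weights held fixed.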
Pages: 573-585
Page count: 13