Low-level and high-level prior learning for visual saliency estimation

Cited by: 24
Authors
Song, Mingli [1 ]
Chen, Chun [1 ]
Wang, Senlin [1 ]
Yang, Yezhou [2 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci, Hangzhou 310027, Zhejiang, Peoples R China
[2] Univ Maryland, Dept Comp Sci, College Pk, MD 20742 USA
Keywords
Visual saliency estimation; Low-level prior learning; High-level prior learning; Support vector machines; Scene; Attention; Features; Predict
DOI
10.1016/j.ins.2013.09.036
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Visual saliency estimation is an important problem in multimedia modeling and computer vision, and constitutes a research field that has been studied for decades. Many approaches have been proposed to solve this problem. In this study, we consider the visual attention problem with respect to two aspects: low-level prior learning and high-level prior learning. On the one hand, inspired by the concept of chance of happening, the low-level priors, i.e., Color Statistics-based Priors (CSP) and Spatial Correlation-based Priors (SCP), are learned to describe the color distribution and contrast distribution in natural images. On the other hand, the high-level priors, i.e., the relative relationships between objects, are learned to describe the conditional priority between different objects in the images. In particular, we first learn the low-level priors statistically from a large set of natural images. Then, the high-level priors are learned to construct a conditional probability matrix that reflects the relative relationship between different objects. Subsequently, a saliency model is presented by integrating the low-level priors, the high-level priors and the Center Bias Prior (CBP), in which the weights corresponding to the low-level priors and the high-level priors are learned from an eye-tracking data set. The experimental results demonstrate that our approach outperforms existing techniques. (C) 2013 Elsevier Inc. All rights reserved.
Pages: 573-585
Number of pages: 13
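
As a quick illustration of the fusion step described in the abstract, the sketch below combines low-level prior maps (CSP, SCP), a high-level prior map, and a Gaussian center bias into a single saliency map via a weighted sum. This is a minimal sketch, not the authors' implementation: the function names, the isotropic-Gaussian form of the Center Bias Prior, and the linear combination rule are assumptions; in the paper, the weights on the low-level and high-level priors are learned from eye-tracking data rather than fixed by hand.

import numpy as np

def combine_priors(csp_map, scp_map, high_level_map, weights, sigma_frac=0.25):
    """Fuse prior maps into one saliency map (illustrative sketch only)."""
    h, w = csp_map.shape

    # Center Bias Prior (CBP): modeled here as an isotropic Gaussian
    # centered on the image, a common assumption in saliency models.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma = sigma_frac * max(h, w)
    cbp = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

    # Weighted linear combination of the prior maps; the paper learns the
    # weights on the low-level and high-level priors from eye-tracking data.
    w_csp, w_scp, w_high, w_cbp = weights
    saliency = (w_csp * csp_map + w_scp * scp_map
                + w_high * high_level_map + w_cbp * cbp)

    # Normalize to [0, 1] for comparison and visualization.
    saliency -= saliency.min()
    if saliency.max() > 0:
        saliency /= saliency.max()
    return saliency

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w = 120, 160
    # Stand-in prior maps; in practice these would come from the learned
    # color-statistics, spatial-correlation, and object-relationship priors.
    csp = rng.random((h, w))
    scp = rng.random((h, w))
    high = rng.random((h, w))
    sal = combine_priors(csp, scp, high, weights=(0.3, 0.3, 0.2, 0.2))
    print(sal.shape, float(sal.min()), float(sal.max()))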