Low-level and high-level prior learning for visual saliency estimation

Cited: 24
Authors
Song, Mingli [1 ]
Chen, Chun [1 ]
Wang, Senlin [1 ]
Yang, Yezhou [2 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci, Hangzhou 310027, Zhejiang, Peoples R China
[2] Univ Maryland, Dept Comp Sci, College Pk, MD 20742 USA
Keywords
Visual saliency estimation; Low-level prior learning; High-level prior learning; SUPPORT VECTOR MACHINES; SCENE; ATTENTION; FEATURES; PREDICT;
DOI
10.1016/j.ins.2013.09.036
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Visual saliency estimation is an important issue in multimedia modeling and computer vision, and constitutes a research field that has been studied for decades. Many approaches have been proposed to solve this problem. In this study, we consider the visual attention problem with respect to two aspects: low-level prior learning and high-level prior learning. On the one hand, inspired by the concept of chance of happening, the low-level priors, i.e., Color Statistics-based Priors (CSP) and Spatial Correlation-based Priors (SCP), are learned to describe the color and contrast distributions in natural images. On the other hand, the high-level priors, i.e., the relative relationships between objects, are learned to describe the conditional priority between different objects in the images. In particular, we first learn the low-level priors statistically from a large set of natural images. Then, the high-level priors are learned to construct a conditional probability matrix that reflects the relative relationship between different objects. Subsequently, a saliency model is presented by integrating the low-level priors, the high-level priors and the Center Bias Prior (CBP), in which the weights corresponding to the low-level and high-level priors are learned from an eye-tracking data set. The experimental results demonstrate that our approach outperforms existing techniques. (C) 2013 Elsevier Inc. All rights reserved.
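The abstract describes a linear integration of the low-level priors (CSP, SCP), the high-level priors, and the Center Bias Prior, with combination weights fitted on eye-tracking data. A minimal sketch of that integration step, assuming each prior has already been computed as a per-pixel map; the function names are illustrative, and a plain least-squares fit stands in for whatever weight-learning procedure the paper actually uses:

```python
import numpy as np

def combine_priors(low_level, high_level, center_bias, weights):
    """Linearly combine prior maps into one saliency map (hypothetical form).

    Each argument is a 2-D array of per-pixel prior values; the result is
    normalized to [0, 1].
    """
    s = (weights[0] * low_level
         + weights[1] * high_level
         + weights[2] * center_bias)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def learn_weights(prior_maps, fixation_map):
    """Fit combination weights against an eye-tracking fixation map.

    Stacks each prior map as one column of a design matrix and solves an
    ordinary least-squares problem; the paper's actual learning scheme may
    differ.
    """
    X = np.stack([p.ravel() for p in prior_maps], axis=1)  # (pixels, n_priors)
    y = fixation_map.ravel()
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```

On a synthetic example where the fixation map is an exact weighted sum of the prior maps, `learn_weights` recovers those weights, and `combine_priors` then yields a map in [0, 1] with the same shape as the inputs.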
Pages: 573-585
Page count: 13
Related Papers
50 records in total
  • [41] High-level to Low-level in Unity with GPU Shader Programming
    Hmeljak, Dimitrij
    PROCEEDINGS OF THE 53RD ACM TECHNICAL SYMPOSIUM ON COMPUTER SCIENCE EDUCATION (SIGCSE 2022), VOL 2, 2022, : 1140 - 1140
  • [42] Reconciling High-Level Optimizations and Low-Level Code in LLVM
    Lee, Juneyoung
    Hur, Chung-Kil
    Jung, Ralf
    Liu, Zhengyang
    Regehr, John
    Lopes, Nuno P.
    PROCEEDINGS OF THE ACM ON PROGRAMMING LANGUAGES-PACMPL, 2018, 2
  • [43] CBIR: From low-level features to high-level semantics
    Zhou, XS
    Huang, TS
    IMAGE AND VIDEO COMMUNICATIONS AND PROCESSING 2000, 2000, 3974 : 426 - 431
  • [44] Unifying Low-Level and High-Level Music Similarity Measures
    Bogdanov, Dmitry
    Serra, Joan
    Wack, Nicolas
    Herrera, Perfecto
    Serra, Xavier
    IEEE TRANSACTIONS ON MULTIMEDIA, 2011, 13 (04) : 687 - 701
  • [45] Image Saliency Detection Based on Low-Level Features and Boundary Prior
    Jia, Chao
    Chen, Weili
    Kong, Fanshu
    PROCEEDINGS OF 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2020), 2020, : 137 - 141
  • [46] LPMs: high-level design uses low-level techniques
    Maxfield, Intergraph Computer Systems
EDN, 10 (7pp)
  • [47] Beyond Saliency: Assessing Visual Balance with High-level Cues
    Kandemir, Baris
    Zhou, Zihan
    Li, Jia
    Wang, James Z.
    PROCEEDINGS OF THE THEMATIC WORKSHOPS OF ACM MULTIMEDIA 2017 (THEMATIC WORKSHOPS'17), 2017, : 26 - 34
  • [48] Informative subspaces for audio-visual processing: High-level function from low-level fusion
    Fisher, JW
    Darrell, T
    2002 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOLS I-IV, PROCEEDINGS, 2002, : 4104 - 4107
  • [49] Deep supervised visual saliency model addressing low-level features
    Zhou L.
    Gu X.
    Journal of Ambient Intelligence and Humanized Computing, 2023, 14 (12) : 15659 - 15672
  • [50] High-Level, but Not Low-Level, Motion Perception Is Impaired in Patients With Schizophrenia
    Kandil, Farid I.
    Pedersen, Anya
    Wehnes, Jana
    Ohrmann, Patricia
    NEUROPSYCHOLOGY, 2013, 27 (01) : 60 - 68