Extraction of visual attention with gaze duration and saliency map

Cited: 0
Authors
Igarashi, Hiroshi [1 ]
Suzuki, Satoshi
Sugita, Tetsuro [2 ]
Kurisu, Masamitsu [3 ]
Kakikura, Masayoshi [2 ]
Affiliations
[1] Tokyo Denki Univ, 21st Century COE Project Off, Chiyoda Ku, 1202 Akihabara Daibiru, Tokyo, Japan
[2] Tokyo Denki Univ, Dept Elect Engn, 1202 Akihabara Daibiru, Tokyo, Japan
[3] Tokyo Denki Univ, Dept Mech Engn, 1202 Akihabara Daibiru, Tokyo, Japan
Keywords
DOI
None available
CLC number
TP [automation technology, computer technology];
Subject classification code
0812;
Abstract
Gaze measurement is effective for evaluating a human operator's attention, operation skill, perceptual capability, and so on. In particular, gaze duration, known as fixation time, is often used: a long fixation time is generally observed when the operator intentionally attends to something. However, fixation duration also depends on the saliency of the displayed image, and human perception is especially sensitive to image intensities. Although many researchers have presented models of visual attention based on the saliency map, a highly salient region may attract the gaze even when the observer is not attending to it. Therefore, to estimate human attention, we consider human vision characteristics, in particular foveal vision. Foveal vision is used to scrutinize highly detailed objects and may also relate to attention. In this paper, we propose a new approach to estimating human visual attention by combining gaze duration with a saliency map that accounts for foveal vision characteristics. The technique was tested with five participants, and the results show that it detects attention more reliably than a conventional technique that considers gaze duration alone.
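The core idea of the abstract can be sketched in code: discount a fixation's duration by the saliency that the fovea actually sees, so that a long fixation on a low-saliency region counts as stronger evidence of intentional attention than an equally long fixation on a bright, bottom-up-attracting patch. This is an illustrative sketch, not the authors' actual method; the Gaussian foveal fall-off, the `sigma` value, and the scoring formula are all assumptions.

```python
import numpy as np

def foveal_weight(h, w, cx, cy, sigma=20.0):
    """2-D Gaussian centred on the gaze point, mimicking the fall-off
    of foveal acuity (sigma in pixels is an assumed parameter)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def attention_score(saliency, fixation, duration, sigma=20.0):
    """Heuristic score for one fixation: divide its duration by the
    foveally weighted local saliency. Long duration on LOW saliency
    suggests top-down attention; long duration on HIGH saliency may
    be mere bottom-up capture (illustrative assumption only)."""
    h, w = saliency.shape
    cx, cy = fixation
    wgt = foveal_weight(h, w, cx, cy, sigma)
    local_sal = (saliency * wgt).sum() / wgt.sum()  # saliency seen by the fovea
    return duration / (1.0 + local_sal)             # higher => more likely intentional

# Usage: the same 0.6 s fixation on a flat map vs. on a bright patch.
sal = np.zeros((100, 100))
low_sal_score = attention_score(sal, (50, 50), duration=0.6)
sal[40:60, 40:60] = 1.0
high_sal_score = attention_score(sal, (50, 50), duration=0.6)
assert low_sal_score > high_sal_score  # less saliency -> more attention credited
```

With identical durations, the fixation on the empty map is credited with more attention, which matches the paper's motivation that saliency can attract gaze without attention.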
Pages: 291 / +
Page count: 2
Related papers
50 items in total
  • [21] Implementation of visual attention system using bottom-up saliency map model
    Park, SJ
    Ban, SW
    Shin, JK
    Lee, M
    ARTIFICIAL NEURAL NETWORKS AND NEURAL INFORMATION PROCESSING - ICANN/ICONIP 2003, 2003, 2714 : 678 - 685
  • [22] Connecting Gaze, Scene, and Attention: Generalized Attention Estimation via Joint Modeling of Gaze and Scene Saliency
    Chong, Eunji
    Ruiz, Nataniel
    Wang, Yongxin
    Zhang, Yun
    Rozga, Agata
    Rehg, James M.
    COMPUTER VISION - ECCV 2018, PT V, 2018, 11209 : 397 - 412
  • [23] A saliency map in primary visual cortex
    Li, ZP
    TRENDS IN COGNITIVE SCIENCES, 2002, 6 (01) : 9 - 16
  • [24] Multiscale Discriminant Saliency for Visual Attention
    Anh Cat Le Ngo
    Ang, Kenneth Li-Minn
    Qiu, Guoping
    Kah-Phooi, Jasmine Seng
    COMPUTATIONAL SCIENCE AND ITS APPLICATIONS, PT I, 2013, 7971 : 464 - 484
  • [25] Multiscale discriminant saliency for visual attention
    Le Ngo, Anh Cat
    Springer Verlag, 7971
  • [26] Diffuse visual attention for saliency detection
    Liu, Risheng
    Zhong, Guangyu
    Cao, Junjie
    Su, Zhixun
    JOURNAL OF ELECTRONIC IMAGING, 2015, 24 (01)
  • [27] Saliency, Visual Attention and Image Quality
    Fredembach, Clement
    Wang, Jue
    Woolfe, Geoff J.
    COLOR SCIENCE AND ENGINEERING SYSTEMS, TECHNOLOGIES, AND APPLICATIONS: EIGHTEENTH COLOR AND IMAGING CONFERENCE, 2010, : 128 - 133
  • [28] SalGaze: Personalizing Gaze Estimation using Visual Saliency
    Chang, Zhuoqing
    Di Martino, Matias
    Qiu, Qiang
    Espinosa, Steven
    Sapiro, Guillermo
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 1169 - 1178
  • [29] Video Attention Deviation Estimation using Inter-Frame Visual Saliency Map Analysis
    Feng, Yunlong
    Cheung, Gene
    Le Callet, Patrick
    Ji, Yusheng
    VISUAL INFORMATION PROCESSING AND COMMUNICATION III, 2012, 8305
  • [30] Olfaction spontaneously highlights visual saliency map
    Chen, Kepu
    Zhou, Bin
    Chen, Shan
    He, Sheng
    Zhou, Wen
    PROCEEDINGS OF THE ROYAL SOCIETY B-BIOLOGICAL SCIENCES, 2013, 280 (1768)