Extraction of visual attention with gaze duration and saliency map

Cited: 0
Authors
Igarashi, Hiroshi [1 ]
Suzuki, Satoshi
Sugita, Tetsuro [2 ]
Kurisu, Masamitsu [3 ]
Kakikura, Masayoshi [2 ]
Affiliations
[1] Tokyo Denki Univ, 21st Century COE Project Office, Chiyoda-ku, 1202 Akihabara Daibiru, Tokyo, Japan
[2] Tokyo Denki Univ, Dept Elect Engn, 1202 Akihabara Daibiru, Tokyo, Japan
[3] Tokyo Denki Univ, Dept Mech Engn, 1202 Akihabara Daibiru, Tokyo, Japan
DOI: Not available
CLC Classification: TP [Automation Technology, Computer Technology]
Subject Classification Code: 0812
Abstract
Measurement of gaze is effective for evaluating a human operator's attention, operation skills, perceptual capability, and so on. In particular, gaze duration, called fixation time, is often used: a long fixation time is generally taken to indicate that the operator is intentionally paying attention to something. However, fixation time also depends on the saliency of the displayed image, and human perception is especially sensitive to image intensities. Although many researchers have presented models of visual attention based on the saliency map, a highly salient region may attract the gaze even when the observer is not attending to it. Therefore, to estimate human attention, we take into account the characteristics of foveal vision. Foveal vision is used for scrutinizing highly detailed objects and is thus likely to be related to attention. In this paper, we propose a new approach to estimating human visual attention by combining gaze duration with a saliency map adjusted for foveal vision characteristics. The technique was evaluated in an experiment with five participants, and the results show that it identifies attention more reliably than the conventional technique, which considers gaze duration alone.
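The record does not reproduce the paper's formulation, so the following is only a minimal Python sketch of the general idea: each fixation's duration is discounted by the bottom-up saliency sampled around the gaze point through a Gaussian weighting that stands in for foveal acuity, so a long fixation on a low-saliency region scores as intentional attention while a glance drawn to a high-saliency region does not. The function names, the Gaussian falloff, and all constants (sigma, threshold) are assumptions for illustration, not the authors' method.

```python
import numpy as np

def foveal_weight(dist_px, sigma_px=60.0):
    # Gaussian falloff standing in for foveal acuity; sigma_px is an
    # assumed value, not taken from the paper.
    return np.exp(-0.5 * (dist_px / sigma_px) ** 2)

def attention_score(fixation, saliency_map, sigma_px=60.0):
    # fixation = (x, y, duration_s); saliency_map normalized to [0, 1].
    # A long fixation on a low-saliency region suggests top-down attention;
    # a fixation on a high-saliency region may be merely stimulus-driven,
    # so saliency discounts the duration.
    x, y, duration_s = fixation
    h, w = saliency_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - x, ys - y)
    weights = foveal_weight(dist, sigma_px)
    foveal_saliency = float((saliency_map * weights).sum() / weights.sum())
    return duration_s * (1.0 - foveal_saliency)

# Usage with synthetic data: fixations as (x, y, duration in seconds).
rng = np.random.default_rng(0)
saliency = rng.random((480, 640))
fixations = [(320, 240, 0.45), (100, 80, 0.12)]
scores = [attention_score(f, saliency) for f in fixations]
attended = [f for f, s in zip(fixations, scores) if s > 0.2]  # assumed threshold
```

A duration-only baseline would instead flag every fixation whose duration exceeds a threshold; the saliency discount above is what distinguishes the combined approach described in the abstract.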
Pages: 291+
Page count: 2