Decoding Attention from Gaze: A Benchmark Dataset and End-to-End Models

Cited: 0
Authors
Uppal, Karan [1 ]
Kim, Jaeah [2 ]
Singh, Shashank [3 ]
Affiliations
[1] Indian Inst Technol, Kharagpur, W Bengal, India
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Max Planck Inst Intelligent Syst, Tubingen, Germany
Source
GAZE MEETS MACHINE LEARNING WORKSHOP, 2022, Vol. 210
Funding
U.S. National Science Foundation
Keywords
Gaze; Eye-Tracking; Deep Learning; Attentional Decoding; VISUAL WORLD PARADIGM; MOUNTED EYE-TRACKING;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Eye-tracking has the potential to provide rich behavioral data about human cognition in ecologically valid environments. However, analyzing this rich data is often challenging. Most automated analyses are specific to simplistic artificial visual stimuli with well-separated, static regions of interest, while most analyses in the context of complex visual stimuli, such as most natural scenes, rely on laborious and time-consuming manual annotation. This paper studies using computer vision tools for "attention decoding", the task of assessing the locus of a participant's overt visual attention over time. We provide a publicly available Multiple Object Eye-Tracking (MOET) dataset, consisting of gaze data from participants tracking specific objects, annotated with labels and bounding boxes, in crowded real-world videos, for training and evaluating attention decoding algorithms. We also propose two end-to-end deep learning models for attention decoding and compare these to state-of-the-art heuristic methods.
Pages: 219-240
Number of pages: 22