MYFix: Automated Fixation Annotation of Eye-Tracking Videos

Cited by: 1
Authors
Alinaghi, Negar [1 ]
Hollendonner, Samuel [1 ]
Giannopoulos, Ioannis [1 ]
Affiliations
[1] Vienna Univ Technol, Res Div Geoinformat, Wiedner Hauptstr 8-E120, A-1040 Vienna, Austria
Keywords
automatic fixation annotation; object detection; semantic segmentation; outdoor mobile eye-tracking; gaze
DOI
10.3390/s24092666
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
In mobile eye-tracking research, the automatic annotation of fixation points is an important yet difficult task, especially in varied and dynamic environments such as outdoor urban landscapes, where both the observer and the surrounding scene are in constant motion. This paper presents a novel approach that integrates two foundation models, YOLOv8 and Mask2Former, into a pipeline that automatically annotates fixation points without requiring additional training or fine-tuning. The pipeline leverages YOLOv8's training on the MS COCO dataset for object detection and Mask2Former's training on the Cityscapes dataset for semantic segmentation. This integration not only streamlines the annotation process but also improves accuracy and consistency, yielding reliable annotations even in complex scenes with multiple objects side by side or at different depths. Validation through two experiments demonstrates the method's effectiveness, achieving 89.05% accuracy in a controlled data-collection setting and 81.50% accuracy in a real-world outdoor wayfinding scenario. With an average runtime of 1.61 ± 0.35 s per frame, the approach stands as a robust solution for automatic fixation annotation.
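The two-model design described in the abstract can be sketched in a few lines of Python. The snippet below is a minimal illustration of the idea, not the authors' released pipeline: the checkpoint names, the annotate_fixation function, and the simple point-in-box / pixel-label lookup are assumptions for demonstration, and the paper's actual fusion of the two outputs (e.g., resolving overlapping objects at different depths) is more involved.

```python
# Minimal sketch (not the authors' code) of the two-model annotation idea:
# YOLOv8 (trained on MS COCO) for object detection and Mask2Former (trained
# on Cityscapes) for semantic segmentation, with the fixation point looked
# up in both outputs. Checkpoint names below are illustrative assumptions.
import torch
from PIL import Image
from ultralytics import YOLO
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

detector = YOLO("yolov8x.pt")  # COCO-pretrained weights
ckpt = "facebook/mask2former-swin-large-cityscapes-semantic"
processor = AutoImageProcessor.from_pretrained(ckpt)
segmenter = Mask2FormerForUniversalSegmentation.from_pretrained(ckpt)

def annotate_fixation(frame: Image.Image, fx: int, fy: int) -> dict:
    """Return candidate labels for the fixation at pixel (fx, fy)."""
    labels = {}

    # 1) Object detection: report the COCO class of a box containing the fixation.
    det = detector(frame, verbose=False)[0]
    for box in det.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        if x1 <= fx <= x2 and y1 <= fy <= y2:
            labels["detection"] = det.names[int(box.cls)]
            break

    # 2) Semantic segmentation: report the Cityscapes class of the fixated pixel.
    inputs = processor(images=frame, return_tensors="pt")
    with torch.no_grad():
        outputs = segmenter(**inputs)
    # target_sizes expects (height, width); PIL's .size is (width, height).
    seg = processor.post_process_semantic_segmentation(
        outputs, target_sizes=[frame.size[::-1]])[0]
    labels["segmentation"] = segmenter.config.id2label[int(seg[fy, fx])]
    return labels

# Example usage: label the fixation at pixel (640, 360) in one video frame.
# frame = Image.open("frame_000123.png")
# print(annotate_fixation(frame, 640, 360))
```

Running both models per frame keeps the two label sources independent, which matches the abstract's point that detection handles discrete objects while segmentation covers background classes such as road or building.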
Pages: 21
Related Papers (50 in total)
  • [11] Suitability of calibration polynomials for eye-tracking data with simulated fixation inaccuracies
    Rosengren, William
    Nystrom, Marcus
    Hammar, Bjorn
    Stridh, Martin
    2018 ACM SYMPOSIUM ON EYE TRACKING RESEARCH & APPLICATIONS (ETRA 2018), 2018
  • [12] Increased Neck Visual Fixation in Children With Tracheostomies: An Eye-Tracking Study
    Mavedatnia, Dorsa
    Levinsky, Justin
    Miao, Siyu
    Chopra, Meera
    Lim, Rachel
    Tepsich, Meghan
    Propst, Evan J.
    Wolter, Nikolaus E.
    Siu, Jennifer M.
    LARYNGOSCOPE, 2025
  • [13] Fixation duration and the learning process: an eye tracking study with subtitled videos
    Negi, Shivsevak
    Mitra, Ritayan
    JOURNAL OF EYE MOVEMENT RESEARCH, 2020, 13 (06) : 1 - 15
  • [14] Eye-tracking in linguistics
    Vulchanova, Mila
    Kosutar, Sara
    LANGUAGE AND DIALOGUE, 2024, 14 (03) : 492 - 498
  • [15] Eye-tracking in interaction
    Ansani, Alessandro
    CORPUS PRAGMATICS, 2020, 4 (04) : 473 - 477
  • [16] Eye-Tracking Causality
    Gerstenberg, Tobias
    Peterson, Matthew F.
    Goodman, Noah D.
    Lagnado, David A.
    Tenenbaum, Joshua B.
    PSYCHOLOGICAL SCIENCE, 2017, 28 (12) : 1731 - 1744
  • [17] Eye Fixation Versus Pupil Diameter as Eye-Tracking Features for Virtual Reality Emotion Classification
    Zheng, Lim Jia
    Mountstephens, James
    Teo, Jason
    2021 IEEE INTERNATIONAL CONFERENCE ON COMPUTING (ICOCO), 2021, : 315 - 319
  • [18] An automated test of infant visual acuity using remote eye-tracking
    Jones, Pete R.
    Kalwarowsky, Sarah
    Wattam-Bell, John
    Nardini, Marko
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2014, 55 (13)
  • [19] Automated Measurement of Resolution Acuity in Infants Using Remote Eye-Tracking
    Jones, Pete R.
    Kalwarowsky, Sarah
    Atkinson, Janette
    Braddick, Oliver J.
    Nardini, Marko
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2014, 55 (12) : 8102 - 8110
  • [20] Availability Test of Automated Teller Machine Based on Eye-tracking Data
    Xu, Junjie
    Wang, Ying
    Lv, Fuqiang
    2018 11TH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID), VOL 1, 2018, : 161 - 164