Learning an Appearance-Based Gaze Estimator from One Million Synthesised Images

Cited: 187
Authors
Wood, Erroll [1 ]
Baltrusaitis, Tadas [2 ]
Morency, Louis-Philippe [2 ]
Robinson, Peter [1 ]
Bulling, Andreas [3 ]
Affiliations
[1] Univ Cambridge, Cambridge CB2 1TN, England
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Max Planck Inst Informat, Saarbrucken, Germany
Keywords
appearance-based gaze estimation; learning-by-synthesis; 3D morphable model; real-time rendering;
DOI
10.1145/2857491.2857492
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Learning-based methods for appearance-based gaze estimation achieve state-of-the-art performance in challenging real-world settings but require large amounts of labelled training data. Learning-by-synthesis was proposed as a promising solution to this problem, but current methods are limited with respect to speed, appearance variability, and the head pose and gaze angle distribution they can synthesize. We present UnityEyes, a novel method to rapidly synthesize large amounts of variable eye region images as training data. Our method combines a novel generative 3D model of the human eye region with a real-time rendering framework. The model is based on high-resolution 3D face scans and uses real-time approximations for complex eyeball materials and structures as well as anatomically inspired procedural geometry methods for eyelid animation. We show that these synthesized images can be used to estimate gaze in difficult in-the-wild scenarios, even for extreme gaze angles or in cases in which the pupil is fully occluded. We also demonstrate competitive gaze estimation results on a benchmark in-the-wild dataset, despite only using a light-weight nearest-neighbor algorithm. We are making our UnityEyes synthesis framework available online for the benefit of the research community.
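The abstract notes that gaze is estimated from the synthesized images using only a light-weight nearest-neighbor algorithm. As an illustrative sketch only (not the authors' released code), the snippet below shows a plain k-nearest-neighbor regressor over flattened eye patches; the 36x60 patch size, k = 5, and the random stand-in training data are assumptions made here for demonstration.

```python
# Minimal nearest-neighbor gaze estimation sketch (illustrative, not the paper's code).
# Assumes synthesized training images are already rendered and labelled with
# (yaw, pitch) gaze angles in radians; random arrays stand in for them below.
import numpy as np

def fit_knn(train_images, train_gaze):
    """Store flattened training images and their gaze labels."""
    X = train_images.reshape(len(train_images), -1).astype(np.float32)
    y = np.asarray(train_gaze, dtype=np.float32)
    return X, y

def predict_gaze(model, query_image, k=5):
    """Average the gaze labels of the k nearest training images (L2 distance)."""
    X, y = model
    q = query_image.reshape(-1).astype(np.float32)
    dists = np.linalg.norm(X - q, axis=1)      # distance to every training sample
    nearest = np.argpartition(dists, k)[:k]    # indices of the k closest images
    return y[nearest].mean(axis=0)             # mean (yaw, pitch) of the neighbors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in data: 1000 "synthesized" 36x60 eye patches with random gaze labels.
    train_imgs = rng.random((1000, 36, 60))
    train_gaze = rng.uniform(-0.5, 0.5, size=(1000, 2))  # (yaw, pitch) in radians
    model = fit_knn(train_imgs, train_gaze)
    print(predict_gaze(model, rng.random((36, 60)), k=5))
```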
Pages: 131 - 138
Page count: 8
Related Papers
50 records in total
  • [31] Real-time gaze tracking with appearance-based models
    Orozco, Javier
    Roca, F. Xavier
    Gonzàlez, Jordi
    MACHINE VISION AND APPLICATIONS, 2009, 20 : 353 - 364
  • [32] Evaluation of Appearance-Based Methods and Implications for Gaze-Based Applications
    Zhang, Xucong
    Sugano, Yusuke
    Bulling, Andreas
    CHI 2019: PROCEEDINGS OF THE 2019 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, 2019,
  • [33] Real-time gaze tracking with appearance-based models
    Orozco, Javier
    Xavier Roca, F.
    Gonzalez, Jordi
    MACHINE VISION AND APPLICATIONS, 2009, 20 (06) : 353 - 364
  • [34] iMon: Appearance-based Gaze Tracking System on Mobile Devices
    Huynh, Sinh
    Balan, Rajesh Krishna
    Ko, JeongGil
    PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2021, 5 (04):
  • [35] A Functional Usability Analysis of Appearance-Based Gaze Tracking for Accessibility
    Park, Youn Soo
    Manduchi, Roberto
    PROCEEDINGS OF THE 2024 ACM SYMPOSIUM ON EYE TRACKING RESEARCH & APPLICATIONS, ETRA 2024, 2024,
  • [36] Appearance-based gaze estimation under slight head motion
    Guo, Zhizhi
    Zhou, Qianxiang
    Liu, Zhongqi
    MULTIMEDIA TOOLS AND APPLICATIONS, 2017, 76 (02) : 2203 - 2222
  • [37] Appearance-Based Gaze Estimation Using Dilated-Convolutions
    Chen, Zhaokang
    Shi, Bertram E.
    COMPUTER VISION - ACCV 2018, PT VI, 2019, 11366 : 309 - 324
  • [38] Appearance-based Gaze Estimation using Attention and Difference Mechanism
    Murthy, L. R. D.
    Biswas, Pradipta
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 3137 - 3146
  • [39] Iris Geometric Transformation Guided Deep Appearance-Based Gaze Estimation
    Nie, Wei
    Wang, Zhiyong
    Ren, Weihong
    Zhang, Hanlin
    Liu, Honghai
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 1616 - 1631
  • [40] A Head Pose-free Approach for Appearance-based Gaze Estimation
    Lu, Feng
    Okabe, Takahiro
    Sugano, Yusuke
    Sato, Yoichi
    PROCEEDINGS OF THE BRITISH MACHINE VISION CONFERENCE 2011, 2011,