Accurate Regression-Based 3D Gaze Estimation Using Multiple Mapping Surfaces

Cited by: 2
Authors
Wan, Zhonghua [1 ]
Xiong, Caihua [1 ]
Li, Quanlin [1 ]
Chen, Wenbin [1 ]
Wong, Kelvin Kian Loong [2 ]
Wu, Shiqian [2 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Inst Rehabil & Med Robot, State Key Lab Digital Mfg Equipment & Technol, Wuhan 430074, Peoples R China
[2] Wuhan Univ Sci & Technol, Inst Robot & Intelligent Syst, Sch Informat Sci & Engn, Wuhan 430081, Peoples R China
Source
IEEE ACCESS | 2020 / Vol. 8
Funding
National Natural Science Foundation of China;
Keywords
Estimation; Calibration; Cameras; Three-dimensional displays; Head; Gaze tracking; Two dimensional displays; Head-mounted eye tracking; 3D gaze estimation; gaze direction estimation; eyeball center; mapping surface; TRACKING;
DOI
10.1109/ACCESS.2020.3023448
CLC Number
TP [Automation technology, computer technology];
Discipline Classification Code
0812;
Abstract
Accurate 3D gaze estimation with a simple setup remains a challenging problem for head-mounted eye tracking. Current regression-based gaze direction estimation methods implicitly assume that all gaze directions intersect at a single point, called the eyeball pseudo-center, and the effect of this assumption on gaze estimation has been unknown. In this paper, a simulation of all intersections of gaze directions shows that the assumption holds only approximately, and a sensitivity analysis shows that it holds only under certain conditions. We therefore propose a gaze direction estimation method with one mapping surface that satisfies the conditions of the assumption by configuring the mapping surface appropriately and achieving a high-quality calibration of the eyeball pseudo-center; this method requires only two additional calibration points outside the mapping surface. Furthermore, by replacing the eyeball pseudo-center with an additional calibrated surface, we propose a gaze direction estimation method with two mapping surfaces that further improves accuracy. This method improves on the state-of-the-art method by 20 percent (mean error reduced from 1.84 degrees to 1.48 degrees) on a public dataset with a usage range of 1 meter, and by 17 percent (from 2.22 degrees to 1.85 degrees) on a public dataset with a usage range of 2 meters.
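The core building block the abstract refers to, a regression-based "mapping surface," is a fitted mapping from 2D pupil positions in the eye camera to 2D gaze points on a calibration plane. The sketch below shows this idea with a second-order polynomial basis and least squares; the feature basis and function names are illustrative assumptions, not the paper's exact model or implementation.

```python
import numpy as np

def poly_features(p):
    # Second-order polynomial features of a 2D pupil position (x, y):
    # [1, x, y, x*y, x^2, y^2] -- a common basis in regression-based
    # gaze mapping (an illustrative choice, not the paper's exact model).
    x, y = p
    return np.array([1.0, x, y, x * y, x * x, y * y])

def fit_mapping(pupil_pts, gaze_pts):
    # Fit one "mapping surface": least-squares regression from pupil
    # positions to 2D gaze points on a calibration plane.
    A = np.array([poly_features(p) for p in pupil_pts])   # (N, 6)
    B = np.asarray(gaze_pts, dtype=float)                 # (N, 2)
    coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)        # (6, 2)
    return coeffs

def map_gaze(coeffs, pupil):
    # Predict the gaze point on the calibration plane
    # for a new pupil position.
    return poly_features(pupil) @ coeffs
```

Fitting one such mapping per calibration plane and intersecting the resulting rays (or, in the paper's two-surface variant, connecting corresponding points on two calibrated surfaces) is what turns these 2D mappings into a 3D gaze direction.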
Pages: 166460-166471
Page count: 12
Related Papers
50 records in total
  • [1] Regression-based 3D Hand Pose Estimation using Heatmaps
    Bandi, Chaitanya
    Thomas, Ulrike
    PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS, VOL 5: VISAPP, 2020, : 636 - 643
  • [2] Wearable Binocular Eye Tracking targets in 3-D Environment Using 2-D Regression-based Gaze Estimation
    Chang, Chi-Jeng
    Huang, Chi-Wu
    Hu, Chun-Wei
    2016 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS-TAIWAN (ICCE-TW), 2016, : 227 - 228
  • [3] Regression-based convolutional 3D pose estimation from single image
    Ershadi-Nasab, S.
    Kasaei, S.
    Sanaei, E.
    ELECTRONICS LETTERS, 2018, 54 (05) : 292 - 293
  • [4] Regression-Based 3D Hand Pose Estimation for Human-Robot Interaction
    Bandi, Chaitanya
    Thomas, Ulrike
    COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS, VISIGRAPP 2020, 2022, 1474 : 507 - 529
  • [5] Model-Based 3D Gaze Estimation Using a TOF Camera
    Shen, Kuanxin
    Li, Yingshun
    Guo, Zhannan
    Gao, Jintao
    Wu, Yingjian
    SENSORS, 2024, 24 (04)
  • [6] A Regression-Based User Calibration Framework for Real-Time Gaze Estimation
    Arar, Nuri Murat
    Gao, Hua
    Thiran, Jean-Philippe
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2017, 27 (12) : 2623 - 2638
  • [8] Region-wise Polynomial Regression for 3D Mobile Gaze Estimation
    Su, Dan
    Li, You Fu
    Chen, Hao
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 907 - 913
  • [9] Highly Accurate and Fully Automatic 3D Head Pose Estimation and Eye Gaze Estimation Using RGB-D Sensors and 3D Morphable Models
    Ghiass, Reza Shoja
    Arandjelovic, Ognjen
    Laurendeau, Denis
    SENSORS, 2018, 18 (12)
  • [10] 3D gaze estimation and interaction
    Ki, Jeongseok
    Kwon, Yong-Moo
    2008 3DTV-CONFERENCE: THE TRUE VISION - CAPTURE, TRANSMISSION AND DISPLAY OF 3D VIDEO, 2008, : 353 - 356