Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks

Times Cited: 6
Authors
Kim, Jung-Hwa [1 ]
Jeong, Jin-Woo [1 ]
Affiliations
[1] Kumoh Natl Inst Technol, Dept Comp Engn, Gumi 39177, South Korea
Funding
National Research Foundation of Singapore
Keywords
adversarial network; deep learning; gaze estimation; low-light environment; tracking
DOI
10.3390/s20174935
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Discipline Codes
070302; 081704
Abstract
In smart interactive environments, such as digital museums or digital exhibition halls, it is important to accurately understand the user's intent to ensure successful and natural interaction with the exhibition. For predicting user intent, gaze estimation has been considered one of the most effective indicators among recently developed interaction techniques (e.g., face orientation estimation, body tracking, and gesture recognition). Previous gaze estimation techniques, however, are known to be effective only in controlled lab environments under normal lighting conditions. In this study, we propose a novel deep learning-based approach that achieves successful gaze estimation under various low-light conditions, which is anticipated to be more practical for smart interaction scenarios. The proposed approach utilizes a generative adversarial network (GAN) to enhance users' eye images captured under low-light conditions, thereby restoring information missing for gaze estimation. The GAN-recovered images are then fed as input into a convolutional neural network to estimate the direction of the user's gaze. Our experimental results on the modified MPIIGaze dataset demonstrate that the proposed approach achieves an average performance improvement of 4.53-8.9% under low and dark light conditions, which is a promising step toward further research.
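The abstract above describes a two-stage pipeline: a GAN generator first enhances the low-light eye image, and a CNN then regresses the gaze direction from the enhanced image. Below is a minimal, hypothetical PyTorch sketch of such a pipeline; the class names, layer sizes, and the 36x60 MPIIGaze eye-patch resolution are illustrative assumptions, not the authors' exact architecture, and the GAN discriminator and all training code are omitted.

```python
# Hypothetical sketch of the two-stage pipeline from the abstract:
# (1) a GAN generator enhances a low-light eye image,
# (2) a CNN regresses the 2D gaze direction (yaw, pitch).
# All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class EnhancementGenerator(nn.Module):
    """Encoder-decoder generator mapping a low-light grayscale eye image
    (1 x 36 x 60, the normalized MPIIGaze patch size) to an enhanced image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # pixels in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class GazeCNN(nn.Module):
    """CNN regressor predicting (yaw, pitch) angles from the enhanced image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, 5), nn.ReLU(), nn.MaxPool2d(2),   # 36x60 -> 16x28
            nn.Conv2d(20, 50, 5), nn.ReLU(), nn.MaxPool2d(2),  # -> 6x12
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(50 * 6 * 12, 500), nn.ReLU(),
            nn.Linear(500, 2),  # yaw and pitch, e.g. in radians
        )

    def forward(self, x):
        return self.head(self.features(x))

# Inference: enhance a (simulated) dark eye image, then estimate gaze.
generator, gaze_net = EnhancementGenerator().eval(), GazeCNN().eval()
low_light_eye = torch.rand(1, 1, 36, 60) * 0.1  # stand-in for a dark image
with torch.no_grad():
    enhanced = generator(low_light_eye)
    yaw_pitch = gaze_net(enhanced)
print(yaw_pitch.shape)  # torch.Size([1, 2])
```

In a full implementation, the generator would be trained adversarially against a discriminator on pairs of low-light and well-lit eye images, after which the regressor would be trained (or fine-tuned) on the enhanced outputs.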
Pages: 1-20
Page count: 20
Related Papers
50 records in total
  • [1] Eye Gaze Correction Using Generative Adversarial Networks
    Yamamoto, Takahiko
    Seo, Masataka
    Kitajima, Toshihiko
    Chen, Yen-Wei
    2018 IEEE 7TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE 2018), 2018, : 276 - 277
  • [2] Low-light image enhancement using generative adversarial networks
    Wang, Litian
    Zhao, Liquan
    Zhong, Tie
    Wu, Chunming
SCIENTIFIC REPORTS, 2024, 14 (01)
  • [3] GSA-Gaze: Generative Self-adversarial Learning for Domain Generalized Driver Gaze Estimation
    Han, Hongcheng
    Tian, Zhiqiang
    Liu, Yuying
    Li, Shengpeng
    Zhang, Dong
    Du, Shaoyi
    2023 IEEE 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, ITSC, 2023, : 1610 - 1615
  • [4] Generative adversarial network for low-light image enhancement
    Li, Fei
    Zheng, Jiangbin
    Zhang, Yuan-fang
    IET IMAGE PROCESSING, 2021, 15 (07) : 1542 - 1552
  • [5] Low-light image enhancement base on brightness attention mechanism generative adversarial networks
    Fu, Jiarun
    Yan, Lingyu
    Peng, Yulin
    Zheng, KunPeng
    Gao, Rong
    Ling, HeFei
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (04) : 10341 - 10365
  • [6] Photo-Realistic Monocular Gaze Redirection Using Generative Adversarial Networks
    He, Zhe
    Spurr, Adrian
    Zhang, Xucong
    Hilliges, Otmar
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 6931 - 6940
  • [7] Low-Light Image Enhancement Based on Generative Adversarial Network
    Abirami, R. Nandhini
    Vincent, P. M. Durai Raj
    FRONTIERS IN GENETICS, 2021, 12
  • [8] GazeREC-Net: Advancing Gaze Restoration in Low-Light Conditions
    Ku, Jiayin
    Wang, Li
    IAENG International Journal of Computer Science, 2024, 51 (12) : 2034 - 2042
  • [9] Deep Future Gaze: Gaze Anticipation on Egocentric Videos Using Adversarial Networks
    Zhang, Mengmi
    Ma, Keng Teck
    Lim, Joo Hwee
    Zhao, Qi
    Feng, Jiashi
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3539 - 3548