Gender and gaze gesture recognition for human-computer interaction

Cited by: 35
Authors
Zhang, Wenhao [1 ]
Smith, Melvyn L. [1 ]
Smith, Lyndon N. [1 ]
Farooq, Abdul [1 ]
Affiliations
[1] Univ W England, Bristol Robot Lab, Ctr Machine Vis, T Block,Frenchay Campus,Coldharbour Lane, Bristol BS16 1QY, Avon, England
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK;
Keywords
Assistive HCI; Gender recognition; Eye centre localisation; Gaze analysis; Directed advertising; CLASSIFICATION; IMAGES; SHAPE;
DOI
10.1016/j.cviu.2016.03.014
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The identification of visual cues in facial images has been widely explored in the broad area of computer vision. However, theoretical analyses are often not transformed into widespread assistive Human-Computer Interaction (HCI) systems, due to factors such as inconsistent robustness, low efficiency, large computational expense or strong dependence on complex hardware. We present a novel gender recognition algorithm, a modular eye centre localisation approach and a gaze gesture recognition method, aiming to enhance the intelligence, adaptability and interactivity of HCI systems by combining demographic data (gender) and behavioural data (gaze) to enable development of a range of real-world assistive-technology applications. The gender recognition algorithm utilises Fisher Vectors as facial features which are encoded from low-level local features in facial images. We experimented with four types of low-level features: greyscale values, Local Binary Patterns (LBP), LBP histograms and Scale Invariant Feature Transform (SIFT). The corresponding Fisher Vectors were classified using a linear Support Vector Machine. The algorithm has been tested on the FERET database, the LFW database and the FRGCv2 database, yielding 97.7%, 92.5% and 96.7% accuracy respectively. The eye centre localisation algorithm has a modular approach, following a coarse-to-fine, global-to-regional scheme and utilising isophote and gradient features. A Selective Oriented Gradient filter has been specifically designed to detect and remove strong gradients from eyebrows, eye corners and self-shadows (which undermine most eye centre localisation methods). The trajectories of the eye centres are then defined as gaze gestures for active HCI. The eye centre localisation algorithm has been compared with 10 other state-of-the-art algorithms with similar functionality and has outperformed them in terms of accuracy while maintaining excellent real-time performance.
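The Fisher Vector encoding named in the abstract can be sketched as follows. This is a minimal illustration of the standard Fisher Vector formulation (gradients of local descriptors with respect to a diagonal-covariance GMM's means and variances, with power and L2 normalisation), not the paper's actual implementation; the use of scikit-learn's GaussianMixture and the helper name fisher_vector are assumptions for this sketch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def fisher_vector(descriptors, gmm):
    """Encode a set of local descriptors (e.g. SIFT or LBP features from
    one face image) as a Fisher Vector over a diagonal-covariance GMM.

    Returns the concatenated gradients w.r.t. the GMM means and variances,
    power- and L2-normalised, of length 2 * K * D for K components and
    D-dimensional descriptors."""
    X = np.atleast_2d(np.asarray(descriptors, dtype=float))
    N, D = X.shape
    q = gmm.predict_proba(X)                      # soft assignments, (N, K)
    means = gmm.means_                            # (K, D)
    covs = gmm.covariances_                       # diagonal variances, (K, D)
    w = gmm.weights_                              # (K,)

    # Whitened deviations of each descriptor from each component mean.
    diff = (X[:, None, :] - means[None]) / np.sqrt(covs)[None]      # (N, K, D)

    # Gradients w.r.t. means and (log-)variances, averaged over descriptors.
    f_mu = (q[..., None] * diff).sum(axis=0) / (N * np.sqrt(w)[:, None])
    f_sigma = (q[..., None] * (diff ** 2 - 1)).sum(axis=0) / (
        N * np.sqrt(2 * w)[:, None]
    )

    fv = np.hstack([f_mu.ravel(), f_sigma.ravel()])
    # Power normalisation followed by L2 normalisation, as is standard
    # in Fisher Vector pipelines.
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)
```

In a pipeline such as the one the abstract describes, the GMM would be fitted on local features pooled from training images, each image would be encoded into one fixed-length Fisher Vector, and the vectors would then be classified with a linear SVM (e.g. sklearn.svm.LinearSVC).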
The above methods have been employed to develop a data recovery system that can be used to implement advanced assistive technology tools. The high accuracy, reliability and real-time performance achieved for attention monitoring, gaze gesture control and recovery of demographic data can enable the advanced human-robot interaction needed for systems that provide assistance with everyday actions, thereby improving the quality of life for the elderly and/or disabled. (C) 2016 Elsevier Inc. All rights reserved.
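The abstract defines gaze gestures as trajectories of the detected eye centres. One common way to turn such a trajectory into a matchable gesture is to quantise successive displacements into a small direction alphabet; the sketch below illustrates that idea with a hypothetical 4-direction scheme (the function name, the min_step jitter threshold and the coding are illustrative assumptions, not the paper's method).

```python
import numpy as np


def gesture_codes(trajectory, min_step=3.0):
    """Quantise an eye-centre trajectory, a sequence of (x, y) pixel
    positions, into direction codes: 0=right, 1=up, 2=left, 3=down.

    Displacements shorter than min_step pixels are treated as fixation
    jitter and ignored; consecutive repeats of the same direction are
    collapsed, so the result is a compact gesture string such as [0, 3]
    for "look right, then down"."""
    pts = np.asarray(trajectory, dtype=float)
    codes = []
    for a, b in zip(pts[:-1], pts[1:]):
        d = b - a
        if np.hypot(d[0], d[1]) < min_step:     # ignore fixation jitter
            continue
        ang = np.arctan2(-d[1], d[0])           # image y axis grows downwards
        code = int(np.round(ang / (np.pi / 2))) % 4
        if not codes or codes[-1] != code:      # collapse repeated directions
            codes.append(code)
    return codes
```

A recogniser would then compare the resulting code sequence against a small set of gesture templates, e.g. mapping [0, 3] to one command and [2, 1] to another.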
Pages: 32 - 50
Page count: 19
Related Papers
50 records total
  • [21] Generic System for Human-Computer Gesture Interaction
    Trigueiros, Paulo
    Ribeiro, Fernando
    Reis, Luis Paulo
    2014 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC), 2014, : 175 - 180
  • [22] Real-Time Continuous Gesture Recognition for Natural Human-Computer Interaction
    Yin, Ying
    Davis, Randall
    2014 IEEE SYMPOSIUM ON VISUAL LANGUAGES AND HUMAN-CENTRIC COMPUTING (VL/HCC 2014), 2014, : 113 - 120
  • [23] Convolutional neural network for gesture recognition human-computer interaction system design
    Niu, Peixin
    PLOS ONE, 2025, 20 (02):
  • [24] Improving Human-Computer Interaction by Gaze Tracking
    Janko, Zsolt
    Hajder, Levente
    3RD IEEE INTERNATIONAL CONFERENCE ON COGNITIVE INFOCOMMUNICATIONS (COGINFOCOM 2012), 2012, : 131 - 136
  • [25] Intelligent Human-Computer Interaction for Building Information Models Using Gesture Recognition
    Zhang, Tianyi
    Wang, Yukang
    Zhou, Xiaoping
    Liu, Deli
    Ji, Jingyi
    Feng, Junfu
    INVENTIONS, 2025, 10 (01)
  • [26] Design of Human-Computer Interaction Gesture Recognition System Based on a Flexible Biosensor
    Chen, Qianhui
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)
  • [27] Vision-Based Hand Gesture Recognition for Human-Computer Interaction: A Survey
    Gao, Yongqiang
    Lu, Xiong
    Sun, Junbin
    Tao, Xianglin
    Huang, Xiaomei
    Yan, Yuxing
    Liu, Jia
    Wuhan University Journal of Natural Sciences, 2020, 25 (02) : 169 - 184
  • [28] Gaze tracking for multimodal human-computer interaction
    Stiefelhagen, R
    Yang, J
    1997 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), VOLS I - V, 1997, : 2617 - 2620
  • [29] Human Hand Gesture Recognition Using Spatio-Temporal Volumes for Human-computer Interaction
    Vafadar, Maryam
    Behrad, Alireza
    2008 INTERNATIONAL SYMPOSIUM ON TELECOMMUNICATIONS, VOLS 1 AND 2, 2008, : 713 - 718
  • [30] Sandwich-structured flexible strain sensors for gesture recognition in human-computer interaction
    Chen, Guanzheng
    Zhang, Xin
    Sun, Zeng
    Luo, Xuanzi
    Fang, Guoqing
    Wu, Huaping
    Cheng, Lin
    Liu, Aiping
    EUROPEAN PHYSICAL JOURNAL-SPECIAL TOPICS, 2025,