Learning End-User Customized Mid-Air Hand Gestures Using a Depth Image Sensor

Cited: 1
Authors
Nai, Weizhi [1 ]
Liu, Yue [2 ]
Wang, Qinglong [1 ]
Sun, Xiaoying [1 ]
Affiliations
[1] Jilin Univ, Coll Commun Engn, Changchun 130012, Peoples R China
[2] Beijing Inst Technol, Sch Opt & Photon, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Gesture recognition; Heuristic algorithms; Trajectory; Image sensors; Human computer interaction; Sensor phenomena and characterization; Depth sensor; hand gesture; human-computer interaction; user customization; RECOGNITION; MOTION; MODEL;
DOI
10.1109/JSEN.2022.3190913
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline codes
0808; 0809;
Abstract
Interacting with computer applications using actions designed by end users themselves, rather than pre-defined ones, offers advantages such as better memorability in some Human-Computer Interaction (HCI) scenarios. In this paper we propose a method that allows users to employ self-defined mid-air hand gestures as HCI commands after providing a few training samples for each gesture in front of a depth image sensor. The gesture detection and recognition algorithm is based mainly on pattern matching over three separate sets of features, which carry both finger-action and hand-motion information. In an experiment conducted in a single sitting, each subject designed their own set of 8 gestures, provided about 5 samples for each, and then used the gestures to play a game. During the experiment a recognition rate of 66.7% was achieved with a false positive ratio of 22.2%. Further analysis of the collected dataset shows that a recognition rate of up to about 85% can be achieved if more false detections are tolerated.
Pages: 16994-17004
Page count: 11
Related Papers
50 records in total
  • [1] Hotspotizer: End-User Authoring of Mid-Air Gestural Interactions
    Baytas, Mehmet Aydin
    Yemez, Yucel
    Ozcan, Oguzhan
    PROCEEDINGS OF THE NORDICHI'14: THE 8TH NORDIC CONFERENCE ON HUMAN-COMPUTER INTERACTION: FUN, FAST, FOUNDATIONAL, 2014, : 677 - 686
  • [2] MagicalHands: Mid-Air Hand Gestures for Animating in VR
    Arora, Rahul
    Kazi, Rubaiat Habib
    Kaufman, Danny
    Li, Wilmot
    Singh, Karan
    PROCEEDINGS OF THE 32ND ANNUAL ACM SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY (UIST 2019), 2019, : 463 - 477
  • [3] Are mid-air dynamic gestures applicable to user identification?
    Liu, Heng
    Dai, Liangliang
    Hou, Shudong
    Han, Jungong
    Liu, Hongshen
    PATTERN RECOGNITION LETTERS, 2019, 117 : 179 - 185
  • [4] Analyzing Mid-Air Hand Gestures to Confirm Selections on Displays
    Erazo, Orlando
    Vicuna, Ariosto
    Pico, Roberto
    Oviedo, Byron
    TECHNOLOGY TRENDS, 2019, 895 : 341 - 352
  • [5] The Effect of Rhythm in Mid-air Gestures on the User Experience in Virtual Reality
    Reynaert, Vincent
    Berthaut, Florent
    Rekik, Yosra
    Grisoni, Laurent
    HUMAN-COMPUTER INTERACTION, INTERACT 2021, PT III, 2021, 12934 : 182 - 191
  • [6] GesMessages: Using Mid-air Gestures to Manage Notifications
    Li, Xiang
    Chen, Yuzheng
    Tang, Xiaohang
    ACM SYMPOSIUM ON SPATIAL USER INTERACTION, SUI 2023, 2023,
  • [7] User-defined mid-air gestures for multiscale GIS interface interaction
    Zhou, Xiaozhou
    Bai, Ruidong
    CARTOGRAPHY AND GEOGRAPHIC INFORMATION SCIENCE, 2023, 50 (05) : 481 - 494
  • [8] Simple mappings, expressive movement: a qualitative investigation into the end-user mapping design of experienced mid-air musicians
    Brown, Dom
    Nash, Chris
    Mitchell, Tom
    DIGITAL CREATIVITY, 2018, 29 (2-3) : 129 - 148
  • [9] A User-based Mid-air Hand Gesture Set for Spreadsheets
    Takayama, Yuta
    Ichikawa, Yuu
    Shizuki, Buntarou
    Kawaguchi, Ikkaku
    Takahashi, Shin
    5TH ASIAN CHI SYMPOSIUM PROCEEDINGS, 2021, : 122 - 128
  • [10] User-Defined Interaction for Smart Homes: Voice, Touch, or Mid-Air Gestures?
    Hoffmann, Fabian
    Tyroller, Miriam-Ida
    Wende, Felix
    Henze, Niels
    MUM 2019: 18TH INTERNATIONAL CONFERENCE ON MOBILE AND UBIQUITOUS MULTIMEDIA, 2019,