An Integrative Framework of Human Hand Gesture Segmentation for Human-Robot Interaction

Cited by: 40
Authors
Ju, Zhaojie [1 ]
Ji, Xiaofei [2 ]
Li, Jing [3 ,4 ]
Liu, Honghai [1 ]
Affiliations
[1] Univ Portsmouth, Sch Comp, Portsmouth PO1 2UP, Hants, England
[2] Shenyang Aerosp Univ, Sch Automat, Shenyang 110136, Liaoning, Peoples R China
[3] Nanchang Univ, Sch Informat Engn, Nanchang 330047, Jiangxi, Peoples R China
[4] Nanchang Univ, Jiangxi Prov Key Lab Intelligent Informat Syst, Nanchang 330047, Jiangxi, Peoples R China
Source
IEEE SYSTEMS JOURNAL | 2017, Vol. 11, No. 3
Funding
National Natural Science Foundation of China; UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Alignment; hand gesture segmentation; human-computer interaction (HCI); RGB-depth (RGB-D); camera calibration; Kinect sensor; recognition; depth
DOI
10.1109/JSYST.2015.2468231
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
This paper proposes a novel framework for segmenting hand gestures in RGB-depth (RGB-D) images captured by Kinect, using humanlike approaches for human-robot interaction. The goal is to reduce Kinect sensing error and, consequently, to improve the precision of hand gesture segmentation for the NAO robot. The proposed framework consists of two main novel approaches. First, the depth map and the RGB image are aligned by using a genetic algorithm to estimate key points, and the alignment is robust to uncertainty in the number of extracted points. Second, a novel approach refines the edges of tracked hand gestures in RGB images by applying a modified expectation-maximization (EM) algorithm based on Bayesian networks. The experimental results demonstrate that the proposed alignment method precisely matches depth maps with RGB images and that the EM algorithm effectively adjusts the RGB edges of the segmented hand gestures. The proposed framework has been integrated and validated in a human-robot interaction system, improving the NAO robot's performance in understanding and interpretation.
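The abstract describes the first step, GA-based alignment of the depth map to the RGB image via estimated key points, only at a high level. The following is a minimal illustrative sketch of the general idea, not the authors' implementation: it assumes key points have already been extracted and matched in pairs (the paper's method is additionally robust to an uncertain number of extracted points), and it searches for a 2-D similarity transform with a small genetic algorithm. All function names, search bounds, and GA settings below are assumptions.

```python
# Illustrative sketch (not the authors' code): estimate a 2-D similarity
# transform [scale, rotation, tx, ty] that maps depth-map key points onto
# matched RGB key points, using a simple genetic algorithm.
import numpy as np

rng = np.random.default_rng(0)

def apply_transform(params, pts):
    """Apply a [s, theta, tx, ty] similarity transform to an N x 2 point set."""
    s, theta, tx, ty = params
    c, si = np.cos(theta), np.sin(theta)
    R = s * np.array([[c, -si], [si, c]])
    return pts @ R.T + np.array([tx, ty])

def fitness(params, depth_pts, rgb_pts):
    """Negative mean alignment error over matched pairs (higher is better)."""
    err = np.linalg.norm(apply_transform(params, depth_pts) - rgb_pts, axis=1)
    return -err.mean()

def ga_align(depth_pts, rgb_pts, pop_size=60, generations=200):
    # Search bounds for [scale, rotation (rad), tx, ty] are assumptions.
    low = np.array([0.5, -0.5, -50.0, -50.0])
    high = np.array([1.5, 0.5, 50.0, 50.0])
    pop = rng.uniform(low, high, size=(pop_size, 4))
    for _ in range(generations):
        scores = np.array([fitness(p, depth_pts, rgb_pts) for p in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 4]]  # selection
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        mask = rng.random((pop_size, 4)) < 0.5                  # uniform crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        pop += rng.normal(0.0, 0.02, pop.shape) * (high - low)  # mutation
        pop[0] = elite[0]                                       # elitism
    scores = np.array([fitness(p, depth_pts, rgb_pts) for p in pop])
    return pop[np.argmax(scores)]

# Synthetic demo: recover a known transform from noisy matched key points.
depth_pts = rng.uniform(0, 480, (20, 2))
true_params = np.array([1.1, 0.1, 12.0, -8.0])
rgb_pts = apply_transform(true_params, depth_pts) + rng.normal(0, 0.5, (20, 2))
print("estimated [s, theta, tx, ty]:", ga_align(depth_pts, rgb_pts).round(2))
```

Elitism preserves the best candidate across generations; in practice the bounds and mutation scale would be tuned to the sensor pair's expected misalignment.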
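The second step, edge refinement via a modified EM algorithm based on Bayesian networks, can likewise only be sketched generically. Below, a plain two-component Gaussian-mixture EM is fitted to a scalar color-like feature of pixels in a narrow band around the coarse depth-derived hand edge, and each band pixel is then relabeled by its Bayesian posterior. The feature choice, the synthetic data, and the vanilla EM are assumptions standing in for the paper's Bayesian-network formulation.

```python
# Illustrative sketch (not the paper's Bayesian-network EM): refine a coarse
# hand edge by fitting a two-component Gaussian mixture with plain EM to a
# scalar color-like feature of pixels in a band around the edge, then
# relabeling each band pixel by its posterior probability.
import numpy as np

def em_gmm_1d(x, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM; return parameters and posteriors."""
    mu = np.percentile(x, [25, 75]).astype(float)  # crude init: hand vs. background
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, var, pi, resp

# Synthetic band of pixels around a coarse depth-derived edge: a mix of
# hand-like and background-like feature values (both distributions assumed).
rng = np.random.default_rng(1)
band = np.concatenate([rng.normal(0.2, 0.05, 300),   # hand-like values
                       rng.normal(0.7, 0.08, 200)])  # background-like values
mu, var, pi, resp = em_gmm_1d(band)
hand = np.argmin(mu)                    # assume the hand component is darker
refined_mask = resp[:, hand] > 0.5      # Bayesian relabeling of band pixels
print(f"component means: {mu.round(3)}, band pixels kept as hand: {refined_mask.sum()}")
```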
Pages: 1326-1336
Number of pages: 11
Related Papers
50 records in total
  • [21] Space, Speech, and Gesture in Human-Robot Interaction
    Mead, Ross
    ICMI '12: PROCEEDINGS OF THE ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2012: 333-336
  • [22] Gesture spotting and recognition for human-robot interaction
    Yang, Hee-Deok
    Park, A-Yeon
    Lee, Seong-Whan
    IEEE TRANSACTIONS ON ROBOTICS, 2007, 23 (02): 256-270
  • [23] A Gesture Based Interface for Human-Robot Interaction
    Waldherr, Stefan
    Romero, Roseli
    Thrun, Sebastian
    AUTONOMOUS ROBOTS, 2000, 9: 151-173
  • [24] A gesture based interface for human-robot interaction
    Waldherr, S.
    Romero, R.
    Thrun, S.
    AUTONOMOUS ROBOTS, 2000, 9 (02): 151-173
  • [25] Diver's hand gesture recognition and segmentation for human-robot interaction on AUV
    Jiang, Yu
    Zhao, Minghao
    Wang, Chong
    Wei, Fenglin
    Wang, Kai
    Qi, Hong
    SIGNAL, IMAGE AND VIDEO PROCESSING, 2021, 15: 1899-1906
  • [26] Robot Gesture and User Acceptance of Information in Human-Robot Interaction
    Kim, Aelee
    Kum, Hyejin
    Roh, Ounjung
    You, Sangseok
    Lee, Sukhan
    HRI'12: PROCEEDINGS OF THE SEVENTH ANNUAL ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2012: 279-280
  • [27] Computer vision-based hand gesture recognition for human-robot interaction: a review
    Qi, Jing
    Ma, Li
    Cui, Zhenchao
    Yu, Yushu
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (01): 1581-1606
  • [28] Serial-Parallel Dynamic Hand Gesture Recognition Network for Human-Robot Interaction
    Zhao, Yinan
    Zhou, Jian
    Ju, Zhaojie
    Chen, Junkang
    Gao, Qing
    2023 29TH INTERNATIONAL CONFERENCE ON MECHATRONICS AND MACHINE VISION IN PRACTICE (M2VIP), 2023
  • [29] Toward a framework for human-robot interaction
    Thrun, S.
    HUMAN-COMPUTER INTERACTION, 2004, 19 (1-2): 9-24
  • [30] An Attachment Framework for Human-Robot Interaction
    Rabb, Nicholas
    Law, Theresa
    Chita-Tegmark, Meia
    Scheutz, Matthias
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2022, 14 (02): 539-559