People detection and tracking using RGB-D cameras for mobile robots

Cited by: 8
Authors
Liu, Hengli [1 ]
Luo, Jun [1 ]
Wu, Peng [1 ]
Xie, Shaorong [1 ]
Li, Hengyu [1 ]
Affiliations
[1] Shanghai Univ, Sch Elect & Automat Engn, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Human perception; plan view maps; point cloud clustering; data association; RGB-D camera; SYSTEM; VISION
DOI
10.1177/1729881416657746
CLC number
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
People detection and tracking is an essential capability for mobile robots to achieve natural human-robot interaction. In this article, a human detection and tracking system for mobile robots is designed and validated using RGB-depth (RGB-D) cameras, which provide color images with per-pixel depth information. The framework comprises human detection, tracking, and re-identification. First, ground and ceiling points are removed to reduce the computational effort: a prior-knowledge-guided random sample consensus (RANSAC) fitting algorithm detects the ground and ceiling planes. All remaining points are projected onto the ground plane, and mean-shift clustering with an Epanechnikov kernel partitions them into subclusters of candidate detections. We propose spatial region-of-interest plan-view maps to identify human candidates among the point cloud subclusters. A depth-weighted histogram is extracted online as the feature of each human candidate, and a particle filter then tracks each person's motion; the combination of the depth-weighted histogram and the particle filter yields precise motion tracking. Finally, data association re-identifies people who have been tracked before. Extensive experiments demonstrate the effectiveness and robustness of our human detection and tracking system.
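As a concrete illustration of the detection pipeline described above, the following is a minimal NumPy sketch, not the authors' implementation: it fits the dominant plane with plain RANSAC (the paper's prior-knowledge guidance is omitted), strips ground and ceiling points, projects the remainder onto a plan-view map, and clusters it with Epanechnikov-kernel mean shift. The distance threshold, the 2.2 m ceiling height, the bandwidth, and the synthetic scene are all illustrative assumptions, not values from the paper.

    import numpy as np

    def ransac_ground_plane(points, n_iters=200, dist_thresh=0.03, seed=None):
        # Fit a plane (normal, d) with normal . p + d = 0 to the largest
        # planar subset of an (N, 3) point cloud.
        rng = np.random.default_rng(seed)
        best_count, best_plane = 0, None
        for _ in range(n_iters):
            a, b, c = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(b - a, c - a)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:                      # degenerate (collinear) sample
                continue
            normal /= norm
            d = -normal @ a
            count = np.count_nonzero(np.abs(points @ normal + d) < dist_thresh)
            if count > best_count:
                best_count, best_plane = count, (normal, d)
        return best_plane

    def mean_shift_epanechnikov(pts, bandwidth=0.4, n_iters=20):
        # With an Epanechnikov kernel the mean-shift update is simply the
        # mean of the neighbours inside the bandwidth (its "shadow" kernel
        # is flat), so every point drifts towards a local density mode.
        modes = pts.copy()
        for _ in range(n_iters):
            for i in range(len(modes)):
                nbrs = pts[np.linalg.norm(pts - modes[i], axis=1) < bandwidth]
                if len(nbrs):
                    modes[i] = nbrs.mean(axis=0)
        return modes

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic scene: a flat floor plus one roughly person-sized blob.
        floor = np.c_[rng.uniform(0.0, 4.0, (3000, 2)), rng.normal(0.0, 0.01, 3000)]
        person = rng.normal([2.0, 2.0, 0.9], [0.15, 0.15, 0.45], (600, 3))
        cloud = np.vstack([floor, person])

        normal, d = ransac_ground_plane(cloud, seed=1)
        height = cloud @ normal + d              # signed distance to the ground
        if height.mean() < 0.0:                  # orient the normal upwards
            normal, d, height = -normal, -d, -height

        # Remove the ground and everything above an assumed 2.2 m ceiling,
        # then project what is left onto the ground plane; the fitted ground
        # here is nearly horizontal, so its first two coordinates serve as
        # the plan-view map.
        kept = cloud[(height > 0.05) & (height < 2.2)]
        plan = (kept - np.outer(kept @ normal + d, normal))[:, :2]

        modes = mean_shift_epanechnikov(plan)
        print("mean-shift modes converge near:", np.round(modes.mean(axis=0), 2))

Because the Epanechnikov kernel's update reduces to the mean of the neighbours within the bandwidth, no explicit kernel weights appear in the clustering loop; the paper's region-of-interest maps, depth-weighted histogram, and particle filter would build on the plan-view clusters this sketch produces.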
Pages: 1-11 (11 pages)
Related papers
50 records in total
  • [31] Multimodal Person Reidentification Using RGB-D Cameras
    Pala, Federico
    Satta, Riccardo
    Fumera, Giorgio
    Roli, Fabio
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2016, 26 (04) : 788 - 799
  • [32] Tracking people within groups with RGB-D data
    Munaro, Matteo
    Basso, Filippo
    Menegatti, Emanuele
    2012 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2012, : 2101 - 2107
  • [33] Hybrid Uncalibrated Visual Servoing Control of Harvesting Robots With RGB-D Cameras
    Li, Tao
    Yu, Jinpeng
    Qiu, Quan
    Zhao, Chunjiang
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2023, 70 (03) : 2729 - 2738
  • [34] Embedded Mobile ROS Platform for SLAM Application with RGB-D Cameras
    Newman, Andrew
    Yang, Guojun
    Wang, Boyang
    Arnold, David
    Saniie, Jafar
    2020 IEEE INTERNATIONAL CONFERENCE ON ELECTRO INFORMATION TECHNOLOGY (EIT), 2020, : 449 - 453
  • [35] Effective Free-Driving Region Detection for Mobile Robots by Uncertainty Estimation Using RGB-D Data
    Nguyen, Toan-Khoa
    Nguyen, Phuc Thanh-Thien
    Nguyen, Dai-Dong
    Kuo, Chung-Hsien
    SENSORS, 2022, 22 (13)
  • [36] The Moving Target Recognition and Tracking Using RGB-D Data with the Mobile Robot
    Liu, Yuanhao
    Zheng, Yang
    Han, Linghao
    Liu, Jingmeng
    Pan, Zhongjie
    Sun, Fengchi
    PROCEEDINGS OF THE 38TH CHINESE CONTROL CONFERENCE (CCC), 2019, : 4342 - 4347
  • [37] A Catadioptric Extension for RGB-D Cameras
    Endres, Felix
    Sprunk, Christoph
    Kuemmerle, Rainer
    Burgard, Wolfram
    2014 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2014), 2014, : 466 - 471
  • [38] Feature extraction for RGB-D cameras
    Abd Ali, Reeman Jumaa
    Almamori, Aqiel
INTERNATIONAL JOURNAL OF NONLINEAR ANALYSIS AND APPLICATIONS, 2022, 13 (01) : 3991 - 3995
  • [39] Visual SLAM with RGB-D Cameras
    Jin, Qiongyao
    Liu, Yungang
    Man, Yongchao
    Li, Fengzhong
    PROCEEDINGS OF THE 38TH CHINESE CONTROL CONFERENCE (CCC), 2019, : 4072 - 4077
  • [40] Building 3D semantic maps for mobile robots using RGB-D camera
    Zhao, Zhe
    Chen, Xiaoping
    INTELLIGENT SERVICE ROBOTICS, 2016, 9 : 297 - 309