Tracking object poses in the context of robust body pose estimates

Cited by: 0
Authors
Darby, John [1 ]
Li, Baihua [1 ]
Costen, Nicholas [1 ]
Affiliations
[1] Manchester Metropolitan Univ, Sch Comp Math & Digital Technol, Manchester M1 5GD, Lancs, England
Keywords
Human-object interaction; Object localisation; Object tracking; Depth data; RGB-D;
DOI
10.1016/j.cviu.2014.06.009
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This work focuses on tracking objects being used by humans. These objects are often small, fast-moving and heavily occluded by the user. Attempting to recover their 3D position and orientation over time is a challenging research problem. To make progress we appeal to the fact that these objects are often used in a consistent way. The body poses of different people using the same object tend to have similarities, and, when considered relative to those body poses, so do the respective object poses. Our intuition is that, in the context of recent advances in body-pose tracking from RGB-D data, robust object-pose tracking during human-object interactions should also be possible. We propose a combined generative and discriminative tracking framework able to follow gradual changes in object-pose over time but also able to re-initialise object-pose upon recognising distinctive body-poses. The framework is able to predict object-pose relative to a set of independent coordinate systems, each one centred upon a different part of the body. We conduct a quantitative investigation into which body parts serve as the best predictors of object-pose over the course of different interactions. We find that while object-translation should be predicted from nearby body parts, object-rotation can be more robustly predicted by using a much wider range of body parts. Our main contribution is to provide the first object-tracking system able to estimate 3D translation and orientation from RGB-D observations of human-object interactions. By tracking precise changes in object-pose, our method opens up the possibility of more detailed computational reasoning about human-object interactions and their outcomes. For example, assistive living systems could go beyond just recognising the actions and objects involved in everyday tasks such as sweeping or drinking, to reasoning that a person has "missed sweeping under the chair" or "not drunk enough water today". (C) 2014 Elsevier Inc. All rights reserved.
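The abstract's central idea, predicting object-pose relative to body-part-centred coordinate frames, can be illustrated with a minimal sketch. The Python snippet below is not the authors' implementation; it assumes 4x4 homogeneous transforms in camera coordinates and a hypothetical body-part frame (e.g. the right hand) obtained from an RGB-D body-pose tracker.

    import numpy as np

    def relative_pose(T_part_cam, T_obj_cam):
        # Express the object pose in the body part's coordinate frame:
        # T_obj_part satisfies T_part_cam @ T_obj_part == T_obj_cam.
        return np.linalg.inv(T_part_cam) @ T_obj_cam

    def reproject_pose(T_part_cam, T_obj_part):
        # Map a part-relative object pose back to camera coordinates,
        # e.g. when re-initialising object-pose from a recognised body-pose.
        return T_part_cam @ T_obj_part

Since the abstract reports that object-translation is best predicted from nearby body parts while object-rotation benefits from a much wider range of parts, a practical variant of this sketch would combine predictions from several such body-part frames rather than a single one.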
Pages: 57 - 72
Number of pages: 16
Related papers
50 records in total
  • [1] Robust Human Body Shape and Pose Tracking
    Huang, Chun-Hao
    Boyer, Edmond
    Ilic, Slobodan
    2013 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2013), 2013, : 287 - 294
  • [2] Temporal Attention for Robust Multiple Object Pose Tracking
    Li, Zhongluo
    Yoshimoto, Junichiro
    Ikeda, Kazushi
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT IV, 2024, 14450 : 551 - 561
  • [3] Robust monocular object pose tracking for large pose shift using 2D tracking
    Qiufu Wang
    Jiexin Zhou
    Zhang Li
    Xiaoliang Sun
    Qifeng Yu
    Visual Intelligence, 1 (1):
  • [4] Robust object tracking with active context learning
    Quan, Wei
    Jiang, Yongquan
    Zhang, Jianjun
    Chen, Jim X.
    VISUAL COMPUTER, 2015, 31 (10): : 1307 - 1318
  • [5] Robust object tracking with active context learning
    Wei Quan
    Yongquan Jiang
    Jianjun Zhang
    Jim X. Chen
    The Visual Computer, 2015, 31 : 1307 - 1318
  • [6] Robust Object Pose Tracking for Augmented Reality Guidance and Teleoperation
    Black, David
    Salcudean, Septimiu
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73 : 1 - 15
  • [7] ROBUST OBJECT TRACKING USING A CONTEXT BASED ON THE RELATION OF OBJECT AND BACKGROUND
    Yamashita, Takayoshi
    Fujiyoshi, Hironobu
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 1788 - 1792
  • [8] A robust object tracking method under pose variation and partial occlusion
    Hotta, Kazuhiro
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2006, E89D (07): : 2132 - 2141
  • [9] Semantic and context features integration for robust object tracking
    Yao, Jinzhen
    Zhang, Jianlin
    Wang, Zhixing
    Shao, Linsong
    IET IMAGE PROCESSING, 2022, 16 (05) : 1268 - 1279
  • [10] Robust coverless video steganography based on pose estimation and object tracking
    Li, Nan
    Qin, Jiaohua
    Xiang, Xuyu
    Tan, Yun
    JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2024, 87