Vision-Based Measurement and Prediction of Object Trajectory for Robotic Manipulation in Dynamic and Uncertain Scenarios

Cited by: 12
Authors
Xia, Chongkun [1 ]
Weng, Ching-Yen [2 ]
Zhang, Yunzhou [1 ]
Chen, I-Ming [2 ]
Affiliations
[1] Northeastern Univ, Coll Informat Sci & Engn, Shenyang 110819, Peoples R China
[2] Nanyang Technol Univ, Robot Res Ctr, Singapore 639798, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Trajectory; Manipulator dynamics; Dynamics; Measurement uncertainty; Visualization; Predictive models; Dynamic and uncertain scenarios; long short-term memory (LSTM); robotic manipulation; time granularity; trajectory prediction; visual measurement; TIME; ARIMA; SYSTEM;
DOI
10.1109/TIM.2020.2994602
Chinese Library Classification
TM (Electrical Technology); TN (Electronic & Communication Technology);
Discipline Classification Codes
0808; 0809;
Abstract
Vision-based measurement and prediction (VMP) is an important and challenging component of autonomous robotic manipulation, especially in dynamic and uncertain scenarios. However, due to the potential limitations of visual measurement in such environments, such as occlusion, lighting, and hardware constraints, it is not easy to acquire accurate object positions as observations. Moreover, manipulating a dynamic object with unknown or uncertain motion rules usually requires an accurate prediction of the motion trajectory at the desired moment, which dramatically increases the difficulty. To address this problem, we propose a time-granularity-based vision prediction framework whose core is an integrated prediction model built from multiple long short-term memory (LSTM) neural networks. First, we use a vision sensor to acquire raw measurements and apply preprocessing (e.g., data completion, error compensation, and filtering) to turn them into standard trajectory data. Then, we devise a novel integration strategy based on time granularity boost (TG-Boost) to select appropriate base predictors and use the historical trajectory data to construct a high-precision prediction model. Finally, we validate the proposed methodology in simulation and in a series of dynamic manipulation experiments. The results show that our method outperforms state-of-the-art prediction algorithms in terms of prediction accuracy, success rate, and robustness.
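The time-granularity ensemble idea in the abstract can be illustrated with a minimal sketch: resample the observed trajectory at several granularities, fit one base predictor per granularity, and weight the predictors by their inverse validation error. Note that this is an assumption-laden illustration, not the authors' implementation: the paper's base predictors are LSTM networks, while here constant-velocity extrapolators stand in for them, and the function names (`tg_ensemble_predict`, `resample`), the granularity list, and the inverse-error weighting scheme are all hypothetical choices made for a self-contained example.

```python
import numpy as np

def resample(traj, granularity):
    """Downsample a trajectory of shape (T, d), keeping the most recent sample."""
    return traj[::-granularity][::-1]

def linear_extrapolate(traj, horizon):
    """Constant-velocity stand-in for an LSTM base predictor:
    predict the point `horizon` (possibly fractional) steps ahead."""
    velocity = traj[-1] - traj[-2]
    return traj[-1] + horizon * velocity

def tg_ensemble_predict(traj, granularities, horizon):
    """Boosting-style integration over time granularities: each base
    predictor is weighted by the inverse of its one-step validation error."""
    preds, weights = [], []
    for g in granularities:
        coarse = resample(traj, g)
        if len(coarse) < 3:
            continue  # not enough coarse samples to validate and predict
        # Validation: predict the last observed coarse point from its predecessors.
        val_pred = linear_extrapolate(coarse[:-1], 1)
        err = np.linalg.norm(val_pred - coarse[-1]) + 1e-8
        # `horizon` fine steps correspond to horizon / g coarse steps.
        preds.append(linear_extrapolate(coarse, horizon / g))
        weights.append(1.0 / err)
    weights = np.array(weights) / np.sum(weights)
    return np.sum(np.array(preds) * weights[:, None], axis=0)

# Example: noisy constant-velocity trajectory in the xy-plane.
rng = np.random.default_rng(0)
t = np.arange(40)
traj = np.stack([0.1 * t, 0.05 * t], axis=1) + rng.normal(0.0, 0.002, (40, 2))
pred = tg_ensemble_predict(traj, granularities=[1, 2, 4], horizon=5)
# True future position 5 steps ahead is approximately [4.4, 2.2].
```

In this sketch the weighting plays the role of the TG-Boost selection step: granularities whose recent predictions were poor contribute little to the final estimate.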
Pages: 8939-8952
Page count: 14
Related Papers (50 in total)
  • [21] Vision for robotic object manipulation in domestic settings
    Kragic, D
    Björkman, M
    Christensen, HI
    Eklundh, JO
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2005, 52 (01) : 85 - 100
  • [22] Vision-based Robotic Arm in Defect Detection and Object Classification Applications
    Lin, Cheng-Jian
    Jhang, Jyun-Yu
    Gao, Yi-Jyun
    Huang, Hsiu-Mei
    SENSORS AND MATERIALS, 2024, 36 (02) : 655 - 670
  • [23] Automated Robotic Manipulation of Individual Colloidal Particles Using Vision-Based Control
    Zimmermann, Soeren
    Tiemerding, Tobias
    Fatikow, Sergej
    IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2015, 20 (05) : 2031 - 2038
  • [24] Vision-Based Imitation Learning of Needle Reaching Skill for Robotic Precision Manipulation
    Li, Ying
    Qin, Fangbo
    Du, Shaofeng
    Xu, De
    Zhang, Jianqiang
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2021, 101 (01)
  • [25] Minsight: A Fingertip-Sized Vision-Based Tactile Sensor for Robotic Manipulation
    Andrussow, Iris
    Sun, Huanbo
    Kuchenbecker, Katherine J. J.
    Martius, Georg
    ADVANCED INTELLIGENT SYSTEMS, 2023, 5 (08)
  • [26] Q-Attention: Enabling Efficient Learning for Vision-Based Robotic Manipulation
    James, Stephen
    Davison, Andrew J.
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (02) : 1612 - 1619
  • [28] Vision-Based Dynamic Displacement Measurement of Isolation Bearing
    He, Yizhe
    Dang, Yu
    ADVANCES IN CIVIL ENGINEERING, 2022, 2022
  • [29] UPG: 3D vision-based prediction framework for robotic grasping in multi-object scenes
    Li, Xiaohan
    Zhang, Xiaozhen
    Zhou, Xiang
    Chen, I-Ming
    KNOWLEDGE-BASED SYSTEMS, 2023, 270
  • [30] Vision-based Hand Representation and Intuitive Virtual Object Manipulation in Mixed Reality
    Bach, F.
    Cakmak, H.
    Maass, H.
    BIOMEDICAL ENGINEERING-BIOMEDIZINISCHE TECHNIK, 2012, 57 : 462 - 465