Vision-Based Measurement and Prediction of Object Trajectory for Robotic Manipulation in Dynamic and Uncertain Scenarios

Cited by: 12
Authors
Xia, Chongkun [1 ]
Weng, Ching-Yen [2 ]
Zhang, Yunzhou [1 ]
Chen, I-Ming [2 ]
Affiliations
[1] Northeastern Univ, Coll Informat Sci & Engn, Shenyang 110819, Peoples R China
[2] Nanyang Technol Univ, Robot Res Ctr, Singapore 639798, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Trajectory; Manipulator dynamics; Dynamics; Measurement uncertainty; Visualization; Predictive models; Dynamic and uncertain scenarios; long short-term memory (LSTM); robotic manipulation; time granularity; trajectory prediction; visual measurement; TIME; ARIMA; SYSTEM;
DOI
10.1109/TIM.2020.2994602
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Vision-based measurement and prediction (VMP) are important and challenging components of autonomous robotic manipulation, especially in dynamic and uncertain scenarios. However, owing to the limitations of visual measurement in such environments, such as occlusion, lighting, and hardware constraints, it is not easy to acquire accurate object positions as observations. Moreover, manipulating a dynamic object with unknown or uncertain motion rules usually requires an accurate prediction of its motion trajectory at the desired moment, which dramatically increases the difficulty. To address this problem, we propose a time granularity-based vision prediction framework whose core is an integrated prediction model built from multiple long short-term memory (LSTM) neural networks. First, we use a vision sensor to acquire raw measurements and apply preprocessing (e.g., data completion, error compensation, and filtering) to convert the raw measurements into standard trajectory data. Then, we devise a novel integration strategy based on time granularity boost (TG-Boost) to select appropriate base predictors and use the historical trajectory data to construct a high-precision prediction model. Finally, we validate the proposed methodology through simulation and a series of dynamic manipulation experiments. The results show that our method outperforms state-of-the-art prediction algorithms in terms of prediction accuracy, success rate, and robustness.
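To make the described pipeline concrete, the following is a minimal, illustrative sketch (not the authors' released code) of a multi-granularity LSTM ensemble in PyTorch: each base predictor consumes the observed trajectory resampled at a different time granularity, and their outputs are fused by learned weights. The class names, the stride-based resampling, and the softmax weighting are assumptions standing in for the paper's TG-Boost selection strategy.

```python
# Illustrative sketch only: an ensemble of LSTM predictors, each operating on a
# different time granularity (sampling interval) of the observed trajectory,
# fused by learned weights. Names and the weighting scheme are assumptions,
# not the paper's TG-Boost implementation.
import torch
import torch.nn as nn


class GranularityLSTM(nn.Module):
    """Single base predictor: LSTM over a trajectory resampled at one granularity."""

    def __init__(self, dim=3, hidden=64, stride=1):
        super().__init__()
        self.stride = stride                       # temporal downsampling factor
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)         # map last hidden state to next position

    def forward(self, traj):                       # traj: (batch, T, dim)
        coarse = traj[:, ::self.stride, :]         # resample at this time granularity
        out, _ = self.lstm(coarse)
        return self.head(out[:, -1, :])            # predicted next position: (batch, dim)


class EnsemblePredictor(nn.Module):
    """Weighted fusion of base predictors; in the paper the weights/selection
    would come from the TG-Boost strategy (assumed softmax weights here)."""

    def __init__(self, strides=(1, 2, 4)):
        super().__init__()
        self.bases = nn.ModuleList(GranularityLSTM(stride=s) for s in strides)
        self.weights = nn.Parameter(torch.ones(len(strides)) / len(strides))

    def forward(self, traj):
        preds = torch.stack([m(traj) for m in self.bases], dim=0)   # (K, batch, dim)
        w = torch.softmax(self.weights, dim=0).view(-1, 1, 1)
        return (w * preds).sum(dim=0)              # fused prediction: (batch, dim)


if __name__ == "__main__":
    history = torch.randn(8, 40, 3)                # 8 preprocessed trajectories, 40 steps, xyz
    model = EnsemblePredictor()
    next_pos = model(history)
    print(next_pos.shape)                          # torch.Size([8, 3])
```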
Pages: 8939-8952
Number of pages: 14
Related Papers
50 records in total
  • [31] Vision-Based Traffic Conflict Detection Using Trajectory Learning and Prediction
    Sun, Zongyuan
    Chen, Yuren
    Wang, Pin
    Fang, Shouen
    Tang, Boming
    IEEE ACCESS, 2021, 9 : 34558 - 34569
  • [32] Stereo Vision-Based Object Recognition and Manipulation by Regions with Convolutional Neural Network
    Du, Yi-Chun
    Muslikhin, Muslikhin
    Hsieh, Tsung-Han
    Wang, Ming-Shyan
    ELECTRONICS, 2020, 9 (02)
  • [33] Vision-based guidance of a robotic arm for object handling operations - The white'R vision framework
    Theoharatos, Christos
    Kastaniotis, Dimitris
    Besiris, Dimitris
    Fragoulis, Nikos
    2016 IEEE 2ND INTERNATIONAL FORUM ON RESEARCH AND TECHNOLOGIES FOR SOCIETY AND INDUSTRY LEVERAGING A BETTER TOMORROW (RTSI), 2016, : 323 - 328
  • [34] Vision-Based Robotic System for Polyhedral Object Grasping using Kinect Sensor
    Gonzalez, Pablo
    Cheng, Ming-Yang
    Kuo, Wei-Liang
    2016 INTERNATIONAL AUTOMATIC CONTROL CONFERENCE (CACS), 2016, : 71 - 76
  • [35] Vision-based Hyper-Real-Time Object Tracker for Robotic Applications
    Kolarow, Alexander
    Brauckmann, Michael
    Eisenbach, Markus
    Schenk, Konrad
    Einhorn, Erik
    Debes, Klaus
    Gross, Horst-Michael
    2012 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2012, : 2108 - 2115
  • [36] Vision-Based Robotic Object Grasping-A Deep Reinforcement Learning Approach
    Chen, Ya-Ling
    Cai, Yan-Rou
    Cheng, Ming-Yang
    MACHINES, 2023, 11 (02)
  • [37] A hybrid vision-based surface coverage measurement method for robotic inspection
    Shahid, Lubna
    Janabi-Sharifi, Farrokh
    Keenan, Patrick
    ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2019, 57 : 138 - 145
  • [38] Kinematics of stratified vision-based manipulation
    Wei, YJ
    Goodwine, B
    Skaar, SB
    ELEVENTH WORLD CONGRESS IN MECHANISM AND MACHINE SCIENCE, VOLS 1-5, PROCEEDINGS, 2004, : 1900 - 1905
  • [39] Adaptive vision-based force/position tracking of robotic manipulators interacting with uncertain environment
    Wang, Lijiao
    Meng, Bin
    PROCEEDINGS OF THE 2019 31ST CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2019), 2019, : 5126 - 5131
  • [40] Vision-Based Robotic Manipulation of Intelligent Wheelchair with Human-Computer Shared Control
    Du, Siyi
    Wang, Fei
    Zhou, Guilin
    Li, Jiaqi
    Yang, Lintao
    Wang, Dongxu
    PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021, : 3252 - 3257