Vision-guided state estimation and control of robotic manipulators which lack proprioceptive sensors

Cited by: 0
Authors
Ortenzi, Valerio [1]
Marturi, Naresh [1,3]
Stolkin, Rustam [1]
Kuo, Jeffrey A. [4]
Mistry, Michael [2]
Affiliations
[1] Univ Birmingham, Sch Engn, Birmingham, W Midlands, England
[2] Univ Birmingham, Sch Comp Sci, Birmingham, W Midlands, England
[3] Kuka Robot UK Ltd, Wednesbury, England
[4] Natl Nucl Lab NNL Ltd, Wednesbury, England
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK; EU Horizon 2020; Innovate UK;
Keywords
TRACKING;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
This paper presents a vision-based approach for estimating the configuration of, and providing control signals for, an under-sensored robot manipulator using a single monocular camera. Some remote manipulators, used for decommissioning tasks in the nuclear industry, lack proprioceptive sensors because electronics are vulnerable to radiation. Additionally, even if proprioceptive joint sensors could be retrofitted, such heavy-duty manipulators are often deployed on mobile vehicle platforms, which are significantly and erratically perturbed when powerful hydraulic drilling or cutting tools are deployed at the end-effector. In these scenarios, it would be beneficial to use external sensory information, e.g. vision, for estimating the robot configuration with respect to the scene or task. Conventional visual servoing methods typically rely on joint encoder values for controlling the robot. In contrast, our framework assumes that no joint encoders are available, and estimates the robot configuration by visually tracking several parts of the robot, and then enforcing equality between a set of transformation matrices which relate the frames of the camera, world and tracked robot parts. To accomplish this, we propose two alternative methods based on optimisation. We evaluate the performance of our developed framework by visually tracking the pose of a conventional robot arm, where the joint encoders are used to provide ground-truth for evaluating the precision of the vision system. Additionally, we evaluate the precision with which visual feedback can be used to control the robot's end-effector to follow a desired trajectory.
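As a hedged illustration of the optimisation idea sketched in the abstract (enforcing consistency between the kinematic model and visually tracked robot parts), the minimal Python sketch below estimates the joint angles of a hypothetical planar two-link arm by least-squares fitting model-predicted link poses to noisy, camera-like pose measurements. The link lengths, noise model and use of SciPy's least_squares are assumptions made purely for this example; it is not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): recover joint angles of an
# assumed planar two-link arm by enforcing consistency between forward-kinematics
# link poses and link poses "observed" by an external camera.
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 0.5, 0.4  # assumed link lengths in metres (illustrative only)

def link_poses(q):
    """Planar pose (x, y, theta) of each tracked link frame for joint angles q."""
    q1, q2 = q
    p1 = np.array([L1 * np.cos(q1), L1 * np.sin(q1), q1])
    p2_xy = p1[:2] + np.array([L2 * np.cos(q1 + q2), L2 * np.sin(q1 + q2)])
    p2 = np.array([p2_xy[0], p2_xy[1], q1 + q2])
    return p1, p2

def residuals(q, observed):
    """Stacked pose error between model-predicted and visually tracked link frames."""
    predicted = np.concatenate(link_poses(q))
    return predicted - observed  # angle wrap-around ignored for simplicity

# Simulated "visual" measurements: true configuration plus tracking noise.
q_true = np.array([0.6, -0.3])
observed = np.concatenate(link_poses(q_true)) + 0.01 * np.random.randn(6)

# Optimisation-based configuration estimate from vision alone (no joint encoders).
estimate = least_squares(residuals, x0=np.zeros(2), args=(observed,))
print("estimated joint angles:", estimate.x, "true:", q_true)
```

A full 3D version would instead parameterise homogeneous transforms relating the camera, world and tracked-part frames, and would also have to handle angle wrapping and camera calibration; the paper's two optimisation-based methods themselves are not reproduced here.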
Pages: 3567-3574
Number of pages: 8
Related papers
50 records in total
  • [21] Vision-Guided Hierarchical Control and Autonomous Positioning for Aerial Manipulator. Ye, Xia; Cui, Haohao; Wang, Lidong; Xie, Shangjun; Ni, Hong. APPLIED SCIENCES-BASEL, 2023, 13 (22).
  • [22] Issues and experimental results in vision-guided robotic grasping of static or moving objects. Papanikolopoulos, Nikolaos; Smith, Christopher E. INDUSTRIAL ROBOT, 1998, 25 (02): 134-140.
  • [23] GRIBBOT - Robotic 3D vision-guided harvesting of chicken fillets. Misimi, Ekrem; Oye, Elling Ruud; Eilertsen, Aleksander; Mathiassen, John Reidar; Asebo, Olav Berg; Gjerstad, Tone; Buljo, Jan; Skotheim, Oystein. COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2016, 121: 84-100.
  • [24] Vision-guided flight stability and control for micro air vehicles. Ettinger, SM; Nechyba, MC; Ifju, PG; Waszak, M. ADVANCED ROBOTICS, 2003, 17 (07): 617-640.
  • [25] Vision-guided flight stability and control for micro air vehicles. Ettinger, SM; Nechyba, MC; Ifju, PG; Waszak, M. 2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-3, PROCEEDINGS, 2002: 2134-2140.
  • [26] Vision-Guided Robot Manipulation Predictive Control for Automating Manufacturing. Lazar, Corneliu; Burlacu, Adrian; Archip, Alexandru. SERVICE ORIENTATION IN HOLONIC AND MULTI-AGENT MANUFACTURING AND ROBOTICS, 2014, 544: 313-328.
  • [27] Research on path tracking predictive control for vision-guided AGV. Zhang, Jingtian; Li, Zhongming; Weng, Xun; Yang, Fuxing. 2014 SIXTH INTERNATIONAL CONFERENCE ON MEASURING TECHNOLOGY AND MECHATRONICS AUTOMATION (ICMTMA), 2014: 524-528.
  • [28] Deep learning-based method for vision-guided robotic grasping of unknown objects. Bergamini, Luca; Sposato, Mario; Pellicciari, Marcello; Peruzzini, Margherita; Calderara, Simone; Schmidt, Juliana. ADVANCED ENGINEERING INFORMATICS, 2020, 44.
  • [29] A vision-guided adaptive and optimized robotic fabric gripping system for garment manufacturing automation. Choi, Young Woon; Lee, Jiho; Lee, Yongho; Lee, Suhyun; Jeong, Wonyoung; Lim, Dae Young; Lee, Sang Won. ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2025, 92.
  • [30] Mutual information-enhanced digital twin promotes vision-guided robotic grasping. Hu, Fuwen. ADVANCED ENGINEERING INFORMATICS, 2022, 52.