An RGB-D Visual Application for Error Detection in Robot Grasping Tasks

Cited by: 2
Authors
Martinez-Martin, Ester [1 ]
Fischinger, David [2 ]
Vincze, Markus [2 ]
del Pobil, Angel P. [1 ]
Affiliations
[1] UJI, Robot Intelligence Lab, Avda Sos Baynat S-N, Castellon de La Plana 12071, Spain
[2] Vienna Univ Technol TU Wien, Dept Elect Engn, Inst Automatisierungs & Regelungstech ACIN, Gusshausstr 27-29, A-1040 Vienna, Austria
Keywords
Service robotics; Grasping; Computer vision; Recognition; Manipulation
DOI
10.1007/978-3-319-48036-7_18
CLC classification number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The ability to grasp is a fundamental requirement for service robots to perform meaningful tasks in ordinary environments. However, its robustness can be compromised by inaccurate (or missing) tactile and proprioceptive sensing, especially in the presence of unforeseen slippage. Vision can therefore be instrumental in detecting grasp errors. In this paper, we present an RGB-D visual application for discerning success or failure in robot grasping of unknown objects when proprioceptive information is poor and/or a deformable gripper without tactile sensing is used. The proposed application is divided into two stages: visual gripper detection and recognition, and grasping assessment (i.e. checking whether a grasping error has occurred). To this end, three different visual cues are combined: colour, depth and edges. The development is supported by experimental results on the Hobbit robot, which is equipped with an elastically deformable gripper.
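The abstract only names the three cues that are combined; the following Python/OpenCV sketch is a minimal illustration of how colour, depth and edge cues from a registered RGB-D frame might be fused into a simple grasp success/failure check. It is not the authors' implementation: the function assess_grasp, the green-gripper HSV range, the depth band and the pixel thresholds are all hypothetical tuning choices made only for this example.

```python
# Illustrative sketch only -- not the method described in the paper.
# Assumes a colour image registered to a metric depth map and a gripper
# with a distinctive (here, green) colour.
import numpy as np
import cv2


def assess_grasp(bgr, depth,
                 gripper_hsv_lo=(35, 80, 80), gripper_hsv_hi=(85, 255, 255),
                 object_depth_band=(0.20, 0.60),
                 min_object_pixels=500, min_edge_pixels=50):
    """Return True if an object appears to be held in front of the gripper.

    bgr   : HxWx3 uint8 colour image (registered to the depth map)
    depth : HxW float32 depth map in metres
    All thresholds are hypothetical parameters chosen for illustration.
    """
    # Colour cue: segment the (assumed green) gripper fingers in HSV space.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    gripper_mask = cv2.inRange(hsv,
                               np.array(gripper_hsv_lo, np.uint8),
                               np.array(gripper_hsv_hi, np.uint8))

    # Edge cue: strong intensity edges hint at an object boundary rather
    # than empty background.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # Depth cue: non-gripper pixels whose depth falls inside the expected
    # grasping band are candidate object pixels.
    near = (depth > object_depth_band[0]) & (depth < object_depth_band[1])
    candidate = near & (gripper_mask == 0)

    # Grow the candidate region slightly so boundary edges are counted even
    # when Canny places them one pixel outside the depth-band region.
    grown = cv2.dilate(candidate.astype(np.uint8), np.ones((5, 5), np.uint8)) > 0

    object_pixels = int(np.count_nonzero(candidate))
    edge_pixels = int(np.count_nonzero(grown & (edges > 0)))
    return object_pixels >= min_object_pixels and edge_pixels >= min_edge_pixels


if __name__ == "__main__":
    # Tiny synthetic frame so the sketch runs stand-alone.
    bgr = np.zeros((120, 160, 3), np.uint8)
    depth = np.full((120, 160), 1.5, np.float32)
    bgr[40:80, 60:100] = (0, 0, 255)   # a red "object" patch
    depth[40:80, 60:100] = 0.35        # sitting inside the depth band
    print("grasp succeeded:", assess_grasp(bgr, depth))
```

A real system would restrict the check to a region of interest around the detected gripper and calibrate the thresholds per camera, but the structure of the decision (colour segmentation, depth gating, edge evidence) follows the cue combination named in the abstract.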
Pages: 243-254
Page count: 12