Learning Object-specific Grasp Affordance Densities

Cited by: 0
Authors:
Detry, R. [1 ]
Baseski, E. [1 ]
Popovic, M. [1 ]
Touati, Y. [1 ]
Krueger, N. [1 ]
Kroemer, O. [2 ]
Peters, J. [2 ]
Piater, J. [1 ]
Affiliations:
[1] Univ Liege, B-4000 Liege, Belgium
[2] MPI Biol Cybernet, Tubingen, Germany
Keywords:
DOI: none
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
This paper addresses the problem of learning and representing object grasp affordances, i.e. object-gripper relative configurations that lead to successful grasps. The purpose of grasp affordances is to organize and store all the knowledge an agent has about grasping an object, in order to facilitate reasoning about grasping solutions and their achievability. The affordance representation consists of a continuous probability density function defined on the 6D gripper pose space (3D position and orientation) within an object-relative reference frame. Grasp affordances are initially learned from various sources, e.g. from imitation or from visual cues, yielding grasp hypothesis densities. Grasp densities are attached to a learned 3D visual object model, and pose estimation of the visual model allows a robotic agent to execute samples from a grasp hypothesis density under various object poses. Grasp outcomes are used to learn grasp empirical densities, i.e. densities over grasps that have been confirmed through experience. We show results of learning grasp hypothesis densities from both imitation and visual cues, and present grasp empirical densities learned from physical experience by a robot.
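The hypothesis-to-empirical refinement loop described in the abstract can be sketched roughly as follows. This is a simplified illustration, not the paper's method: it uses isotropic Gaussian kernels on a Euclidean 6D pose parametrization (the paper works with proper nonparametric densities on SE(3)), and the function names, kernel bandwidth, and `try_grasp` callback are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hypothesis_density(grasps, weights, sigma=0.01, n=1):
    """Draw n gripper poses from a kernel-density grasp model.

    grasps  : (K, 6) array of kernel centers [x, y, z, roll, pitch, yaw]
              (simplified Euclidean stand-in for SE(3) poses).
    weights : (K,) nonnegative kernel weights, e.g. from imitation counts.
    """
    # Pick a kernel per sample according to the weights, then perturb it.
    idx = rng.choice(len(grasps), size=n, p=weights / weights.sum())
    return grasps[idx] + rng.normal(0.0, sigma, size=(n, 6))

def refine_to_empirical(grasps, weights, try_grasp, n_trials=100):
    """Execute samples from the hypothesis density; keep the successes.

    try_grasp : callback (hypothetical) that executes a 6D pose on the
                robot and returns True on a successful grasp.
    Returns the successful poses, which serve as kernel centers of the
    empirical density.
    """
    successes = [pose
                 for pose in sample_hypothesis_density(grasps, weights, n=n_trials)
                 if try_grasp(pose)]
    # reshape so an empty result still has 6 columns
    return np.array(successes).reshape(-1, 6)
```

The key design point carried over from the abstract is that both densities share one representation (weighted pose kernels), so the empirical density can simply re-weight or re-seed the hypothesis density with experimentally confirmed grasps.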
Pages: 92 / +
Page count: 3