Learning Object-specific Grasp Affordance Densities

Cited by: 0
Authors
Detry, R. [1 ]
Baseski, E. [1 ]
Popovic, M. [1 ]
Touati, Y. [1 ]
Krueger, N. [1 ]
Kroemer, O. [2 ]
Peters, J. [2 ]
Piater, J. [1 ]
Institutions
[1] Univ Liege, B-4000 Liege, Belgium
[2] MPI Biol Cybernet, Tubingen, Germany
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper addresses the problem of learning and representing object grasp affordances, i.e. object-gripper relative configurations that lead to successful grasps. The purpose of grasp affordances is to organize and store all the knowledge an agent has about grasping an object, in order to facilitate reasoning about grasping solutions and their achievability. The affordance representation is a continuous probability density function defined on the 6D gripper pose space (3D position and orientation) within an object-relative reference frame. Grasp affordances are initially learned from various sources, e.g. from imitation or from visual cues, yielding grasp hypothesis densities. Grasp densities are attached to a learned 3D visual object model, and pose estimation of the visual model allows a robotic agent to execute samples from a grasp hypothesis density under various object poses. Grasp outcomes are then used to learn grasp empirical densities, i.e. densities over grasps that have been confirmed through experience. We show the results of learning grasp hypothesis densities from both imitation and visual cues, and present grasp empirical densities learned from physical experience by a robot.
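As a rough illustration of the idea (not the authors' implementation), a grasp hypothesis density can be sketched as a kernel density estimate over object-relative gripper poses, and an empirical density obtained by keeping the sampled grasps that succeeded on execution. The class and function names below, and the simplification to 3D positions instead of full 6D poses, are assumptions made for this sketch:

```python
import math
import random

class GraspDensity:
    """Nonparametric density over gripper configurations, represented by
    weighted kernel centers (position-only sketch; the paper's densities
    are defined over full 6D position-and-orientation poses)."""

    def __init__(self, particles, weights, bandwidth=0.02):
        self.particles = particles    # object-relative positions, (x, y, z)
        self.weights = weights        # importance weights, summing to 1
        self.bandwidth = bandwidth    # isotropic Gaussian kernel width

    def sample(self):
        # Draw a kernel center by weight, then perturb with Gaussian noise.
        p = random.choices(self.particles, weights=self.weights, k=1)[0]
        return tuple(x + random.gauss(0.0, self.bandwidth) for x in p)

    def evaluate(self, pose):
        # Kernel density estimate at `pose`.
        norm = (2.0 * math.pi * self.bandwidth ** 2) ** -1.5
        total = 0.0
        for p, w in zip(self.particles, self.weights):
            d2 = sum((a - b) ** 2 for a, b in zip(pose, p))
            total += w * norm * math.exp(-d2 / (2.0 * self.bandwidth ** 2))
        return total

def refine_to_empirical(density, outcomes):
    """Keep only the grasps confirmed by physical execution, re-normalizing
    the weights -- a crude stand-in for learning an empirical density."""
    kept = [p for p, ok in zip(density.particles, outcomes) if ok]
    w = [1.0 / len(kept)] * len(kept)
    return GraspDensity(kept, w, density.bandwidth)
```

In this sketch, sampling from the hypothesis density proposes grasps to try under the current object pose, and the execution outcomes feed `refine_to_empirical` to concentrate probability mass on grasps that actually work.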
Pages: 92 / +
Number of pages: 3
Related papers
50 records total
  • [21] Object-specific adaptation in the auditory cortex of bats
    Pastyrik, Jan D.
    Firzlaff, Uwe
    JOURNAL OF NEUROPHYSIOLOGY, 2022, 128 (03) : 556 - 567
  • [22] Cortical control of object-specific grasp relies on adjustments of both activity and effective connectivity: a common marmoset study
    Tia, Banty
    Takemi, Mitsuaki
    Kosugi, Akito
    Castagnola, Elisa
    Ansaldo, Alberto
    Nakamura, Takafumi
    Ricci, Davide
    Ushiba, Junichi
    Fadiga, Luciano
    Iriki, Atsushi
    JOURNAL OF PHYSIOLOGY-LONDON, 2017, 595 (23) : 7203 - 7221
  • [23] Learning spatial relations for object-specific segmentation using Bayesian network model
    Iker Gondra
    Fahim Irfan Alam
    Signal, Image and Video Processing, 2014, 8 : 1441 - 1450
  • [24] Learning spatial relations for object-specific segmentation using Bayesian network model
    Gondra, Iker
    Alam, Fahim Irfan
    SIGNAL IMAGE AND VIDEO PROCESSING, 2014, 8 (08) : 1441 - 1450
  • [25] Object-object interaction affordance learning
    Sun, Yu
    Ren, Shaogang
    Lin, Yun
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2014, 62 (04) : 487 - 496
  • [26] Neural representation of object-specific attentional priority
    Liu, Taosheng
    NEUROIMAGE, 2016, 129 : 15 - 24
  • [27] How Object-Specific Are Object Files? Evidence for Integration by Location
    van Dam, Wessel O.
    Hommel, Bernhard
    JOURNAL OF EXPERIMENTAL PSYCHOLOGY-HUMAN PERCEPTION AND PERFORMANCE, 2010, 36 (05) : 1184 - 1192
  • [28] Object-specific figure-ground segregation
    Yu, SX
    Shi, JB
    2003 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOL II, PROCEEDINGS, 2003, : 39 - 45
  • [29] Object-specific priming in apparent motion display
    Yoshida, H
    Kawahara, J
    Maedo, S
    Toshima, T
    JAPANESE JOURNAL OF PSYCHOLOGY, 1995, 66 (05) : 354 - 360
  • [30] Sources of object-specific effects in representational momentum
    Vinson, NG
    Reed, CL
    VISUAL COGNITION, 2002, 9 (1-2) : 41 - 65