A Model of Shared Grasp Affordances from Demonstration

Cited by: 25
Authors
Sweeney, John D. [1 ]
Grupen, Rod [1 ]
Affiliations
[1] Univ Massachusetts, Lab Perceptual Robot, Amherst, MA 01003 USA
Source
HUMANOIDS: 2007 7TH IEEE-RAS INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS | 2007
DOI
10.1109/ICHR.2007.4813845
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Code
0808; 0809
Abstract
This paper presents a hierarchical, statistical topic model for representing the grasp preshapes of a set of objects. Observations provided by teleoperation are clustered into latent affordances shared among all objects. Each affordance defines a joint distribution over position and orientation of the hand relative to the object and conditioned on visual appearance. The parameters of the model are learned using a Gibbs sampling method. After training, the model can be used to compute grasp preshapes for a novel object based on its visual appearance. The model is evaluated experimentally on a set of objects for its ability to generate grasp preshapes that lead to successful grasps, and compared to a baseline approach.
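The abstract describes clustering teleoperated grasp observations into latent affordances shared across objects, with parameters learned by Gibbs sampling. A minimal sketch of that general idea, not the authors' model: here observations are pose feature vectors clustered with a simple Gaussian-mixture Gibbs sampler, and the mixture form, hyperparameters, and the name `gibbs_cluster` are all illustrative assumptions.

```python
import numpy as np

def gibbs_cluster(X, K, iters=200, sigma=0.5, alpha=1.0, seed=0):
    """Gibbs-sample assignments of n pose observations X (n x d) to K
    latent affordance clusters. Spherical-Gaussian likelihood with a
    CRP-like count prior; all hyperparameters are illustrative."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    z = rng.integers(K, size=n)          # random initial assignments
    for _ in range(iters):
        for i in range(n):
            z[i] = -1                    # remove point i from its cluster
            logp = np.empty(K)
            for k in range(K):
                members = X[z == k]
                # cluster mean (prior mean at the origin if cluster is empty)
                mu = members.mean(axis=0) if len(members) else np.zeros(d)
                # log prior (cluster size) + log Gaussian likelihood
                logp[k] = np.log(len(members) + alpha) \
                    - np.sum((X[i] - mu) ** 2) / (2 * sigma ** 2)
            p = np.exp(logp - logp.max())      # normalize in log space
            z[i] = rng.choice(K, p=p / p.sum())
    return z
```

For example, two well-separated groups of hand poses should end up in distinct clusters, which is the behavior the shared-affordance clustering in the abstract relies on.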
Pages: 27-35
Page count: 9
Related Papers
50 records in total
  • [31] Interactive grasp learning based on human demonstration
    Ekvall, S
    Kragic, D
    2004 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-5, PROCEEDINGS, 2004, : 3519 - 3524
  • [32] Visual object-action recognition: Inferring object affordances from human demonstration
    Kjellstrom, Hedvig
    Romero, Javier
    Kragic, Danica
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2011, 115 (01) : 81 - 90
  • [33] MOOC Affordances Model
    Economides, Anastasios A.
    Perifanou, Maria A.
    PROCEEDINGS OF 2018 IEEE GLOBAL ENGINEERING EDUCATION CONFERENCE (EDUCON) - EMERGING TRENDS AND CHALLENGES OF ENGINEERING EDUCATION, 2018, : 599 - 607
  • [34] Robot grasp synthesis from virtual demonstration and topology-preserving environment reconstruction
    Aleotti, Jacopo
    Caselli, Stefano
    2007 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-9, 2007, : 2698 - 2703
  • [35] Grasp planning and implementation for dexterous hands by human demonstration
    Li, JT
    Zhang, YR
    Su, WK
    Guo, WD
    ELEVENTH WORLD CONGRESS IN MECHANISM AND MACHINE SCIENCE, VOLS 1-5, PROCEEDINGS, 2004, : 1838 - 1841
  • [36] Robot Grasp Learning by Demonstration without Predefined Rules
    Fernandez, Cesar
    Asuncion Vicente, Maria
    Pedro Neco, Ramon
    Puerto, Rafael
    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2011, 8 (05): 156 - 168
  • [37] Dynamic grasp recognition within the framework of programming by demonstration
    Zöllner, R
    Rogalla, O
    Dillmann, R
    Zöllner, JM
    ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, 2001, : 418 - 423
  • [38] Modelling the structure of object-independent human affordances of approaching to grasp for robotic hands
    Cotugno, Giuseppe
    Konstantinova, Jelizaveta
    Althoefer, Kaspar
    Nanayakkara, Thrishantha
    PLOS ONE, 2018, 13 (12):
  • [39] Self-supervised learning of grasp dependent tool affordances on the iCub Humanoid robot
    Mar, Tanis
    Tikhanoff, Vadim
    Metta, Giorgio
    Natale, Lorenzo
    2015 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2015, : 3200 - 3206
  • [40] Localizing Handle-Like Grasp Affordances in 3D Point Clouds
    ten Pas, Andreas
    Platt, Robert
    EXPERIMENTAL ROBOTICS, 2016, 109 : 623 - 638