A Model of Shared Grasp Affordances from Demonstration

Cited: 25
Authors
Sweeney, John D. [1 ]
Grupen, Rod [1 ]
Affiliations
[1] Univ Massachusetts, Lab Perceptual Robot, Amherst, MA 01003 USA
DOI
10.1109/ICHR.2007.4813845
Chinese Library Classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline Codes
0808; 0809;
Abstract
This paper presents a hierarchical, statistical topic model for representing the grasp preshapes of a set of objects. Observations provided by teleoperation are clustered into latent affordances shared among all objects. Each affordance defines a joint distribution over position and orientation of the hand relative to the object and conditioned on visual appearance. The parameters of the model are learned using a Gibbs sampling method. After training, the model can be used to compute grasp preshapes for a novel object based on its visual appearance. The model is evaluated experimentally on a set of objects for its ability to generate grasp preshapes that lead to successful grasps, and compared to a baseline approach.
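The training procedure described in the abstract — clustering teleoperated grasp observations into latent affordances shared across objects, with per-object usage of each affordance — follows the same structure as a topic model (objects play the role of documents, affordances of topics, grasp poses of words). A minimal sketch of such a Gibbs sampler is shown below, under simplifying assumptions not taken from the paper: a fixed number of affordances `K`, spherical Gaussian components over pose features (the paper models position and orientation jointly and conditions on visual appearance), and a unit pseudo-count prior on component means.

```python
import numpy as np

def gibbs_shared_affordances(obs, obj_ids, K=3, alpha=1.0, sigma=0.3,
                             n_iter=100, seed=0):
    """Gibbs-sample affordance assignments for grasp observations.

    obs     : (N, D) array of grasp-pose features (one row per demonstration).
    obj_ids : (N,) object index for each observation.
    Returns : (N,) affordance label per observation.

    Simplified sketch: finite mixture with spherical Gaussian components
    shared across all objects, and a Dirichlet(alpha) prior on each
    object's affordance proportions.
    """
    rng = np.random.default_rng(seed)
    obs = np.asarray(obs, dtype=float)
    obj_ids = np.asarray(obj_ids)
    N, D = obs.shape
    z = rng.integers(K, size=N)  # random initial affordance assignments
    for _ in range(n_iter):
        for i in range(N):
            others = np.arange(N) != i
            zo, xo = z[others], obs[others]
            # How often this object uses each affordance (excluding obs i).
            same_obj = obj_ids[others] == obj_ids[i]
            n_doc = np.bincount(zo[same_obj], minlength=K)
            # Component statistics pooled across ALL objects: affordances
            # are shared, so every object's data shapes every component.
            n_k = np.bincount(zo, minlength=K).astype(float)
            sums = np.zeros((K, D))
            np.add.at(sums, zo, xo)
            mu = sums / (n_k[:, None] + 1.0)  # posterior mean, unit pseudo-count
            var = sigma**2 * (1.0 + 1.0 / (n_k + 1.0))  # simplified predictive var
            sq = ((obs[i] - mu) ** 2).sum(axis=1)
            log_lik = -0.5 * sq / var - 0.5 * D * np.log(var)
            log_p = np.log(n_doc + alpha) + log_lik
            p = np.exp(log_p - log_p.max())
            z[i] = rng.choice(K, p=p / p.sum())
    return z

# Toy demo: two well-separated preshape clusters shared across two objects.
poses = np.vstack([np.tile([0.0, 0.0], (10, 1)), np.tile([8.0, 8.0], (10, 1))])
objects = np.array([0, 1] * 10)
labels = gibbs_shared_affordances(poses, objects, K=2, n_iter=50, seed=1)
```

Because each component's statistics are pooled over all objects, a grasp preshape demonstrated on one object can be reused for another — the sharing mechanism the abstract relies on for generalizing to novel objects.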
Pages: 27-35
Page count: 9
Related Papers
50 records total
  • [1] Learning to grasp and extract affordances: the Integrated Learning of Grasps and Affordances (ILGA) model
    Bonaiuto, James
    Arbib, Michael A.
    BIOLOGICAL CYBERNETICS, 2015, 109 (06) : 639 - 669
  • [2] Visual Grasp Affordances From Appearance-Based Cues
    Song, Hyun Oh
    Fritz, Mario
    Gu, Chunhui
    Darrell, Trevor
    2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCV WORKSHOPS), 2011,
  • [3] Learning from Demonstration in Robots using the Shared Circuits Model
    Suleman, Khawaja M. U.
    Awais, Mian M.
    IEEE TRANSACTIONS ON AUTONOMOUS MENTAL DEVELOPMENT, 2014, 6 (04) : 244 - 258
  • [4] Recognizing the grasp intention from human demonstration
    de Souza, Ravin
    El-Khoury, Sahar
    Santos-Victor, Jose
    Billard, Aude
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2015, 74 : 108 - 121
  • [5] Learning Grasp Affordances with Variable Centroid Offsets
    Palmer, Thomas J.
    Fagg, Andrew H.
    2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, 2009, : 1265 - 1271
  • [6] Grasp Affordances in Bistable Perception of the Necker Cube
    Brooks, Thomas R.
    Frank, Till D.
    Dixon, James A.
    NONLINEAR DYNAMICS PSYCHOLOGY AND LIFE SCIENCES, 2020, 24 (02) : 143 - 157
  • [7] Grasp Planning Based on Strategy Extracted from Demonstration
    Lin, Yun
    Sun, Yu
    2014 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2014), 2014, : 4458 - 4463
  • [8] SingleDemoGrasp: Learning to Grasp From a Single Image Demonstration
    Sefat, Amir Mehman
    Angleraud, Alexandre
    Rahtu, Esa
    Pieters, Roel
    2022 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2022, : 390 - 396
  • [9] The Sound of Grasp Affordances: Influence of Grasp-Related Size of Categorized Objects on Vocalization
    Vainio, Lari
    Vainio, Martti
    Lipsanen, Jari
    Ellis, Rob
    COGNITIVE SCIENCE, 2019, 43 (10)