Domain Randomization and Generative Models for Robotic Grasping

Cited by: 0
Authors
Tobin, Josh [1,2]
Biewald, Lukas [3]
Duan, Rocky [4]
Andrychowicz, Marcin [1]
Handa, Ankur [5]
Kumar, Vikash [1,2]
McGrew, Bob [1]
Ray, Alex [1]
Schneider, Jonas [1]
Welinder, Peter [1]
Zaremba, Wojciech [1]
Abbeel, Pieter [2,4]
Affiliations
[1] OpenAI, San Francisco, CA 94110 USA
[2] Univ Calif Berkeley, Berkeley, CA 94720 USA
[3] Weights & Biases Inc, San Francisco, CA USA
[4] Embodied Intelligence, Emeryville, CA USA
[5] NVIDIA, Santa Clara, CA USA
Keywords: none listed
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Deep learning-based robotic grasping has made significant progress thanks to algorithmic improvements and increased data availability. However, state-of-the-art models are often trained on as few as hundreds or thousands of unique object instances, and as a result generalization can be a challenge. In this work, we explore a novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis. We generate millions of unique, unrealistic procedurally generated objects, and train a deep neural network to perform grasp planning on these objects. Since the distribution of successful grasps for a given object can be highly multimodal, we propose an autoregressive grasp planning model that maps sensor inputs of a scene to a probability distribution over possible grasps. This model allows us to sample grasps efficiently at test time (or avoid sampling entirely). We evaluate our model architecture and data generation pipeline in simulation and the real world. We find we can achieve a >90% success rate on previously unseen realistic objects at test time in simulation despite having only been trained on random objects. We also demonstrate an 80% success rate on real-world grasp attempts despite having only been trained on random simulated objects.
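The abstract's central modeling idea is an autoregressive factorization of the grasp distribution: writing a grasp as coordinates g = (g_1, ..., g_D) and the sensor observation as o, the model represents p(g | o) = p(g_1 | o) · p(g_2 | g_1, o) · ... · p(g_D | g_1..g_{D-1}, o), so each conditional is cheap to evaluate and a grasp can be sampled one coordinate at a time. The following is a minimal PyTorch sketch of that factorization, not the authors' implementation; the per-coordinate bin discretization, the network sizes, and all names (AutoregressiveGraspSampler, n_bins, etc.) are illustrative assumptions.

    # Minimal sketch (illustrative, not the paper's architecture) of an
    # autoregressive grasp model: p(g | o) = prod_i p(g_i | g_1..g_{i-1}, o).
    # Each grasp coordinate is discretized into bins, so every conditional
    # is a categorical distribution and sampling takes one forward pass
    # per coordinate.
    import torch
    import torch.nn as nn

    class AutoregressiveGraspSampler(nn.Module):
        def __init__(self, obs_dim, grasp_dims=6, n_bins=64, hidden=256):
            super().__init__()
            # Shared embedding of the sensor observation o.
            self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
            # Head i predicts bin logits for coordinate g_i given the
            # observation embedding and the i coordinates sampled so far.
            self.heads = nn.ModuleList(
                nn.Sequential(nn.Linear(hidden + i, hidden), nn.ReLU(),
                              nn.Linear(hidden, n_bins))
                for i in range(grasp_dims))

        def sample(self, obs):
            """Draw one grasp per observation, coordinate by coordinate."""
            h = self.encoder(obs)
            coords = []
            for head in self.heads:
                logits = head(torch.cat([h] + coords, dim=-1))
                bins = torch.distributions.Categorical(logits=logits).sample()
                # Map the sampled bin index back to a value in [-1, 1].
                val = bins.float() / (logits.shape[-1] - 1) * 2.0 - 1.0
                coords.append(val.unsqueeze(-1))
            return torch.cat(coords, dim=-1)  # shape: (batch, grasp_dims)

Discretizing each coordinate keeps every conditional a simple softmax, which is one way to capture the highly multimodal grasp distributions the abstract describes while still sampling efficiently at test time.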
Pages: 3482-3489
Page count: 8