Multifingered Grasp Planning via Inference in Deep Neural Networks: Outperforming Sampling by Learning Differentiable Models

Cited by: 38
Authors
Lu, Qingkai [1 ,2 ]
Van der Merwe, Mark [1 ,2 ]
Sundaralingam, Balakumar [1 ,2 ]
Hermans, Tucker [1 ,2 ]
Affiliations
[1] Univ Utah, Sch Comp, Salt Lake City, UT 84112 USA
[2] Univ Utah, Robot Ctr, Salt Lake City, UT 84112 USA
Funding
U.S. National Science Foundation
Keywords
Robots; Planning; Grasping; Visualization; Artificial neural networks; Three-dimensional displays; Optimization; Algorithm
DOI
10.1109/MRA.2020.2976322
CLC classification
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
We propose a novel approach to multifingered grasp planning that leverages learned deep neural network (DNN) models. We trained a voxel-based 3D convolutional neural network (CNN) to predict grasp-success probability as a function of both the visual information of an object and the grasp configuration. From this, we formulated grasp planning as inferring the grasp configuration that maximizes the probability of grasp success. In addition, we learned a prior over grasp configurations as a mixture-density network (MDN) conditioned on our voxel-based object representation. We show that this object-conditional prior improves grasp inference when used with the learned grasp-success prediction network, compared to a learned, object-agnostic prior or an uninformed uniform prior. Our work is the first to directly plan high-quality multifingered grasps in configuration space using a DNN without the need for an external planner. We validated our inference method by performing multifingered grasping on a physical robot. Our experimental results show that our planning method outperforms existing neural-network (NN)-based grasp-planning methods.
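
The abstract frames planning as inference: maximize the learned success probability p(success | grasp, object), optionally combined with an object-conditional MDN prior p(grasp | object). The following PyTorch sketch illustrates that formulation only; the network architectures, dimensions, and names (GraspSuccessNet, GraspPriorMDN, VOXEL, GRASP_DIM, N_MIX) are assumptions for illustration, not the authors' implementation, and gradient ascent is one possible inference strategy consistent with the differentiable models described above.

# Illustrative sketch (PyTorch), not the authors' implementation: a voxel-based 3D CNN
# scores grasp success for a (voxel grid, grasp configuration) pair, an MDN gives an
# object-conditional prior over grasp configurations, and planning is gradient ascent on
# log p(success | g, o) + log p(g | o) with respect to the grasp configuration g.
import torch
import torch.nn as nn

VOXEL = 32       # voxel-grid resolution (assumed)
GRASP_DIM = 14   # grasp configuration: palm pose + finger joints (dimension assumed)
N_MIX = 5        # number of MDN mixture components (assumed)


class GraspSuccessNet(nn.Module):
    """3D CNN over the voxel grid, fused with the grasp configuration; outputs a logit."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 16 * (VOXEL // 4) ** 3
        self.head = nn.Sequential(
            nn.Linear(feat + GRASP_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1),  # logit of the grasp-success probability
        )

    def forward(self, voxels, grasp):
        z = self.conv(voxels)
        return self.head(torch.cat([z, grasp], dim=-1)).squeeze(-1)


class GraspPriorMDN(nn.Module):
    """Object-conditional mixture-density prior p(grasp | voxels), diagonal Gaussians."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(), nn.Flatten())
        feat = 8 * (VOXEL // 2) ** 3
        self.params = nn.Linear(feat, N_MIX * (1 + 2 * GRASP_DIM))

    def mixture(self, voxels):
        p = self.params(self.conv(voxels))
        logits, mu, log_sigma = torch.split(
            p, [N_MIX, N_MIX * GRASP_DIM, N_MIX * GRASP_DIM], dim=-1)
        comp = torch.distributions.Independent(
            torch.distributions.Normal(mu.view(-1, N_MIX, GRASP_DIM),
                                       log_sigma.view(-1, N_MIX, GRASP_DIM).exp()), 1)
        return torch.distributions.MixtureSameFamily(
            torch.distributions.Categorical(logits=logits), comp)


def plan_grasp(success_net, prior, voxels, steps=200, lr=1e-2):
    """MAP-style inference: maximize log p(success | g, o) + log p(g | o) over grasp g."""
    with torch.no_grad():                   # fix the prior's parameters for this object
        mix = prior.mixture(voxels)
    g = mix.sample().requires_grad_(True)   # initialize from the learned prior
    opt = torch.optim.Adam([g], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        objective = (nn.functional.logsigmoid(success_net(voxels, g))
                     + mix.log_prob(g))
        (-objective.sum()).backward()       # ascend by minimizing the negative objective
        opt.step()
    return g.detach()


if __name__ == "__main__":
    occupancy = torch.zeros(1, 1, VOXEL, VOXEL, VOXEL)  # placeholder voxel grid
    grasp = plan_grasp(GraspSuccessNet(), GraspPriorMDN(), occupancy)
    print(grasp.shape)  # (1, GRASP_DIM)

The sketch only shows how a differentiable success model lets the grasp configuration be optimized directly in configuration space rather than sampled and re-scored; the paper's actual networks, grasp parameterization, and optimizer details differ.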
Pages: 55-65
Number of pages: 11