Robot learning by Single Shot Imitation for Manipulation Tasks

Cited by: 2
Authors
Vohra, Mohit [1 ]
Behera, Laxmidhar [1 ,2 ]
Affiliations
[1] Indian Inst Technol, Kanpur, Uttar Pradesh, India
[2] TCS Innovat Labs, Noida, India
Source
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2022
Keywords
DOI
10.1109/IJCNN55064.2022.9892529
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work, we present a programming-by-imitation robotic manipulation system that can be programmed for various tasks from only a single demonstration. The system is built on three components: i) scene parsing, ii) action classification, and iii) dynamic primitive shape fitting. All of these modules leverage state-of-the-art techniques in 2D and 3D visual perception. The primary contribution of this systems paper is an imitation-based robotic system that can replicate highly complex tasks by executing elementary task-specific program templates, thus avoiding extensive and exhaustive manual coding. In addition, we introduce a primitive shape fitting module that makes it easier to grasp objects of various shapes and sizes. To evaluate its performance, the proposed robotic system has been tested on a multiple-object sorting task, reporting 91.8% accuracy in detecting human-demonstrated actions, 76.1% accuracy in action execution, and 80% overall accuracy. We also examine the system's component-wise performance to demonstrate its efficacy and deployability in industrial and household scenarios.
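The abstract's three-stage pipeline (scene parsing, action classification, primitive shape fitting, then execution of a task-specific program template) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: all names (`Observation`, `fit_primitive`, `program_from_demo`) and the label-to-primitive lookup are hypothetical stand-ins for the paper's learned perception modules.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One parsed frame of the human demonstration (hypothetical format)."""
    objects: list  # object labels produced by the scene-parsing stage
    action: str    # label produced by the action-classification stage


def fit_primitive(object_label: str) -> str:
    """Toy stand-in for dynamic primitive shape fitting: map each object
    to the primitive (box/cylinder/sphere) used to plan a grasp."""
    lookup = {"cube": "box", "can": "cylinder", "ball": "sphere"}
    return lookup.get(object_label, "box")  # fall back to a box grasp


def program_from_demo(demo: list) -> list:
    """Compile a single demonstration into elementary program steps,
    avoiding hand-coding each task: (action, object, grasp primitive)."""
    program = []
    for obs in demo:
        for obj in obs.objects:
            program.append((obs.action, obj, fit_primitive(obj)))
    return program


# A one-shot demonstration of sorting a can: pick it, then place it.
demo = [Observation(objects=["can"], action="pick"),
        Observation(objects=["can"], action="place")]
print(program_from_demo(demo))
# → [('pick', 'can', 'cylinder'), ('place', 'can', 'cylinder')]
```

In the actual system the two perception stages are learned 2D/3D vision models; the sketch only shows how their outputs could be composed into a reusable task program.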
Pages: 7
Related Papers
50 records in total
  • [31] Learning of temporal sequences for motor control of a robot system in complex manipulation tasks
    Hohm, K
    Liu, YM
    Boetselaars, HG
    INTELLIGENT AUTONOMOUS SYSTEMS: IAS-5, 1998, : 280 - 287
  • [32] A kernel-based approach to learning contact distributions for robot manipulation tasks
    Kroemer, Oliver
    Leischnig, Simon
    Luettgen, Stefan
    Peters, Jan
    AUTONOMOUS ROBOTS, 2018, 42 (03) : 581 - 600
  • [33] Curriculum Learning for Robot Manipulation Tasks With Sparse Reward Through Environment Shifts
    Sayar, Erdi
    Iacca, Giovanni
    Knoll, Alois
    IEEE ACCESS, 2024, 12 : 46626 - 46635
  • [34] Learning Dense Visual Descriptors using Image Augmentations for Robot Manipulation Tasks
    Graf, Christian
    Adrian, David B.
    Weil, Joshua
    Gabriel, Miroslav
    Schillinger, Philipp
    Spies, Markus
    Neumann, Heiko
    Kupcsik, Andras
    CONFERENCE ON ROBOT LEARNING, VOL 205, 2022, 205 : 871 - 880
  • [36] A Process-Oriented Framework for Robot Imitation Learning in Human-Centered Interactive Tasks
    Hou, Muhan
    Hindriks, Koen
    Eiben, A. E.
    Baraka, Kim
    2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, 2023, : 1745 - 1752
  • [37] Enhancing construction robot learning for collaborative and long-horizon tasks using generative adversarial imitation learning
    Li, Rui
    Zou, Zhengbo
    ADVANCED ENGINEERING INFORMATICS, 2023, 58
  • [38] Deep Imitation Learning for Broom-Manipulation Tasks Using Small-Sized Training Data
    Sasatake, Harumo
    Tasaki, Ryosuke
    Uchiyama, Naoki
    2020 7TH INTERNATIONAL CONFERENCE ON CONTROL, DECISION AND INFORMATION TECHNOLOGIES (CODIT'20), VOL 1, 2020, : 733 - 738
  • [39] Imitation Learning of Long-Horizon Manipulation Tasks Through Temporal Sub-action Sequencing
    Singh, Niharika
    Dutta, Samrat
    Jain, Aditya
    Prakash, Ravi
    Majumder, Anima
    Sinha, Rajesh
    Behera, Laxmidhar
    Sandhan, Tushar
    COMPUTER VISION AND IMAGE PROCESSING, CVIP 2023, PT II, 2024, 2010 : 347 - 361
  • [40] Learning Articulated Constraints From a One-Shot Demonstration for Robot Manipulation Planning
    Liu, Yizhou
    Zha, Fusheng
    Sun, Lining
    Li, Jingxuan
    Li, Mantian
    Wang, Xin
    IEEE ACCESS, 2019, 7 : 172584 - 172596