Co-manipulation of soft-materials estimating deformation from depth images

Cited by: 4
Authors
Nicola, G. [1 ]
Villagrossi, E. [1 ]
Pedrocchi, N. [1 ]
Affiliations
[1] Natl Res Council Italy, Inst Intelligent Ind Technol & Syst Adv Mfg, Via A Corti 12, I-20133 Milan, Italy
Keywords
Human-robot collaborative transportation; Soft materials co-manipulation; Vision-based robot manual guidance;
DOI
10.1016/j.rcim.2023.102630
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Human-robot manipulation of soft materials, such as fabrics, composites, and sheets of paper/cardboard, is a challenging operation with several relevant industrial applications. Estimating the deformation state of the manipulated material is one of the main challenges. Existing methods provide an indirect measure by computing the human-robot relative distance. In this paper, we develop a data-driven model that estimates the deformation state of the material from a depth image through a Convolutional Neural Network (CNN). First, we define the deformation state of the material as the relative roto-translation between the current robot pose and the human grasping position. The model estimates the current deformation state through a CNN, specifically a DenseNet-121 pretrained on ImageNet. The delta between the current and the desired deformation state is fed to the robot controller, which outputs twist commands. The paper describes the approach developed to acquire and preprocess the dataset and to train the model. The model is compared with the current state-of-the-art method based on a camera skeletal tracker. Results show that the approach achieves better performance and avoids the drawbacks of a skeletal tracker. The model was also validated on three different materials, showing its generalization ability. Finally, we studied the model performance as a function of network architecture and dataset size to minimize the time required for dataset acquisition.
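The control loop described in the abstract (CNN deformation estimate, delta against a desired state, twist command to the robot) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the DenseNet-121 estimator is stubbed out with a placeholder, and the gain and target values are assumptions chosen for the example.

```python
import numpy as np

def estimate_deformation(depth_image):
    """Stand-in for the paper's DenseNet-121 regressor: maps a depth image
    to a 6-DoF deformation state [tx, ty, tz, rx, ry, rz].
    Returns a fixed placeholder here; the real model is a trained CNN."""
    return np.zeros(6)

def twist_command(current_state, desired_state, gain=0.5):
    """Proportional law: the delta between the current and desired
    deformation state is scaled into a twist (v, omega) for the robot.
    The gain value is an illustrative assumption."""
    error = desired_state - current_state
    return gain * error

depth = np.zeros((480, 640))                          # placeholder depth frame
desired = np.array([0.0, 0.0, 0.3, 0.0, 0.0, 0.0])    # hypothetical target state
twist = twist_command(estimate_deformation(depth), desired)
```

In the paper's scheme the estimator runs on each incoming depth frame, so the twist is recomputed continuously as the human moves the material.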
Pages: 13
References (42 in total)
  • [1] Human-robot co-manipulation of soft materials: enable a robot manual guidance using a depth map feedback
    Nicola, Giorgio
    Villagrossi, Enrico
    Pedrocchi, Nicola
    2022 31ST IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (IEEE RO-MAN 2022), 2022, : 498 - 504
  • [2] Robotic co-manipulation of deformable linear objects for large deformation tasks
    Almaghout, Karam
    Cherubini, Andrea
    Klimchik, Alexandr
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2024, 175
  • [3] Social Pressure in Co-Manipulation: From Verification of Refrains in Communication During Fusion Avatar Manipulation
    Sono, Taichi
    Osawa, Hirotaka
    COLLABORATION TECHNOLOGIES AND SOCIAL COMPUTING, COLLABTECH 2023, 2023, 14199 : 226 - 233
  • [4] Robots Learning from Robots: A Proof of Concept Study for Co-Manipulation Tasks
    Peternel, Luka
    Ajoudani, Arash
    2017 IEEE-RAS 17TH INTERNATIONAL CONFERENCE ON HUMANOID ROBOTICS (HUMANOIDS), 2017, : 484 - 490
  • [5] Use of CNNs for Estimating Depth from Stereo Images
    Satushe, Vaidehi
    Vyas, Vibha
    SMART TRENDS IN COMPUTING AND COMMUNICATIONS, VOL 1, SMARTCOM 2024, 2024, 945 : 45 - 58
  • [6] Depth Generation Network: Estimating Real World Depth from Stereo and Depth Images
    Dong, Zhipeng
    Gao, Yi
    Ren, Qinyuan
    Yan, Yunhui
    Chen, Fei
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 7201 - 7206
  • [7] Can Robots Mold Soft Plastic Materials by Shaping Depth Images?
    Gursoy, Ege
    Tarbouriech, Sonny
    Cherubini, Andrea
    IEEE TRANSACTIONS ON ROBOTICS, 2023, 39 (05) : 3620 - 3635
  • [8] A combined approach for estimating patchlets from PMD depth images and stereo intensity images
    Beder, Christian
    Bartczak, Bogumil
    Koch, Reinhard
    PATTERN RECOGNITION, PROCEEDINGS, 2007, 4713 : 11 - +
  • [9] Estimating backfat depth, loin depth, and intramuscular fat percentage from ultrasound images in swine
    Peppmeier, Z. C.
    Howard, J. T.
    Knauer, M. T.
    Leonard, S. M.
    ANIMAL, 2023, 17 (10)
  • [10] Extracting Interpretable EEG Features from a Deep Learning Model to Assess the Quality of Human-Robot Co-manipulation
    Manjunatha, Hemanth
    Esfahani, Ehsan T.
    2021 10TH INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING (NER), 2021, : 339 - 342