6D pose estimation of textureless shiny objects using random ferns for bin-picking

Cited: 0
Authors
Rodrigues, Jose Jeronimo [1 ,3 ]
Kim, Jun-Sik [1 ]
Furukawa, Makoto [4 ]
Xavier, Joao [2 ,3 ]
Aguiar, Pedro [2 ,3 ]
Kanade, Takeo [1 ]
Affiliations
[1] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
[2] Univ Tecn Lisboa, Inst Super Tecn, Lisbon, Portugal
[3] Univ Tecn Lisboa, Inst Syst & Robot, Lisbon, Portugal
[4] Honda Engn Co Ltd, Kyoto, Japan
Keywords
RECOGNITION;
DOI
N/A
CLC Number
TP [Automation & Computer Technology]
Subject Classification Code
0812
Abstract
We address the problem of 6D pose estimation of a textureless, shiny object from single-view 2D images for a bin-picking task. For a textureless object such as a mechanical part, conventional visual feature matching usually fails due to the absence of rich texture features. Hierarchical template matching assumes that a few templates can cover all object appearances; however, the appearance of a shiny object depends strongly on its pose and illumination. Furthermore, in a bin-picking task we must cope with partial occlusions, shadows, and inter-reflections. In this paper, we propose a purely data-driven method to tackle the pose estimation problem. Motivated by photometric stereo, we build an imaging system with multiple lights in which each image channel is captured under a different lighting condition. In an offline stage, we capture images of the object in several poses. Then, we train random ferns to map the appearance of small image patches into votes on the pose space. At runtime, each patch of the input image votes on possible pose hypotheses. We further show how to refine the discretized pose hypotheses into more accurate pose estimates. Our experiments show that the proposed method detects and estimates the poses of textureless, shiny objects accurately and robustly within half a second.
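The core mechanism the abstract describes is a voting scheme: each random fern hashes a small image patch into a leaf via a few binary intensity comparisons, and each leaf stores votes over a discretized pose space. The Python sketch below is a minimal illustration of that idea under simplifying assumptions: single-channel patches (the paper's imaging system instead captures each channel under a different light), a flat histogram vote table, and hypothetical names and parameters (`RandomFern`, `estimate_pose_bin`, patch size, test and bin counts). It is not the authors' implementation.

```python
import numpy as np

class RandomFern:
    """One fern: a set of random binary intensity comparisons in a patch.
    The resulting bits index a table of votes over discretized poses.
    (Illustrative sketch; not the paper's implementation.)"""

    def __init__(self, patch_size, num_tests, num_pose_bins, rng):
        # Random pixel pairs (A, B) to compare; each point is (row, col).
        self.pairs = rng.integers(0, patch_size, size=(num_tests, 2, 2))
        # One vote histogram over pose bins per fern leaf.
        self.votes = np.zeros((2 ** num_tests, num_pose_bins))

    def leaf(self, patch):
        # Each binary test asks: is pixel A brighter than pixel B?
        bits = 0
        for (ay, ax), (by, bx) in self.pairs:
            bits = (bits << 1) | int(patch[ay, ax] > patch[by, bx])
        return bits

    def train(self, patch, pose_bin):
        # Offline stage: patches from images with known pose cast a vote.
        self.votes[self.leaf(patch), pose_bin] += 1

    def vote(self, patch):
        # Runtime: return this fern's vote histogram for the patch.
        return self.votes[self.leaf(patch)]

def estimate_pose_bin(ferns, patches):
    """Accumulate votes from all patches across all ferns and return
    the discretized pose hypothesis with the most votes."""
    total = sum(f.vote(p) for f in ferns for p in patches)
    return int(np.argmax(total))

# Usage sketch: 20 ferns of 8 tests each over 16x16 patches, 72 pose bins.
rng = np.random.default_rng(0)
ferns = [RandomFern(16, 8, 72, rng) for _ in range(20)]
```

Ferns trade the branching structure of random-forest trees for flat leaf tables combined in a semi-naive-Bayes fashion, so both training and lookup cost only a handful of pixel comparisons per patch, which is consistent with the sub-half-second runtime the abstract reports.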
Pages: 3334 - 3341
Page count: 8
Related Papers
50 records in total
  • [21] A fast monocular 6D pose estimation method for textureless objects based on perceptual hashing and template matching
    Araya-Martinez, Jose Moises
    Matthiesen, Vinicius Soares
    Bogh, Simon
    Lambrecht, Jens
    de Figueiredo, Rui Pimentel
    FRONTIERS IN ROBOTICS AND AI, 2025, 11
  • [22] 6D Pose Estimation of Transparent Objects Using Synthetic Data
    Byambaa, Munkhtulga
    Koutaki, Gou
    Choimaa, Lodoiravsal
    FRONTIERS OF COMPUTER VISION (IW-FCV 2022), 2022, 1578 : 3 - 17
  • [23] Data-Driven Object Pose Estimation in a Practical Bin-Picking Application
    Kozak, Viktor
    Sushkov, Roman
    Kulich, Miroslav
    Preucil, Libor
    SENSORS, 2021, 21 (18)
  • [24] A Novel Metric-Learning-Based Method for Multi-Instance Textureless Objects' 6D Pose Estimation
    Wu, Chenrui
    Chen, Long
    Wu, Shiqing
    APPLIED SCIENCES-BASEL, 2021, 11 (22)
  • [25] Real-time 3D pose estimation of small ring-shaped bin-picking objects using deep learning and ICP algorithm
    Lee, J.
    Lee, M.
    Kang, S.-S.
    Park, S.-Y.
    JOURNAL OF INSTITUTE OF CONTROL, ROBOTICS AND SYSTEMS, 2019, 25 (09): 760 - 769
  • [26] 6D Pose Estimation of Objects: Recent Technologies and Challenges
    He, Zaixing
    Feng, Wuxi
    Zhao, Xinyue
    Lv, Yongfeng
    APPLIED SCIENCES-BASEL, 2021, 11 (01): : 1 - 18
  • [27] Marker-Less 3d Object Recognition and 6d Pose Estimation for Homogeneous Textureless Objects: An RGB-D Approach
    Hajari, Nasim
    Bustillo, Gabriel Lugo
    Sharma, Harsh
    Cheng, Irene
    SENSORS, 2020, 20 (18) : 1 - 22
  • [28] Sim-to-Real 6D Object Pose Estimation via Iterative Self-training for Robotic Bin Picking
    Chen, Kai
    Cao, Rui
    James, Stephen
    Li, Yichuan
    Liu, Yun-Hui
    Abbeel, Pieter
    Dou, Qi
    COMPUTER VISION, ECCV 2022, PT XXXIX, 2022, 13699 : 533 - 550
  • [29] Mutual Hypothesis Verification for 6D Pose Estimation of Natural Objects
    Park, Kiru
    Prankl, Johann
    Vincze, Markus
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017), 2017, : 2192 - 2199
  • [30] CAD-based Pose Estimation Design for Random Bin Picking using a RGB-D Camera
    Song, Kai-Tai
    Wu, Cheng-Hei
    Jiang, Sin-Yi
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2017, 87: 455 - 470