Median-shape Representation Learning for Category-level Object Pose Estimation in Cluttered Environments

Cited: 1
Authors
Tatemichi, Hiroki [1 ]
Kawanishi, Yasutomo [1 ]
Deguchi, Daisuke [1 ]
Ide, Ichiro [1 ]
Amma, Ayako [2 ]
Murase, Hiroshi [1 ]
Affiliations
[1] Nagoya Univ, Nagoya, Aichi, Japan
[2] Toyota Motor Co Ltd, Toyota, Japan
Keywords
RECOGNITION;
DOI
10.1109/ICPR48806.2021.9412318
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose an occlusion-robust pose estimation method for an unknown object instance in an object category from a depth image. In a cluttered environment, objects often occlude each other. For estimating the pose of an object in such a situation, a method that de-occludes the unobservable area of the object would be effective. However, there are two difficulties: occlusion offsets the center of the actual object from the center of its observable area, and different instances in a category may have different shapes. To cope with these difficulties, we propose a two-stage Encoder-Decoder model to extract features from objects whose centers are aligned to the image center. In the model, we also propose the Median-shape Reconstructor as the second stage to absorb shape variations within a category. By evaluating the method with both a large-scale virtual dataset and a real dataset, we confirmed that the proposed method achieves good performance on pose estimation of an occluded object from a depth image.
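The abstract describes two ideas: re-centering the object so its actual center (not the center of the visible region) aligns with the image center, and absorbing intra-category shape variation with a median shape. The toy sketch below illustrates both geometrically with numpy; it is not the paper's learned Encoder-Decoder or Median-shape Reconstructor, only a hand-coded analogue, and the function names `recenter` and `median_shape` are our own.

```python
import numpy as np

def recenter(depth):
    """Shift the visible (nonzero-depth) region so its centroid lies at the
    image center. The paper's first-stage Encoder-Decoder learns this
    alignment from data; here we just compute it from the observable pixels."""
    ys, xs = np.nonzero(depth)
    h, w = depth.shape
    dy = int(round(h / 2 - ys.mean()))  # vertical shift toward image center
    dx = int(round(w / 2 - xs.mean()))  # horizontal shift toward image center
    out = np.zeros_like(depth)
    # Copy the valid overlap between the shifted and original frames.
    ysrc = slice(max(0, -dy), min(h, h - dy))
    xsrc = slice(max(0, -dx), min(w, w - dx))
    out[max(0, dy):max(0, dy) + (ysrc.stop - ysrc.start),
        max(0, dx):max(0, dx) + (xsrc.stop - xsrc.start)] = depth[ysrc, xsrc]
    return out

def median_shape(aligned_instances):
    """Per-pixel median over aligned depth maps of different instances:
    a crude analogue of the category median shape that the second-stage
    Median-shape Reconstructor targets."""
    return np.median(np.stack(aligned_instances), axis=0)

# An occluded object observed near the image corner...
depth = np.zeros((8, 8))
depth[0:2, 0:2] = 1.0
centered = recenter(depth)  # object block now sits around the image center
```

Note that in the real method the offset is not observable directly (the object center is hidden by occlusion), which is exactly why the paper learns it with an Encoder-Decoder rather than computing it from visible pixels as above.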
Pages: 4473 - 4480
Page count: 8
Related papers
50 in total
  • [21] GS-Pose: Category-Level Object Pose Estimation via Geometric and Semantic Correspondence
    Wang, Pengyuan
    Ikeda, Takuya
    Lee, Robert
    Nishiwaki, Koichi
    COMPUTER VISION - ECCV 2024, PT XXVII, 2025, 15085 : 108 - 126
  • [22] HS-Pose: Hybrid Scope Feature Extraction for Category-level Object Pose Estimation
    Zheng, Linfang
    Wang, Chen
    Sun, Yinghan
    Dasgupta, Esha
    Chen, Hua
    Leonardis, Ales
    Zhang, Wei
    Chang, Hyung Jin
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 17163 - 17173
  • [23] An efficient network for category-level 6D object pose estimation
    Sun, Shantong
    Liu, Rongke
    Sun, Shuqiao
    Yang, Xinxin
    Lu, Guangshan
    SIGNAL IMAGE AND VIDEO PROCESSING, 2021, 15 (07) : 1643 - 1651
  • [24] CatFormer: Category-Level 6D Object Pose Estimation with Transformer
    Yu, Sheng
    Zhai, Di-Hua
    Xia, Yuanqing
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 7, 2024, : 6808 - 6816
  • [25] RANSAC Optimization for Category-level 6D Object Pose Estimation
    Chen, Ying
    Kang, Guixia
    Wang, Yiping
    2020 5TH INTERNATIONAL CONFERENCE ON MECHANICAL, CONTROL AND COMPUTER ENGINEERING (ICMCCE 2020), 2020, : 50 - 56
  • [26] Category-Level Object Detection, Pose Estimation and Reconstruction from Stereo Images
    Zhang, Chuanrui
    Ling, Yonggen
    Lu, Minglei
    Qin, Minghan
    Wang, Haoqian
    COMPUTER VISION - ECCV 2024, PT XXXIV, 2025, 15092 : 332 - 349
  • [28] Robotic Grasp Detection Based on Category-Level Object Pose Estimation With Self-Supervised Learning
    Yu, Sheng
    Zhai, Di-Hua
    Xia, Yuanqing
    IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2024, 29 (01) : 625 - 635
  • [29] GeoReF: Geometric Alignment Across Shape Variation for Category-level Object Pose Refinement
    Zheng, Linfang
    Tse, Tze Ho Elden
    Wang, Chen
    Sun, Yinghan
    Chen, Hua
    Leonardis, Ales
    Zhang, Wei
    Chang, Hyung Jin
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 10693 - 10703
  • [30] UDA-COPE: Unsupervised Domain Adaptation for Category-level Object Pose Estimation
    Lee, Taeyeop
    Lee, Byeong-Uk
    Shin, Inkyu
    Choe, Jaesung
    Shin, Ukcheol
    Kweon, In So
    Yoon, Kuk-Jin
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 14871 - 14880