Median-shape Representation Learning for Category-level Object Pose Estimation in Cluttered Environments

Cited by: 1
Authors
Tatemichi, Hiroki [1 ]
Kawanishi, Yasutomo [1 ]
Deguchi, Daisuke [1 ]
Ide, Ichiro [1 ]
Amma, Ayako [2 ]
Murase, Hiroshi [1 ]
Affiliations
[1] Nagoya Univ, Nagoya, Aichi, Japan
[2] Toyota Motor Co Ltd, Toyota, Japan
Keywords
RECOGNITION;
DOI
10.1109/ICPR48806.2021.9412318
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we propose an occlusion-robust pose estimation method for an unknown object instance in a known object category from a depth image. In a cluttered environment, objects often occlude each other. For estimating the pose of an object in such a situation, a method that de-occludes the unobservable area of the object would be effective. However, there are two difficulties: occlusion causes an offset between the center of the actual object and the center of its observable area, and different instances in a category may have different shapes. To cope with these difficulties, we propose a two-stage Encoder-Decoder model that extracts features from objects whose centers are aligned to the image center. In this model, we also propose the Median-shape Reconstructor as the second stage, which absorbs shape variations within a category. By evaluating the method on both a large-scale virtual dataset and a real dataset, we confirmed that the proposed method achieves good performance on pose estimation of an occluded object from a depth image.
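The abstract describes a two-stage Encoder-Decoder pipeline: stage one de-occludes a center-aligned depth image, and stage two (the Median-shape Reconstructor) maps the result to the category's median shape so that instance-level shape variation is absorbed before pose features are read out. The sketch below is only an illustrative stand-in, not the paper's architecture: it uses untrained linear encoder/decoder weights, a hypothetical 16x16 depth patch, and an 8-D latent code purely to show how data would flow through such a two-stage model.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Flatten the depth image and project it to a latent code
    # (a linear stand-in for a convolutional encoder).
    return np.tanh(W @ x.ravel())

def decode(z, W):
    # Project the latent code back to a 16x16 depth image.
    return (W @ z).reshape(16, 16)

# Hypothetical sizes: 16x16 depth patch, 8-D latent space.
D, H = 16 * 16, 8
W_enc1, W_dec1 = 0.1 * rng.normal(size=(H, D)), 0.1 * rng.normal(size=(D, H))
W_enc2, W_dec2 = 0.1 * rng.normal(size=(H, D)), 0.1 * rng.normal(size=(D, H))

occluded_depth = rng.random((16, 16))  # stand-in observed depth image

# Stage 1: de-occlusion stage -- reconstruct the full, center-aligned object.
z1 = encode(occluded_depth, W_enc1)
deoccluded = decode(z1, W_dec1)

# Stage 2: Median-shape Reconstructor -- map the de-occluded image to the
# category's median shape, absorbing instance-level shape variation.
z2 = encode(deoccluded, W_enc2)
median_shape = decode(z2, W_dec2)

# z2 would serve as the pose feature; matching it against features of
# rendered reference poses (not shown) would yield the pose estimate.
print(z2.shape, median_shape.shape)
```

In a trained system both stages would be convolutional networks optimized on rendered category data; the point here is only the data flow: occluded input → de-occluded image → median shape, with the second latent code carrying the pose information.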
Pages
4473 - 4480 (8 pages)
Related Papers
50 records
  • [31] Fine segmentation and difference-aware shape adjustment for category-level 6DoF object pose estimation
    Liu, Chongpei
    Sun, Wei
    Liu, Jian
    Zhang, Xing
    Fan, Shimeng
    Fu, Qiang
    APPLIED INTELLIGENCE, 2023, 53 (20) : 23711 - 23728
  • [32] Bi-directional attention based RGB-D fusion for category-level object pose and shape estimation
    Tang, Kaifeng
    Xu, Chi
    Chen, Ming
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (17) : 53043 - 53063
  • [34] Adversarial imitation learning-based network for category-level 6D object pose estimation
    Sun, Shantong
    Bao, Xu
    Kaushik, Aryan
    MACHINE VISION AND APPLICATIONS, 2024, 35 (05)
  • [35] DualPoseNet: Category-level 6D Object Pose and Size Estimation Using Dual Pose Network with Refined Learning of Pose Consistency
    Lin, Jiehong
    Wei, Zewei
    Li, Zhihao
    Xu, Songcen
    Jia, Kui
    Li, Yuanqing
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 3540 - 3549
  • [36] GarmentNets: Category-Level Pose Estimation for Garments via Canonical Space Shape Completion
    Chi, Cheng
    Song, Shuran
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 3304 - 3313
  • [37] Normalized Object Coordinate Space for Category-Level 6D Object Pose and Size Estimation
    Wang, He
    Sridhar, Srinath
    Huang, Jingwei
    Valentin, Julien
    Song, Shuran
    Guibas, Leonidas J.
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 2637 - 2646
  • [39] SD-Pose: Structural Discrepancy Aware Category-Level 6D Object Pose Estimation
    Li, Guowei
    Zhu, Dongchen
    Zhang, Guanghui
    Shi, Wenjun
    Zhang, Tianyu
    Zhang, Xiaolin
    Li, Jiamao
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 5674 - 5683
  • [40] Best Next-Viewpoint Recommendation by Selecting Minimum Pose Ambiguity for Category-Level Object Pose Estimation
    Hashim, N. M. Z.
    Kawanishi, Y.
    Deguchi, D.
    Ide, I.
    Amma, A.
    Kobori, N.
    Murase, H.
    Seimitsu Kogaku Kaishi/Journal of the Japan Society for Precision Engineering, 2021, 87 (05): : 440 - 446