PERCH: Perception via Search for Multi-Object Recognition and Localization

Cited by: 0
Authors
Narayanan, Venkatraman [1 ]
Likhachev, Maxim [1 ]
Affiliations
[1] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
Keywords
OBJECT RECOGNITION; 3D; MODELS
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In many robotic domains such as flexible automated manufacturing or personal assistance, a fundamental perception task is that of identifying and localizing objects whose 3D models are known. Canonical approaches to this problem include discriminative methods that find correspondences between feature descriptors computed over the model and observed data. While these methods have been employed successfully, they can be unreliable when the feature descriptors fail to capture variations in observed data; a classic cause is occlusion. As a step towards deliberative reasoning, we present PERCH: PErception via SeaRCH, an algorithm that seeks to find the best explanation of the observed sensor data by hypothesizing possible scenes in a generative fashion. Our contributions are: i) formulating the multi-object recognition and localization task as an optimization problem over the space of hypothesized scenes, ii) exploiting structure in the optimization to cast it as a combinatorial search problem on what we call the Monotone Scene Generation Tree, and iii) leveraging parallelization and recent advances in multi-heuristic search to make combinatorial search tractable. We prove that our system is guaranteed to produce the best explanation of the scene under the chosen cost function, and we validate our claims on real-world RGB-D test data. Our experimental results show that we can identify and localize objects under heavy occlusion, cases where state-of-the-art methods struggle.
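To make the abstract's render-and-verify idea concrete, below is a minimal, self-contained sketch of a PERCH-style search over hypothesized scenes. It is an illustration under stated assumptions, not the authors' implementation: the toy depth renderer (render_depth), the cost definition (explanation_cost), and the candidate pose grids are hypothetical stand-ins, and the parallelization and multi-heuristic search that the paper relies on for tractability are omitted. Each search node is a partial scene; expanding a node commits one more object to a candidate pose, loosely mirroring the Monotone Scene Generation Tree, and the complete scene whose rendering best explains the observed depth image wins.

# Minimal sketch of a PERCH-style generative "hypothesize, render, and compare" search.
# Everything here (the toy depth renderer, cost definition, and candidate pose grids)
# is an illustrative assumption, not the authors' implementation.
import heapq
import numpy as np

def render_depth(scene, shape=(48, 64)):
    """Toy renderer: each hypothesized object is an axis-aligned rectangle at a fixed
    depth. A real system would render full 3D object models into a depth image."""
    depth = np.full(shape, np.inf)
    for (x, y, w, h, z) in scene:
        patch = depth[y:y + h, x:x + w]
        np.minimum(patch, z, out=patch)  # nearer surfaces occlude farther ones
    return depth

def explanation_cost(observed, rendered):
    """Counts pixels where the hypothesis disagrees with the observation:
    mismatched depths, observed pixels left unexplained, and rendered pixels
    that have no observed counterpart."""
    obs_valid = np.isfinite(observed)
    ren_valid = np.isfinite(rendered)
    mismatch = obs_valid & ren_valid & (np.abs(observed - rendered) > 0.05)
    unexplained = obs_valid & ~ren_valid
    hallucinated = ren_valid & ~obs_valid
    return int(mismatch.sum() + unexplained.sum() + hallucinated.sum())

def perch_style_search(observed, candidate_poses_per_object):
    """Best-first search over partial scenes: each tree edge fixes one object's pose
    hypothesis. This toy version enumerates all combinations and returns the
    lowest-cost complete scene."""
    n_objects = len(candidate_poses_per_object)
    start = ()  # empty scene hypothesis
    frontier = [(explanation_cost(observed, render_depth(start)), start)]
    best = (np.inf, start)
    while frontier:
        cost, scene = heapq.heappop(frontier)
        if len(scene) == n_objects:
            if cost < best[0]:
                best = (cost, scene)
            continue
        for pose in candidate_poses_per_object[len(scene)]:
            child = scene + (pose,)
            heapq.heappush(frontier, (explanation_cost(observed, render_depth(child)), child))
    return best

if __name__ == "__main__":
    # Ground-truth scene with two boxes, one partially occluding the other.
    truth = ((10, 10, 20, 15, 1.0), (22, 14, 20, 15, 1.5))
    observed = render_depth(truth)
    # Small, hypothetical candidate pose sets for each known object model.
    candidates = [
        [(8, 10, 20, 15, 1.0), (10, 10, 20, 15, 1.0), (12, 12, 20, 15, 1.0)],
        [(20, 14, 20, 15, 1.5), (22, 14, 20, 15, 1.5), (24, 16, 20, 15, 1.5)],
    ]
    cost, scene = perch_style_search(observed, candidates)
    print("best cost:", cost, "recovered poses:", scene)

This toy version simply enumerates every pose combination; the abstract's point is that exploiting the structure of the Monotone Scene Generation Tree, together with parallelization and multi-heuristic search, is what makes such a search tractable at realistic scale while preserving the optimality guarantee under the chosen cost function.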
Pages: 5052-5059
Page count: 8