Object segmentation in cluttered and visually complex environments

Cited by: 0
Authors
Dmitri Ignakov
Guangjun Liu
Galina Okouneva
Institutions
[1] Ryerson University
[2] Magna Electronics
Source
Autonomous Robots | 2014 / Volume 37
Keywords
Segmentation; Conditional Random Fields; Mobile robots; Object localization; Service robotics; Computer vision
DOI
Not available
Abstract
Object segmentation is essential for systems that acquire object models online for robotic grasping. However, it remains a major technical challenge in visually complex and uncontrolled environments. Segmentation algorithms that rely on image features alone can perform poorly under certain lighting conditions or when the object and the background have a similar appearance. Similarly, segmentation algorithms that rely exclusively on three-dimensional (3D) geometric data are derived under strong assumptions about the geometry of the scene. A promising approach is therefore to combine appearance and 3D features. In this paper, an object segmentation algorithm is presented that combines multiple appearance and geometric cues. The segmentation is formulated as a binary labeling problem, and the Conditional Random Fields (CRF) framework is used to model the conditional probability of the labeling given the appearance and geometric data. The maximum a posteriori estimate of the labeling is obtained by minimizing the energy function corresponding to the CRF using graph cuts. A simple and efficient method for initializing the proposed algorithm is also presented. Experimental results demonstrate the effectiveness of the proposed algorithm.
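As a minimal sketch of the kind of binary CRF labeling the abstract describes (not the authors' implementation), the snippet below minimizes a submodular binary energy, unary terms plus a Potts smoothness prior, with an s-t min cut. The unary costs here are simple intensity-based placeholders standing in for the paper's fused appearance and geometric cues, and networkx's generic max-flow/min-cut routine stands in for a dedicated graph-cut solver such as Boykov-Kolmogorov.

```python
# Binary labeling x_i in {0 (object), 1 (background)} by minimizing
#   E(x) = sum_i U_i(x_i) + smoothness * sum_{i~j} [x_i != x_j]
# via the standard s-t min-cut construction for submodular binary energies.
import numpy as np
import networkx as nx

def segment_binary_crf(unary_obj, unary_bg, smoothness=1.0):
    h, w = unary_obj.shape
    g = nx.DiGraph()

    def nid(r, c):
        return r * w + c

    for r in range(h):
        for c in range(w):
            i = nid(r, c)
            # Cutting s->i pays the cost of label 1 (background);
            # cutting i->t pays the cost of label 0 (object).
            g.add_edge("s", i, capacity=float(unary_bg[r, c]))
            g.add_edge(i, "t", capacity=float(unary_obj[r, c]))
            # 4-connected Potts pairwise term (paid once if labels differ).
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    j = nid(rr, cc)
                    g.add_edge(i, j, capacity=smoothness)
                    g.add_edge(j, i, capacity=smoothness)

    _, (source_side, _) = nx.minimum_cut(g, "s", "t")
    labels = np.ones((h, w), dtype=int)          # default: background (1)
    for node in source_side:
        if node != "s":
            labels[node // w, node % w] = 0      # source side -> object (0)
    return labels

if __name__ == "__main__":
    # Toy 6x6 "image": a bright square on a dark background, with noise.
    img = np.zeros((6, 6))
    img[2:5, 2:5] = 1.0
    img += 0.1 * np.random.default_rng(0).standard_normal(img.shape)
    img = np.clip(img, 0.0, 1.0)
    # Placeholder unaries: calling a bright pixel "object" is cheap, and
    # vice versa.  In the paper these would come from the combined
    # appearance/geometric cues rather than raw intensity.
    print(segment_binary_crf(unary_obj=1.0 - img, unary_bg=img, smoothness=0.3))
```

Raising the smoothness weight trades fidelity to the unary cues for spatial coherence of the labeling, which is the usual knob in this kind of pairwise CRF model.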
Pages: 111 - 135
Number of pages: 24
Related Papers
50 records in total
  • [41] A video object segmentation-based fish individual recognition method for underwater complex environments
    Zheng, Tao
    Wu, Junfeng
    Kong, Han
    Zhao, Haiyan
    Qu, Boyu
    Liu, Liang
    Yu, Hong
    Zhou, Chunyu
    ECOLOGICAL INFORMATICS, 2024, 82
  • [42] A Benchmark for Multi-Robot Planning in Realistic, Complex and Cluttered Environments
    Schaefer, Simon
Palmieri, Luigi
    Heuer, Lukas
    Dillmann, Ruediger
    Koenig, Sven
    Kleiner, Alexander
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), 2023, : 9231 - 9237
  • [43] Three-dimensional model-based object recognition and segmentation in cluttered scenes
    Mian, Ajmal S.
    Bennamoun, Mohammed
    Owens, Robyn
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2006, 28 (10) : 1584 - 1601
  • [44] SupeRGB-D: Zero-Shot Instance Segmentation in Cluttered Indoor Environments
Oernek, Evin Pinar
    Krishnan, Aravindhan K.
    Gayaka, Shreekant
    Kuo, Cheng-Hao
    Sen, Arnie
    Navab, Nassir
    Tombari, Federico
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (06) : 3709 - 3716
  • [45] Remote Telemanipulation with Adapting Viewpoints in Visually Complex Environments
    Rakita, Daniel
    Mutlu, Bilge
    Gleicher, Michael
    ROBOTICS: SCIENCE AND SYSTEMS XV, 2019,
  • [46] Segmentation of moving object in complex environment
    Yong, Y
    Wang, JR
    Zhang, QH
    Electronic Imaging and Multimedia Technology IV, 2005, 5637 : 195 - 202
  • [47] UAV Framework for Autonomous Onboard Navigation and People/Object Detection in Cluttered Indoor Environments
    Sandino, Juan
    Vanegas, Fernando
    Maire, Frederic
    Caccetta, Peter
    Sanderson, Conrad
    Gonzalez, Felipe
    REMOTE SENSING, 2020, 12 (20) : 1 - 31
  • [48] Contact-Aware Non-prehensile Manipulation for Object Retrieval in Cluttered Environments
    Jiang, Yongpeng
    Jia, Yongyi
    Li, Xiang
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 10604 - 10611
  • [49] Movement in cluttered virtual environments
    Ruddle, RA
    Jones, DM
    PRESENCE-TELEOPERATORS AND VIRTUAL ENVIRONMENTS, 2001, 10 (05) : 511 - 524
  • [50] Integrated learning of saliency, complex features, and object detectors from cluttered scenes
    Gao, DS
    Vasconcelos, N
    2005 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOL 2, PROCEEDINGS, 2005, : 282 - 287