A Confidence Ranked Co-Occurrence Approach for Accurate Object Recognition in Highly Complex Scenes

Cited by: 3
Authors
Angin, Pelin [1 ]
Bhargava, Bharat [1 ,2 ]
Institutions
[1] Purdue Univ, Dept Comp Sci, W Lafayette, IN 47907 USA
[2] Purdue Univ, Dept Comp Sci, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
Source
JOURNAL OF INTERNET TECHNOLOGY | 2013, Vol. 14, No. 1
Keywords
Computer vision; Object recognition; Co-occurrence; Confidence; Real-time
DOI
10.6138/JIT.2013.14.1.02
CLC Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Real-time and accurate classification of objects in highly complex scenes is an important problem for the Computer Vision community due to its many application areas. While boosting methods with the sliding window approach provide fast processing and accurate results for particular object categories, they cannot achieve the desired performance for more involved categories of objects. Recent research in Computer Vision has shown that exploiting object context through relational dependencies between object categories leads to improved accuracy in object recognition. While efforts in collective classification in images have resulted in complex algorithms suitable for offline processing, the real-time nature of the problem requires the use of simpler algorithms. In this paper, we propose a simple iterative algorithm for collective classification of all objects in an image, exploiting the global co-occurrence frequencies of object categories. The proposed algorithm uses multiple detectors trained using Gentle Boosting, where the category of the most confident estimate is propagated through the co-occurrence relations to determine the categories of the remaining unclassified objects. Experiments on a real-world dataset demonstrate the superiority of our approach over using Gentle Boosting alone as well as classic collective classification approaches modeling the full joint distribution for each object in the scene.
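The abstract describes an iterative procedure: multiple boosted detectors score each object, the single most confident estimate is committed first, and its category is propagated through global co-occurrence frequencies to re-rank the remaining unclassified objects. The paper itself is not reproduced here, so the following is only a minimal greedy sketch of that idea under assumed inputs: a per-object score matrix (as would come from Gentle Boosting detectors) and a normalized category co-occurrence matrix. The function name and the multiplicative reweighting rule are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def confidence_ranked_labels(scores, cooccur):
    """Greedy confidence-ranked collective classification (illustrative sketch).

    scores  : (n_objects, n_categories) detector confidences per object.
    cooccur : (n_categories, n_categories) normalized co-occurrence frequencies.
    Returns an integer label per object.
    """
    scores = scores.astype(float).copy()
    n_objects = scores.shape[0]
    labels = np.full(n_objects, -1, dtype=int)
    unassigned = set(range(n_objects))
    while unassigned:
        # Commit the most confident remaining estimate first.
        best = max(unassigned, key=lambda i: scores[i].max())
        cat = int(scores[best].argmax())
        labels[best] = cat
        unassigned.remove(best)
        # Propagate: reweight remaining objects' scores by how often
        # their candidate categories co-occur with the assigned one.
        for i in unassigned:
            scores[i] *= cooccur[cat]
    return labels
```

For example, an ambiguous second detection (equal scores for both categories) is pulled toward the category that co-occurs most often with the confidently labeled first object.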
Pages: 13-19 (7 pages)
Related Papers (items 21-30 of 50)
  • [21] Object Classification Using Heterogeneous Co-occurrence Features
    Ito, Satoshi
    Kubota, Susumu
    COMPUTER VISION-ECCV 2010, PT II, 2010, 6312 : 209 - 222
  • [22] CLITIC OBJECT SEQUENCE AND CO-OCCURRENCE RESTRICTIONS IN FRENCH
    BURSTON, JL
    LINGUISTIC ANALYSIS, 1983, 11 (03): : 247 - 275
  • [23] Discriminative feature co-occurrence selection for object detection
    Mita, Takeshi
    Kaneko, Toshimitsu
    Stenger, Bjorn
    Hori, Osamu
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2008, 30 (07) : 1257 - 1269
  • [24] Visual Co-occurrence Network: Using Context for Large-Scale Object Recognition in Retail
    Advani, Siddharth
    Smith, Brigid
    Tanabe, Yasuki
    Irick, Kevin
    Cotter, Matthew
    Sampson, Jack
    Narayanan, Vijaykrishnan
    2015 13TH IEEE SYMPOSIUM ON EMBEDDED SYSTEMS FOR REAL-TIME MULTIMEDIA, 2015, : 103 - 112
  • [25] Semantic-guided modeling of spatial relation and object co-occurrence for indoor scene recognition
    Song, Chuanxin
    Wu, Hanbo
    Ma, Xin
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 270
  • [26] Robust object recognition using a color co-occurrence histogram and the spatial relations of image patches
    Bang, Heebeom
    Lee, Sanghoon
    Yu, Dongjin
    Suh, Il Hong
    ARTIFICIAL LIFE AND ROBOTICS, 2009, 13 (02) : 488 - 492
  • [27] Contextual co-occurrence information for object representation and categorization
    Sheikhbahaei, Soheila
    Sadeghi, Zahra
    International Journal of Database Theory and Application, 2015, 8 (01): : 95 - 104
  • [28] Object categorization using co-occurrence, location and appearance
    Galleguillos, Carolina
    Rabinovich, Andrew
    Belongie, Serge
    2008 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-12, 2008, : 3552 - 3559
  • [29] Co-occurrence of Intensity and Gradient Features for Object Detection
    Hidaka, Akinori
    Kurita, Takio
    NEURAL INFORMATION PROCESSING, PT 2, PROCEEDINGS, 2009, 5864 : 38 - +
  • [30] Object Classification Using Heterogeneous Co-occurrence Features
    Ito, Satoshi
    Kubota, Susumu
    COMPUTER VISION-ECCV 2010, PT V, 2010, 6315 : 701 - 714