Discriminative cluster refinement: Improving object category recognition given limited training data

Cited by: 0
Authors
Yang, Liu [1 ]
Jin, Rong [1 ]
Pantofaru, Caroline [2 ]
Sukthankar, Rahul [2 ,3 ]
Affiliations
[1] Michigan State Univ, Dept Comp Sci & Engn, E Lansing, MI 48824 USA
[2] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
[3] Intel Res, Pittsburgh, PA 15213 USA
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TP31 [Computer Software]
Discipline codes: 081202; 0835
Abstract
A popular approach to problems in image classification is to represent the image as a bag of visual words and then employ a classifier to categorize the image. Unfortunately, a significant shortcoming of this approach is that the clustering and classification are disconnected. Since the clustering into visual words is unsupervised, the representation does not necessarily capture the aspects of the data that are most useful for classification. More seriously, the semantic relationship between clusters is lost, causing the overall classification performance to suffer. We introduce "discriminative cluster refinement" (DCR), a method that explicitly models the pairwise relationships between different visual words by exploiting their co-occurrence information. The assigned class labels are used to identify the co-occurrence patterns that are most informative for object classification. DCR employs a maximum-margin approach to generate an optimal kernel matrix for classification. One important benefit of DCR is that it integrates smoothly into existing bag-of-words information retrieval systems by employing the set of visual words generated by any clustering method. While DCR could improve a broad class of information retrieval systems, this paper focuses on object category recognition. We present a direct comparison with a state-of-the-art method on the PASCAL 2006 database and show that cluster refinement results in a significant improvement in classification accuracy given a small number of training examples.
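The abstract's pipeline can be illustrated with a minimal sketch: cluster local descriptors into visual words, build per-image histograms, and refine the baseline linear kernel with a matrix M that encodes pairwise word relations. Note that M below is just a normalized co-occurrence matrix for illustration; the paper's DCR instead learns M discriminatively from class labels via a maximum-margin formulation, and all function names and data here are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    # Plain k-means: cluster local descriptors into k visual words.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def bow_histogram(descriptors, centers):
    # Assign each descriptor to its nearest visual word and count.
    labels = np.argmin(((descriptors[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    h = np.bincount(labels, minlength=len(centers)).astype(float)
    return h / max(h.sum(), 1.0)

# Synthetic "images": each is a set of 2-D local descriptors.
images = [rng.normal(size=(30, 2)) + rng.normal(scale=3.0, size=2) for _ in range(8)]
centers = kmeans(np.vstack(images), k=5)
H = np.array([bow_histogram(d, centers) for d in images])  # 8 x 5 histograms

# Baseline bag-of-words kernel: K = H H^T (words treated as unrelated).
K_base = H @ H.T

# "Refined" kernel K = H M H^T, where M models pairwise word relations.
# Here M is a normalized word co-occurrence matrix (an assumption);
# DCR would optimize M using the class labels.
M = H.T @ H
M /= M.max()
K_refined = H @ M @ H.T
```

Because M is positive semidefinite, the refined kernel remains a valid kernel matrix, so it can drop into any existing kernel classifier (e.g. an SVM) in place of the baseline one.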
Pages: 2303+ (2 pages)
Related papers (50 total)
  • [31] FVGNN: A Novel GNN to Finger Vein Recognition from Limited Training Data
    Li, Jinghui
    Fang, Peiyu
    PROCEEDINGS OF 2019 IEEE 8TH JOINT INTERNATIONAL INFORMATION TECHNOLOGY AND ARTIFICIAL INTELLIGENCE CONFERENCE (ITAIC 2019), 2019, : 144 - 148
  • [32] Multi-stream CNN for facial expression recognition in limited training data
    Abbasi Aghamaleki, Javad
    Ashkani Chenarlogh, Vahid
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 : 22861 - 22882
  • [33] Improving Speech Recognition with Augmented Synthesized Data and Conditional Model Training
    Xue, Shaofei
    Tang, Jian
    Liu, Yazhu
    2022 13TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP), 2022, : 443 - 447
  • [34] Improving Object Detector Training on Synthetic Data by Starting With a Strong Baseline Methodology
    Ruis, Frank A.
    Liezenga, Alma M.
    Heslinga, Friso G.
    Ballan, Luca
    Eker, Thijs A.
    den Hollander, Richard J. M.
    van Leeuwen, Martin C.
    Dijk, Judith
    Huizinga, Wyke
    SYNTHETIC DATA FOR ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING: TOOLS, TECHNIQUES, AND APPLICATIONS II, 2024, 13035
  • [35] Training Convolutional Neural Networks with Synthesized Data for Object Recognition in Industrial Manufacturing
    Li, Jason
    Gotvall, Per-Lage
    Provost, Julien
    Akesson, Knut
    2019 24TH IEEE INTERNATIONAL CONFERENCE ON EMERGING TECHNOLOGIES AND FACTORY AUTOMATION (ETFA), 2019, : 1544 - 1547
  • [36] Saliency for fine-grained object recognition in domains with scarce training data
    Figueroa Flores, Carola
    Gonzalez-Garcia, Abel
    van de Weijer, Joost
    Raducanu, Bogdan
    PATTERN RECOGNITION, 2019, 94 : 62 - 73
  • [37] ConTraNet: A hybrid network for improving the classification of EEG and EMG signals with limited training data
    Ali, Omair
    Saif-ur-Rehman, Muhammad
    Glasmachers, Tobias
    Iossifidis, Ioannis
    Klaes, Christian
    COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 168
  • [38] Reusing training data with generative/discriminative hybrid model for practical acceleration-based activity recognition
    Kong, Quan
    Maekawa, Takuya
    COMPUTING, 2014, 96 (09) : 875 - 895
  • [40] MdpCaps-Csl for SAR Image Target Recognition With Limited Labeled Training Data
    Hou, Yuchao
    Xu, Ting
    Hu, Hongping
    Wang, Peng
    Xue, Hongxin
    Bai, Yanping
    IEEE ACCESS, 2020, 8 : 176217 - 176231