Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks

Cited by: 1995
Authors
Oquab, Maxime [1]
Bottou, Leon [2]
Laptev, Ivan [1]
Sivic, Josef [1]
Affiliations
[1] INRIA, Paris, France
[2] MSR, New York, NY, USA
DOI
10.1109/CVPR.2014.222
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Convolutional neural networks (CNNs) have recently shown outstanding image classification performance in the large-scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations, as opposed to the hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with a limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representations for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks between the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on the PASCAL VOC 2007 and 2012 datasets. We also show promising results for object and action localization.
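The transfer scheme the abstract describes — keeping the pretrained layers fixed and training only a small new adaptation layer on the target task — can be sketched as a toy in NumPy. This is an illustration of the general idea, not the paper's implementation: a fixed random projection stands in for the frozen ImageNet-trained convolutional layers, and the data and labels are synthetic.

```python
import numpy as np

# Toy sketch of the transfer setup: a frozen "pretrained" feature
# extractor followed by a small trainable adaptation layer.
rng = np.random.default_rng(0)

# Stand-in for frozen ImageNet-trained layers: a fixed random projection
# (scaled so feature magnitudes stay near 1). Real use loads real weights.
W_frozen = rng.normal(size=(512, 64)) / np.sqrt(512)

def midlevel_features(x):
    """Frozen forward pass: input -> mid-level representation (ReLU)."""
    return np.maximum(x @ W_frozen, 0.0)  # W_frozen is never updated

# Synthetic target-domain data, standing in for the smaller new dataset.
X = rng.normal(size=(200, 512))
y = (X[:, 0] > 0).astype(float)  # arbitrary binary labels

# Only the new adaptation layer (w, b) is trained: plain logistic
# regression on the frozen mid-level features.
F = midlevel_features(X)
w, b, lr = np.zeros(64), 0.0, 0.2
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid
    w -= lr * F.T @ (p - y) / len(y)        # gradient of log loss
    b -= lr * np.mean(p - y)

train_acc = np.mean(((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == y)
```

In the paper the frozen part is a real CNN trained on ImageNet and the adaptation layers are fine-tuned on PASCAL VOC; the point illustrated here is only that the millions of pretrained parameters stay fixed while a handful of new ones are estimated from the small target dataset.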
Pages: 1717-1724 (8 pages)
Related Papers (50 records)
  • [21] Mining Mid-level Features for Image Classification
    Fernando, Basura
    Fromont, Elisa
    Tuytelaars, Tinne
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2014, 108 (03) : 186 - 203
  • [23] Disease Prediction using Synthetic Image Representations of Metagenomic data and Convolutional Neural Networks
    Thanh Hai Nguyen
    Prifti, Edi
    Sokolovska, Nataliya
    Zucker, Jean-Daniel
    2019 IEEE - RIVF INTERNATIONAL CONFERENCE ON COMPUTING AND COMMUNICATION TECHNOLOGIES (RIVF), 2019, : 231 - 236
  • [24] Evaluation of Image Representations for Player Detection in Field Sports Using Convolutional Neural Networks
    Sah, Melike
    Direkoglu, Cem
    13TH INTERNATIONAL CONFERENCE ON THEORY AND APPLICATION OF FUZZY SYSTEMS AND SOFT COMPUTING - ICAFS-2018, 2019, 896 : 107 - 115
  • [25] Indoor-Outdoor Image Classification using Mid-Level Cues
    Liu, Yang
    Li, Xue ing
    2013 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA), 2013,
  • [26] Composite Kernel of Mutual Learning on Mid-Level Features for Hyperspectral Image Classification
    Sima, Haifeng
    Wang, Jing
    Guo, Ping
    Sun, Junding
    Liu, Hongmin
    Xu, Mingliang
    Zou, Youfeng
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (11) : 12217 - 12230
  • [27] Scene analysis by mid-level attribute learning using 2D LSTM networks and an application to web-image tagging
    Byeon, Wonmin
    Liwicki, Marcus
    Breuel, Thomas M.
    PATTERN RECOGNITION LETTERS, 2015, 63 : 23 - 29
  • [28] Mid-level image representations for real-time heart view plane classification of echocardiograms
    Penatti, Otavio A. B.
    Werneck, Rafael de O.
    de Almeida, Waldir R.
    Stein, Bernardo V.
    Pazinato, Daniel V.
    Mendes Junior, Pedro R.
    Torres, Ricardo da S.
    Rocha, Anderson
    COMPUTERS IN BIOLOGY AND MEDICINE, 2015, 66 : 66 - 81
  • [29] Learning Deep Graph Representations via Convolutional Neural Networks
    Ye, Wei
    Askarisichani, Omid
    Jones, Alex
    Singh, Ambuj
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2022, 34 (05) : 2268 - 2279
  • [30] SuperPixel based mid-level image description for image recognition
    Tasli, H. Emrah
    Sicre, Ronan
    Gevers, Theo
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2015, 33 : 301 - 308