Multi-label Image Ranking based on Deep Convolutional Features

Cited: 0
Authors
Song, Guanghui [1 ,2 ]
Jin, Xiaogang [1 ]
Chen, Genlang [2 ]
Nie, Yan [3 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci, Hangzhou, Zhejiang, Peoples R China
[2] Zhejiang Univ, Ningbo Inst Technol, Ningbo, Zhejiang, Peoples R China
[3] Ningbo Univ, Coll Sci & Technol, Ningbo, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
feature learning; deep convolutional neural network; multi-label ranking
DOI
Not available
CLC Number
T [Industrial Technology]
Discipline Code
08
Abstract
Multi-label image ranking has many important real-world applications, and it involves two core issues: the image feature extraction approach and the multi-label ranking algorithm. Existing works have mainly focused on improving multi-label ranking algorithms built on conventional visual features. Recently, image features extracted from deep convolutional neural networks have achieved impressive performance on a variety of vision tasks, and using these deep features as image representations has gained increasing attention for the multi-label ranking problem. In this study, we evaluate the performance of deep features using two baseline multi-label ranking algorithms. First, a deep convolutional neural network model pre-trained on ImageNet is fine-tuned on the target dataset. Second, global deep features of the raw image are extracted from the fine-tuned model and serve as the input to the ranking algorithms. Finally, experiments on the Tasmania Coral Point Count dataset demonstrate that the deep features have greater expressive ability than conventional visual features and can effectively improve multi-label ranking performance.
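The abstract reports multi-label ranking performance without naming its evaluation metrics. One standard metric for this setting (our assumption, not stated in the abstract) is the ranking loss: the fraction of (relevant, irrelevant) label pairs that a score vector orders incorrectly. A minimal sketch:

```python
def ranking_loss(scores, relevant):
    """Fraction of (relevant, irrelevant) label pairs ordered
    incorrectly by the score vector (lower is better; ties count
    as errors). `relevant` is a 0/1 indicator per label."""
    rel = [i for i, r in enumerate(relevant) if r]
    irr = [i for i, r in enumerate(relevant) if not r]
    if not rel or not irr:
        return 0.0  # metric undefined; conventionally 0
    bad = sum(1 for i in rel for j in irr if scores[i] <= scores[j])
    return bad / (len(rel) * len(irr))

# Example: 4 labels, labels 0 and 2 are relevant.
# Label 2 (score 0.3) is ranked below irrelevant label 1 (score 0.8),
# so 1 of the 4 relevant/irrelevant pairs is mis-ordered.
print(ranking_loss([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))  # → 0.25
```

In practice the scores would come from a ranking algorithm applied to the deep features described above; `scikit-learn` provides an equivalent implementation as `sklearn.metrics.label_ranking_loss`.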
Pages: 324-329 (6 pages)
Related Papers
50 records total
  • [31] An Improved Multi-label Classification Based on Label Ranking and Delicate Boundary SVM
    Chen, Benhui
    Gu, Weifeng
    Hu, Jinglu
    2010 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS IJCNN 2010, 2010,
  • [32] Extreme Multi-label Learning with Label Features for Warm-start Tagging, Ranking & Recommendation
    Prabhu, Yashoteja
    Kag, Anil
    Gopinath, Shilpa
    Dahiya, Kunal
    Harsola, Shrutendra
    Agrawal, Rahul
    Varma, Manik
    WSDM'18: PROCEEDINGS OF THE ELEVENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2018, : 441 - 449
  • [33] Deep Multi-Instance Multi-Label Learning for Image Annotation
    Guo, Hai-Feng
    Han, Lixin
    Su, Shoubao
    Sun, Zhou-Bao
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2018, 32 (03)
  • [34] Deep multilevel similarity hashing with fine-grained features for multi-label image retrieval
    Qin, Qibing
    Huang, Lei
    Wei, Zhiqiang
    NEUROCOMPUTING, 2020, 409 : 46 - 59
  • [35] Deep Multi-Similarity Hashing for Multi-label Image Retrieval
    Li, Tong
    Gao, Sheng
    Xu, Yajing
    CIKM'17: PROCEEDINGS OF THE 2017 ACM CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, 2017, : 2159 - 2162
  • [36] Multi-scale and Discriminative Part Detectors Based Features for Multi-label Image Classification
    Cheng, Gong
    Gao, Decheng
    Liu, Yang
    Han, Junwei
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 649 - 655
  • [37] Image multi-label learning algorithm based on label correlation
    Huang, Mengyue
    Zhao, Ping
    2021 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS AND COMPUTER ENGINEERING (ICCECE), 2021, : 606 - 609
  • [38] Subject Features and Hash Codes for Multi-label Image Retrieval
    Xiong, Changzhen
    Shan, Yanmei
    PROCEEDINGS OF 2018 IEEE 7TH DATA DRIVEN CONTROL AND LEARNING SYSTEMS CONFERENCE (DDCLS), 2018, : 808 - 812
  • [39] DEEP PAIRWISE RANKING WITH MULTI-LABEL INFORMATION FOR CROSS-MODAL RETRIEVAL
    Jian, Yangwo
    Xiao, Jing
    Cao, Yang
    Khan, Asad
    Zhu, Jia
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 1810 - 1815
  • [40] XRR: Extreme multi-label text classification with candidate retrieving and deep ranking
    Xiong, Jie
    Yu, Li
    Niu, Xi
    Leng, Youfang
    INFORMATION SCIENCES, 2023, 622 : 115 - 132