Learning multi-task local metrics for image annotation

Cited: 0
Authors
Xing Xu
Atsushi Shimada
Hajime Nagahara
Rin-ichiro Taniguchi
Affiliations
[1] Kyushu University, Department of Advanced Information and Technology
Keywords
Image annotation; Label prediction; Metric learning; Local metric; Multi-task learning;
Abstract
The goal of image annotation is to automatically assign a set of textual labels to an image to describe its visual contents. Recently, with the rapid increase in the number of web images, nearest neighbor (NN) based methods have become more attractive and have shown exciting results for image annotation. One of the key challenges of these methods is to define an appropriate similarity measure between images for neighbor selection. Several distance metric learning (DML) algorithms derived from traditional image classification problems have been applied to annotation tasks. However, a fundamental limitation of applying DML to image annotation is that it learns a single global distance metric over the entire image collection and measures the distance between image pairs at the image level. For multi-label annotation problems, it may be more reasonable to measure the similarity of image pairs at the label level. In this paper, we develop a novel label prediction scheme utilizing multiple label-specific local metrics for label-level similarity measure, and propose two different local metric learning methods in a multi-task learning (MTL) framework. Extensive experimental results on two challenging annotation datasets demonstrate that 1) utilizing multiple local distance metrics to learn label-level distances is superior to using a single global metric in label prediction, and 2) the proposed methods using the MTL framework to learn multiple local metrics simultaneously can model the commonalities of labels, thereby facilitating label prediction and achieving state-of-the-art annotation performance.
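The label-level prediction scheme described in the abstract can be sketched as follows. This is a minimal illustration only: the Mahalanobis form of each local metric, the Gaussian vote weighting, and all function and parameter names (`label_scores`, `k`, `sigma`) are assumptions for the sketch, not the paper's actual formulation.

```python
import numpy as np

def label_scores(x, X_train, Y_train, metrics, k=5, sigma=1.0):
    """Score each label for a query image using label-specific local
    metrics: one positive semi-definite matrix M_l per label, so that
    neighbors are selected per label rather than once globally.

    x        : (d,) query feature vector
    X_train  : (n, d) training feature matrix
    Y_train  : (n, L) binary label matrix
    metrics  : list of L (d, d) PSD matrices (illustrative assumption)
    """
    n, L = Y_train.shape
    scores = np.zeros(L)
    diffs = X_train - x                      # (n, d) differences to query
    for l in range(L):
        # label-level squared Mahalanobis distance under local metric M_l
        d2 = np.einsum('nd,de,ne->n', diffs, metrics[l], diffs)
        nn = np.argsort(d2)[:k]              # k nearest neighbors under M_l
        w = np.exp(-d2[nn] / (2 * sigma**2)) # assumed Gaussian vote weights
        scores[l] = np.dot(w, Y_train[nn, l]) / (w.sum() + 1e-12)
    return scores
```

With a single global metric, the same neighbor set would vote on every label; here each label can draw on a different neighborhood, which is the label-level similarity idea the paper argues for.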
Pages: 2203-2231
Page count: 28
Related papers
50 in total
  • [41] An overview of multi-task learning
    Zhang, Yu
    Yang, Qiang
    National Science Review, 2018, 5 (1): 30-43
  • [42] Boosted multi-task learning
    Chapelle, Olivier
    Shivaswamy, Pannagadatta
    Vadrevu, Srinivas
    Weinberger, Kilian
    Zhang, Ya
    Tseng, Belle
    Machine Learning, 2011, 85 (1-2): 149-173
  • [43] Distributed Multi-Task Learning
    Wang, Jialei
    Kolar, Mladen
    Srebro, Nathan
    Artificial Intelligence and Statistics (AISTATS), 2016, 51: 751-760
  • [44] Parallel Multi-Task Learning
    Zhang, Yu
    2015 IEEE International Conference on Data Mining (ICDM), 2015: 629-638
  • [46] Local Rademacher Complexity-based Learning Guarantees for Multi-Task Learning
    Yousefi, Niloofar
    Lei, Yunwen
    Kloft, Marius
    Mollaghasemi, Mansooreh
    Anagnostopoulos, Georgios C.
    Journal of Machine Learning Research, 2018, 19
  • [47] Learning Sparse Task Relations in Multi-Task Learning
    Zhang, Yu
    Yang, Qiang
    Thirty-First AAAI Conference on Artificial Intelligence, 2017: 2914-2920
  • [48] A Survey on Multi-Task Learning
    Zhang, Yu
    Yang, Qiang
    IEEE Transactions on Knowledge and Data Engineering, 2022, 34 (12): 5586-5609
  • [49] Survey of Multi-Task Learning
    Zhang, Y.
    Liu, J.-W.
    Zuo, X.
    1600, Science Press (43): 1340-1378
  • [50] Double-layer annotation of traditional costume images based on multi-task learning
    Zhao, Hai-Ying
    Zhou, Wei
    Hou, Xiao-Gang
    Zhang, Xiao-Li
    Journal of Jilin University (Engineering and Technology Edition), 2021, 51 (1): 293-302