RankDNN: Learning to Rank for Few-Shot Learning

Cited by: 0
Authors
Guo, Qianyu [1 ,2 ]
Gong, Haotong [1]
Wei, Xujun [1 ,3 ]
Fu, Yanwei [2 ]
Yu, Yizhou [4 ]
Zhang, Wenqiang [2 ,3 ]
Ge, Weifeng [1 ,2 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Nebula AI Grp, Shanghai, Peoples R China
[2] Shanghai Key Lab Intelligent Informat Proc, Shanghai, Peoples R China
[3] Fudan Univ, Acad Engn & Technol, Shanghai, Peoples R China
[4] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
KRONECKER PRODUCT;
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
This paper introduces a new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking relation classification. Compared to image classification, ranking relation classification is sample-efficient and domain-agnostic. In addition, it provides a new perspective on few-shot learning and is complementary to state-of-the-art methods. The core component of our deep neural network is a simple MLP, which takes as input an image triplet encoded as the difference between two vector Kronecker products and outputs a binary relevance ranking order. The proposed RankMLP can be built on top of any state-of-the-art feature extractor, and the entire deep neural network is called the ranking deep neural network, or RankDNN. RankDNN can also be flexibly fused with other post-processing methods. At meta-test time, RankDNN ranks support images according to their similarity to the query samples, and each query sample is assigned the class label of its nearest neighbor. Experiments demonstrate that RankDNN effectively improves the performance of baselines built on a variety of backbones and outperforms previous state-of-the-art algorithms on multiple few-shot learning benchmarks, including miniImageNet, tieredImageNet, Caltech-UCSD Birds, and CIFAR-FS. Furthermore, experiments on the cross-domain challenge demonstrate the superior transferability of RankDNN. The code is available at: https://github.com/guoqianyu-alberta/RankDNN.
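The abstract's triplet encoding can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the feature dimension, MLP sizes, and random weights are placeholder assumptions, and the names `kronecker_relation_feature` and `rank_mlp` are hypothetical; see the linked repository for the actual implementation.

```python
import numpy as np

def kronecker_relation_feature(q, a, b):
    """Encode a triplet (query q, candidates a, b) as the difference
    between two vector Kronecker products, per the abstract."""
    return np.kron(q, a) - np.kron(q, b)

# Toy two-layer MLP with random weights standing in for a trained RankMLP.
rng = np.random.default_rng(0)
d = 4                                    # toy feature dim (real backbones use more)
W1 = rng.standard_normal((d * d, 8))
W2 = rng.standard_normal(8)

def rank_mlp(x):
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    logit = h @ W2
    return 1.0 / (1.0 + np.exp(-logit))  # prob. that a ranks above b for q

q, a, b = rng.standard_normal((3, d))
x = kronecker_relation_feature(q, a, b)
p = rank_mlp(x)                          # p > 0.5 -> predict a more relevant than b
```

Note that swapping `a` and `b` negates the feature vector, so the ranking relation is naturally antisymmetric in the two candidates.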
Pages: 728-736
Number of pages: 9