RankDNN: Learning to Rank for Few-Shot Learning

Cited by: 0
Authors
Guo, Qianyu [1 ,2 ]
Gong, Haotong [1 ]
Wei, Xujun [1 ,3 ]
Fu, Yanwei [2 ]
Yu, Yizhou [4 ]
Zhang, Wenqiang [2 ,3 ]
Ge, Weifeng [1 ,2 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Nebula AI Grp, Shanghai, Peoples R China
[2] Shanghai Key Lab Intelligent Informat Proc, Shanghai, Peoples R China
[3] Fudan Univ, Acad Engn & Technol, Shanghai, Peoples R China
[4] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
KRONECKER PRODUCT;
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper introduces a new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking relation classification. In comparison to image classification, ranking relation classification is sample-efficient and domain-agnostic. Moreover, it provides a new perspective on few-shot learning and is complementary to state-of-the-art methods. The core component of our deep neural network is a simple MLP, which takes as input an image triplet encoded as the difference between two vector-Kronecker products, and outputs a binary relevance ranking order. The proposed RankMLP can be built on top of any state-of-the-art feature extractor, and our entire deep neural network is called the ranking deep neural network, or RankDNN. Meanwhile, RankDNN can be flexibly fused with other post-processing methods. During meta-testing, RankDNN ranks support images according to their similarity with the query samples, and each query sample is assigned the class label of its nearest neighbor. Experiments demonstrate that RankDNN can effectively improve the performance of its baselines based on a variety of backbones, and it outperforms previous state-of-the-art algorithms on multiple few-shot learning benchmarks, including miniImageNet, tieredImageNet, Caltech-UCSD Birds, and CIFAR-FS. Furthermore, experiments on the cross-domain challenge demonstrate the superior transferability of RankDNN. The code is available at: https://github.com/guoqianyu-alberta/RankDNN.
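As a rough illustration of the triplet encoding described in the abstract, the sketch below encodes a (query, support_a, support_b) feature triplet as the difference of two vector-Kronecker products and scores it with a small MLP that outputs a binary ranking relation. The names RankMLP, encode_triplet, feat_dim, and hidden_dim, as well as the feature dimensionality, are illustrative assumptions rather than the authors' implementation; the official code at the URL above is authoritative.

# Minimal sketch, assuming PyTorch features from an arbitrary backbone.
import torch
import torch.nn as nn

class RankMLP(nn.Module):
    """Binary classifier over ranking relations (a simple MLP, per the abstract)."""
    def __init__(self, feat_dim: int, hidden_dim: int = 1024):
        super().__init__()
        # The input is the difference of two vector-Kronecker products,
        # so its dimensionality is feat_dim ** 2.
        self.net = nn.Sequential(
            nn.Linear(feat_dim * feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Logit for: "support_a is more relevant to the query than support_b."
        return self.net(x)

def encode_triplet(query: torch.Tensor, support_a: torch.Tensor, support_b: torch.Tensor) -> torch.Tensor:
    """Encode the triplet as the difference between two vector-Kronecker products."""
    return torch.kron(query, support_a) - torch.kron(query, support_b)

# Usage with placeholder features (64-d only to keep the example small; real
# backbones typically produce higher-dimensional features).
feat_dim = 64
query, s_a, s_b = (torch.randn(feat_dim) for _ in range(3))
rank_mlp = RankMLP(feat_dim)
logit = rank_mlp(encode_triplet(query, s_a, s_b))
prob_a_closer = torch.sigmoid(logit)  # probability that s_a ranks above s_b for this query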
Pages: 728-736
Number of pages: 9