Exploring lottery ticket hypothesis in few-shot learning

Cited by: 1
Authors
Xie, Yu [1 ,2 ]
Sun, Qiang [3 ]
Fu, Yanwei [2 ]
Affiliations
[1] Purple Mt Labs, Nanjing 211111, Peoples R China
[2] Fudan Univ, Sch Data Sci, Shanghai 200433, Peoples R China
[3] Fudan Univ, Acad Engn & Technol, Shanghai 200433, Peoples R China
Keywords
Few-shot learning; Lottery ticket hypothesis; Scale-space methods
DOI
10.1016/j.neucom.2023.126426
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The Lottery Ticket Hypothesis (LTH) [14] has attracted great attention since it was proposed. Researchers have since succeeded in finding alternative ways to identify the "winning ticket" and in extending the vanilla version to various settings, ranging from image segmentation to language pretraining. However, these works all focus on fully supervised learning with plenty of training instances, while ignoring the important scenario of learning from few examples, i.e., Few-Shot Learning (FSL). Unlike classical many-shot learning tasks, the common FSL setting assumes that the source and target categories are disjoint. To the best of our knowledge, this is the first time the lottery ticket hypothesis has been systematically studied in few-shot learning scenarios. To validate the hypothesis, we conduct extensive experiments on several few-shot learning methods with three widely used datasets: miniImageNet, CUB, and CIFAR-FS. Results reveal that we can even find "winning tickets" for some high-performance methods. In addition, our experiments on Cross-Domain FSL further validate the transferability of the found "winning tickets". Furthermore, since the process of finding winning tickets can be costly, we also study early-stage LTH for FSL by exploring the Inverse Scale Space (ISS). Empirical results validate the efficacy of early-stage LTH. (c) 2023 Published by Elsevier B.V.
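For context, the hypothesis is usually tested with iterative magnitude pruning (IMP): train the network, prune the smallest-magnitude weights, rewind the survivors to their initial values, and repeat. The following is a minimal PyTorch sketch of that standard IMP loop (in the spirit of the original LTH paper), not the authors' exact procedure; the train_one_round callback, pruning fraction, and round count are placeholders, and in an FSL setting the callback would be an episodic meta-training routine.

# Minimal sketch of iterative magnitude pruning (IMP) for finding an LTH
# "winning ticket". Assumptions: `train_one_round` is a user-supplied training
# loop that respects the masks; hyperparameters are illustrative only.
import copy
import torch
import torch.nn as nn

def imp_search(model: nn.Module, train_one_round, prune_frac=0.2, rounds=5):
    """Return per-layer binary masks describing a candidate winning ticket."""
    init_state = copy.deepcopy(model.state_dict())  # theta_0: saved initialization
    # Prune only weight matrices/tensors (dim > 1); leave biases and norms dense.
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        # 1) Rewind to the original initialization and train the masked network.
        model.load_state_dict(init_state)
        train_one_round(model, masks)

        # 2) Prune the lowest-magnitude surviving weights in each layer.
        for name, param in model.named_parameters():
            if name not in masks:
                continue
            mask = masks[name]
            alive = param.data[mask.bool()].abs()
            k = int(prune_frac * alive.numel())
            if k == 0:
                continue
            threshold = alive.kthvalue(k).values
            masks[name] = mask * (param.data.abs() > threshold).float()

    # To test the ticket, rewind to init_state and retrain under these masks.
    return masks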
Pages: 10