Exploring lottery ticket hypothesis in few-shot learning

Cited: 1
Authors
Xie, Yu [1 ,2 ]
Sun, Qiang [3 ]
Fu, Yanwei [2 ]
Affiliations
[1] Purple Mt Labs, Nanjing 211111, Peoples R China
[2] Fudan Univ, Sch Data Sci, Shanghai 200433, Peoples R China
[3] Fudan Univ, Acad Engn & Technol, Shanghai 200433, Peoples R China
Keywords
Few-shot learning; Lottery ticket hypothesis; Scale-space methods
DOI
10.1016/j.neucom.2023.126426
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Lottery Ticket Hypothesis (LTH) [14] has attracted great attention since it was proposed. Researchers have since found alternative ways to identify the "winning ticket" and have extended the vanilla version to a variety of settings, ranging from image segmentation to language pretraining. However, these works all target fully supervised learning with plenty of training instances, and they ignore the important scenario of learning from few examples, i.e., Few-Shot Learning (FSL). Unlike classical many-shot learning tasks, the common FSL setting assumes that source and target categories are disjoint. To the best of our knowledge, this work is the first to systematically study the lottery ticket hypothesis in few-shot learning scenarios. To validate the hypothesis, we conduct extensive experiments on several few-shot learning methods with three widely used datasets: miniImageNet, CUB, and CIFAR-FS. The results reveal that "winning tickets" can be found even for some high-performance methods. In addition, our experiments on cross-domain FSL further validate the transferability of the found "winning tickets". Furthermore, since finding winning tickets can be costly, we study early-stage LTH for FSL by exploring the Inverse Scale Space (ISS). Empirical results validate the efficacy of early-stage LTH. (c) 2023 Published by Elsevier B.V.
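The "winning tickets" referred to in the abstract are conventionally found with iterative magnitude pruning (IMP), the procedure from the original LTH paper [14]. Below is a minimal, hypothetical Python/PyTorch sketch of IMP; the `build_backbone` and `train_fsl` helpers, the pruning fraction, and the number of rounds are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of iterative magnitude pruning (IMP) for finding an
# LTH "winning ticket" (after Frankle & Carbin [14]). `build_backbone`
# and `train_fsl` are hypothetical placeholders for the few-shot
# backbone constructor and the episodic training loop.
import copy
import torch

def find_winning_ticket(build_backbone, train_fsl, rounds=5, prune_frac=0.2):
    model = build_backbone()
    init_state = copy.deepcopy(model.state_dict())  # theta_0, kept for rewinding
    # Prune only weight tensors (conv/linear), not biases or norm params.
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()
             if p.dim() > 1}

    for _ in range(rounds):
        train_fsl(model, masks)  # train to convergence, keeping masks applied

        # Prune the smallest-magnitude surviving weights in each layer.
        for name, param in model.named_parameters():
            if name not in masks:
                continue
            alive = param.data[masks[name].bool()].abs()
            if alive.numel() == 0:
                continue
            k = max(1, int(prune_frac * alive.numel()))
            threshold = alive.kthvalue(k).values
            masks[name] = masks[name] * (param.data.abs() > threshold).float()

        # Rewind surviving weights to their original initialization.
        model.load_state_dict(init_state)
        for name, param in model.named_parameters():
            if name in masks:
                param.data *= masks[name]

    return model, masks
```

The rewind step is what distinguishes LTH from ordinary pruning: the sparse subnetwork is reset to its initial weights rather than kept at its trained values, and the hypothesis holds if that subnetwork retrains to match the dense model.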
Pages: 10
Related Papers
50 records in total
  • [41] Few-shot Learning with Prompting Methods
    2023 6TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION AND IMAGE ANALYSIS, IPRIA, 2023,
  • [42] Active Few-Shot Learning with FASL
    Müller, Thomas
    Pérez-Torró, Guillermo
    Basile, Angelo
    Franco-Salvador, Marc
    NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS (NLDB 2022), 2022, 13286 : 98 - 110
  • [43] Explore pretraining for few-shot learning
    Li, Yan
    Huang, Jinjie
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (2) : 4691 - 4702
  • [44] Few-Shot Learning for Opinion Summarization
    Bražinskas, Arthur
    Lapata, Mirella
    Titov, Ivan
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 4119 - 4135
  • [45] Few-Shot Learning With Geometric Constraints
    Jung, Hong-Gyu
    Lee, Seong-Whan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (11) : 4660 - 4672
  • [46] Prototype Reinforcement for Few-Shot Learning
    Xu, Liheng
    Xie, Qian
    Jiang, Baoqing
    Zhang, Jiashuo
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 4912 - 4916
  • [47] Learning about few-shot concept learning
    Rastogi, Ananya
    NATURE COMPUTATIONAL SCIENCE, 2022, 2 (11) : 698 - 698
  • [48] An Applicative Survey on Few-shot Learning
    Zhang J.
    Zhang X.
    Lv L.
    Di Y.
    Chen W.
    Recent Patents on Engineering, 2022, 16 (05) : 104 - 124
  • [49] Secure collaborative few-shot learning
    Xie, Yu
    Wang, Han
    Yu, Bin
    Zhang, Chen
    KNOWLEDGE-BASED SYSTEMS, 2020, 203
  • [50] Prototypical Networks for Few-shot Learning
    Snell, Jake
    Swersky, Kevin
    Zemel, Richard
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30