Exploring lottery ticket hypothesis in few-shot learning

Cited by: 1
Authors
Xie, Yu [1 ,2 ]
Sun, Qiang [3 ]
Fu, Yanwei [2 ]
Institutions
[1] Purple Mt Labs, Nanjing 211111, Peoples R China
[2] Fudan Univ, Sch Data Sci, Shanghai 200433, Peoples R China
[3] Fudan Univ, Acad Engn & Technol, Shanghai 200433, Peoples R China
Keywords
Few-shot learning; Lottery ticket hypothesis; Scale-space methods
DOI
10.1016/j.neucom.2023.126426
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The Lottery Ticket Hypothesis (LTH) [14] has attracted great attention since it was proposed. Researchers have since devised alternative ways to find the "winning ticket" and extended the vanilla version to a variety of settings, ranging from image segmentation to language pretraining. However, these works all focus on fully supervised learning with plenty of training instances, ignoring the important scenario of learning from few examples, i.e., Few-Shot Learning (FSL). Unlike classical many-shot learning tasks, the common FSL setting assumes disjoint source and target categories. To the best of our knowledge, this is the first systematic study of the lottery ticket hypothesis in few-shot learning scenarios. To validate the hypothesis, we conduct extensive experiments on several few-shot learning methods with three widely used datasets: miniImageNet, CUB, and CIFAR-FS. The results reveal that "winning tickets" can be found even for some high-performance methods. In addition, our experiments on Cross-Domain FSL further validate the transferability of the found "winning tickets". Furthermore, since finding winning tickets can be costly, we study early-stage LTH for FSL by exploring the Inverse Scale Space (ISS). Empirical results validate the efficacy of early-stage LTH. (c) 2023 Published by Elsevier B.V.
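The abstract refers to finding "winning tickets". As a point of reference, a minimal sketch of the standard iterative magnitude-pruning procedure from the original LTH paper [14] is given below; this is not this paper's FSL-specific pipeline (whose details the abstract does not give), and `train_fn` is an assumed callback standing in for a full training run.

```python
import numpy as np

def magnitude_prune(weights, mask, prune_frac):
    """Zero out the smallest-magnitude surviving weights.

    weights:    flat array of trained weights
    mask:       binary array, 1 = weight still active
    prune_frac: fraction of the *remaining* weights to remove this round
    """
    active = np.flatnonzero(mask)
    n_prune = int(len(active) * prune_frac)
    if n_prune == 0:
        return mask
    # Rank surviving weights by magnitude and drop the smallest ones.
    order = active[np.argsort(np.abs(weights[active]))]
    new_mask = mask.copy()
    new_mask[order[:n_prune]] = 0
    return new_mask

def find_winning_ticket(init_weights, train_fn, rounds=3, prune_frac=0.2):
    """Iterative magnitude pruning with rewinding to the initialization.

    train_fn(weights, mask) -> trained weights; in practice this is a full
    training run of the (masked) network on the source tasks.
    """
    mask = np.ones_like(init_weights)
    for _ in range(rounds):
        trained = train_fn(init_weights * mask, mask)
        mask = magnitude_prune(trained, mask, prune_frac)
    # The "winning ticket": the original initialization under the final mask,
    # which is then retrained from scratch to test the hypothesis.
    return init_weights * mask, mask
```

Each round removes 20% of the surviving weights, so three rounds on a 100-weight vector leave 52 active weights; the returned ticket is the pruned copy of the *initial* weights, which the hypothesis claims can be retrained to match the dense network.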
Pages: 10