The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models

Cited by: 38
Authors
Chen, Tianlong [1 ]
Frankle, Jonathan [2 ]
Chang, Shiyu [3 ]
Liu, Sijia [3 ,4 ]
Zhang, Yang [3 ]
Carbin, Michael [2 ]
Wang, Zhangyang [1 ]
Affiliations
[1] Univ Texas Austin, Austin, TX 78712 USA
[2] MIT CSAIL, Cambridge, MA USA
[3] MIT IBM Watson AI Lab, Cambridge, MA USA
[4] Michigan State Univ, E Lansing, MI 48824 USA
Funding
National Science Foundation (U.S.);
DOI
10.1109/CVPR46437.2021.01604
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
The computer vision world has been regaining enthusiasm for various pre-trained models, including both classical ImageNet supervised pre-training and recently emerged self-supervised pre-training methods such as simCLR [10] and MoCo [40]. Pre-trained weights often boost a wide range of downstream tasks, including classification, detection, and segmentation. The latest studies suggest that pre-training benefits from gigantic model capacity [11]. We are hence curious and ask: after pre-training, does a pre-trained model indeed have to stay large for its downstream transferability? In this paper, we examine supervised and self-supervised pre-trained models through the lens of the lottery ticket hypothesis (LTH) [31]. LTH identifies highly sparse matching subnetworks that can be trained in isolation from (nearly) scratch yet still reach the full models' performance. We extend the scope of LTH and ask whether matching subnetworks that enjoy the same downstream transfer performance still exist in pre-trained computer vision models. Our extensive experiments convey an overall positive message: from all pre-trained weights obtained by ImageNet classification, simCLR, and MoCo, we are consistently able to locate such matching subnetworks at 59.04% to 96.48% sparsity that transfer universally to multiple downstream tasks, with no performance degradation compared to using the full pre-trained weights. Further analyses reveal that subnetworks found from different pre-training schemes tend to yield diverse mask structures and perturbation sensitivities. We conclude that the core LTH observations remain generally relevant in the pre-training paradigm of computer vision, but more delicate discussions are needed in some cases. Code and pre-trained models will be made available at: https://github.com/VITA-Group/CV_LTH_Pre-training.
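For context, the sparsity range quoted above (59.04% to 96.48%) matches what iterative magnitude pruning (IMP), the standard LTH procedure, produces when 20% of the remaining weights are removed per round: the surviving fraction after k rounds is 0.8^k, giving 59.04% sparsity at k = 4 and 96.48% at k = 15. The sketch below is a minimal illustration of that schedule, assuming a recent PyTorch/torchvision and using torch.nn.utils.prune on a stand-in ResNet-50; the fine-tune-and-rewind step of a full lottery-ticket pipeline is only stubbed out, so this is not the authors' exact recipe.

```python
# Minimal sketch of 20%-per-round iterative magnitude pruning (IMP),
# the common LTH convention, on a toy torchvision ResNet-50.
# Sparsity after k rounds is approximately 1 - 0.8**k:
#   k = 4  -> ~59.04%,   k = 15 -> ~96.48%.
# Fine-tuning/rewinding between rounds is omitted, so this is only a
# schedule illustration, not the paper's exact pipeline.

import copy

import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet50


def prunable_parameters(model):
    """Collect (module, 'weight') pairs for all conv and linear layers."""
    return [
        (m, "weight")
        for m in model.modules()
        if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))
    ]


def imp_round(model, rate=0.2):
    """One IMP round: globally prune the smallest-magnitude 20% of the still-unpruned weights."""
    prune.global_unstructured(
        prunable_parameters(model),
        pruning_method=prune.L1Unstructured,
        amount=rate,
    )


def sparsity(model):
    """Fraction of zeroed weights across all prunable layers."""
    zeros, total = 0, 0
    for m, _ in prunable_parameters(model):
        w = m.weight  # masked weight once a pruning mask is attached
        zeros += int((w == 0).sum())
        total += w.numel()
    return zeros / total


if __name__ == "__main__":
    model = resnet50(weights=None)  # stand-in for a pre-trained backbone
    init_state = copy.deepcopy(model.state_dict())  # weights to "rewind" to (unused in this sketch)

    for k in range(1, 16):
        imp_round(model, rate=0.2)
        # ... fine-tune on the downstream task here, then rewind the surviving
        # weights to `init_state` before the next round (omitted) ...
        print(f"round {k:2d}: sparsity = {sparsity(model):.2%}  "
              f"(expected {1 - 0.8 ** k:.2%})")
```

In a full pipeline, each pruned subnetwork would then be fine-tuned on a downstream task (classification, detection, or segmentation) and counted as "matching" only if it reaches the accuracy of the dense pre-trained baseline.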
Pages: 16301 - 16311
Number of pages: 11
Related papers
50 records in total
  • [41] Class incremental learning with self-supervised pre-training and prototype learning
    Liu, Wenzhuo
    Wu, Xin-Jian
    Zhu, Fei
    Yu, Ming-Ming
    Wang, Chuang
    Liu, Cheng-Lin
    PATTERN RECOGNITION, 2025, 157
  • [42] DenseCL: A simple framework for self-supervised dense visual pre-training
    Wang, Xinlong
    Zhang, Rufeng
    Shen, Chunhua
    Kong, Tao
    VISUAL INFORMATICS, 2023, 7 (01) : 30 - 40
  • [43] Masked Autoencoder for Self-Supervised Pre-training on Lidar Point Clouds
    Hess, Georg
    Jaxing, Johan
    Svensson, Elias
    Hagerman, David
    Petersson, Christoffer
    Svensson, Lennart
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW), 2023, : 350 - 359
  • [44] Feature-Suppressed Contrast for Self-Supervised Food Pre-training
    Liu, Xinda
    Zhu, Yaohui
    Liu, Linhu
    Tian, Jiang
    Wang, Lili
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 4359 - 4367
  • [45] MULTI-TASK SELF-SUPERVISED PRE-TRAINING FOR MUSIC CLASSIFICATION
    Wu, Ho-Hsiang
    Kao, Chieh-Chi
    Tang, Qingming
    Sun, Ming
    McFee, Brian
    Bello, Juan Pablo
    Wang, Chao
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 556 - 560
  • [46] PreTraM: Self-supervised Pre-training via Connecting Trajectory and Map
    Xu, Chenfeng
    Li, Tian
    Tang, Chen
    Sun, Lingfeng
    Keutzer, Kurt
    Tomizuka, Masayoshi
    Fathi, Alireza
    Zhan, Wei
    COMPUTER VISION, ECCV 2022, PT XXXIX, 2022, 13699 : 34 - 50
  • [47] Self-supervised Pre-training with Acoustic Configurations for Replay Spoofing Detection
    Shim, Hye-jin
    Heo, Hee-Soo
    Jung, Jee-weon
    Yu, Ha-Jin
    INTERSPEECH 2020, 2020, : 1091 - 1095
  • [48] Self-Supervised Underwater Image Generation for Underwater Domain Pre-Training
    Wu, Zhiheng
    Wu, Zhengxing
    Chen, Xingyu
    Lu, Yue
    Yu, Junzhi
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73 : 1 - 14
  • [49] COMPARISON OF SELF-SUPERVISED SPEECH PRE-TRAINING METHODS ON FLEMISH DUTCH
    Poncelet, Jakob
    Hamme, Hugo Van
    2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU), 2021, : 169 - 176
  • [50] Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels
    Zheltonozhskii, Evgenii
    Baskin, Chaim
    Mendelson, Avi
    Bronstein, Alex M.
    Litany, Or
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 387 - 397