CONTINUAL LEARNING WITH FOUNDATION MODELS: AN EMPIRICAL STUDY OF LATENT REPLAY

Cited by: 0
Authors
Ostapenko, Oleksiy [1 ,2 ,3 ]
Lesort, Timothee [1 ,2 ]
Rodriguez, Pau [3 ]
Arefin, Md Rifat [1 ,2 ]
Douillard, Arthur [4 ,6 ]
Rish, Irina [1 ,2 ,7 ]
Charlin, Laurent [1 ,5 ,7 ]
Affiliations
[1] Mila Quebec AI Inst, Montreal, PQ, Canada
[2] Univ Montreal, Montreal, PQ, Canada
[3] ServiceNow, Santa Clara, CA 94043 USA
[4] Heuritech, Paris, France
[5] HEC Montreal, Montreal, PQ, Canada
[6] Sorbonne Univ, Paris, France
[7] Canada CIFAR AI Chair, Montreal, PQ, Canada
Source
CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 199 | 2022 / Vol. 199
Keywords
DOI
Not available
Chinese Library Classification (CLC) number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Rapid development of large-scale pre-training has resulted in foundation models that can act as effective feature extractors on a variety of downstream tasks and domains. Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios. Our goal is twofold. First, we want to understand the compute-accuracy trade-off between CL in the raw-data space and in the latent space of pre-trained encoders. Second, we investigate how the characteristics of the encoder, the pre-training algorithm and data, as well as of the resulting latent space affect CL performance. For this, we compare the efficacy of various pre-trained models in large-scale benchmarking scenarios with a vanilla replay setting applied in the latent and in the raw-data space. Notably, this study shows how transfer, forgetting, task similarity and learning are dependent on the input data characteristics and not necessarily on the CL algorithms. First, we show that under some circumstances reasonable CL performance can readily be achieved with a non-parametric classifier at negligible compute. We then show how models pre-trained on broader data result in better performance for various replay sizes. We explain this with representational similarity and transfer properties of these representations. Finally, we show the effectiveness of self-supervised (SSL) pre-training for downstream domains that are out-of-distribution as compared to the pre-training domain. We point out and validate several research directions that can further increase the efficacy of latent CL including representation ensembling. The diverse set of datasets used in this study can serve as a compute-efficient playground for further CL research. Codebase is available under https://github.com/oleksost/latent_CL.
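To make the latent-replay setting described in the abstract concrete, the following is a minimal sketch: a frozen encoder maps images to latent features, a small replay buffer stores those latents (rather than raw images), and only a lightweight classifier head is trained on a mix of current and replayed features. The stand-in encoder, buffer capacity, class count, and toy task stream below are illustrative assumptions, not the paper's configuration; in the study, pre-trained vision backbones (e.g., supervised or self-supervised ImageNet-scale encoders) play the encoder role.

```python
# Minimal latent-replay sketch (illustrative assumptions, not the paper's exact setup).
import random
import torch
import torch.nn as nn

class LatentReplayBuffer:
    """Reservoir-style buffer storing (latent, label) pairs instead of raw images."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.latents, self.labels = [], []
        self.seen = 0

    def add(self, z_batch, y_batch):
        # Reservoir sampling keeps a uniform sample of everything seen so far.
        for z, y in zip(z_batch, y_batch):
            self.seen += 1
            if len(self.latents) < self.capacity:
                self.latents.append(z.detach().clone())
                self.labels.append(int(y))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.latents[j] = z.detach().clone()
                    self.labels[j] = int(y)

    def sample(self, batch_size):
        idx = random.sample(range(len(self.latents)), min(batch_size, len(self.latents)))
        z = torch.stack([self.latents[i] for i in idx])
        y = torch.tensor([self.labels[i] for i in idx])
        return z, y

# Stand-in frozen encoder (hypothetical); the study uses pre-trained vision backbones here.
feat_dim, num_classes = 512, 100
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim)).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

classifier = nn.Linear(feat_dim, num_classes)      # only the head is trained
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01)
buffer = LatentReplayBuffer(capacity=500)
loss_fn = nn.CrossEntropyLoss()

def train_task(loader):
    """One continual-learning task: encode once, then mix current and replayed latents."""
    for x, y in loader:
        with torch.no_grad():
            z = encoder(x)                          # frozen features; no gradients needed
        z_train, y_train = z, y
        if buffer.latents:
            z_mem, y_mem = buffer.sample(x.size(0))
            z_train = torch.cat([z_train, z_mem])
            y_train = torch.cat([y_train, y_mem])
        loss = loss_fn(classifier(z_train), y_train)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        buffer.add(z, y)

# Toy two-task stream of random tensors standing in for real CL benchmarks.
for _ in range(2):
    loader = [(torch.randn(16, 3, 32, 32), torch.randint(0, num_classes, (16,)))
              for _ in range(5)]
    train_task(loader)
```

Because the buffer holds low-dimensional latents instead of raw images, memory and per-step compute shrink substantially relative to raw-data replay, which is the compute-accuracy trade-off the abstract compares across pre-trained encoders.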
Pages: 32