CONTINUAL LEARNING WITH FOUNDATION MODELS: AN EMPIRICAL STUDY OF LATENT REPLAY

Cited by: 0
Authors
Ostapenko, Oleksiy [1 ,2 ,3 ]
Lesort, Timothee [1 ,2 ]
Rodriguez, Pau [3 ]
Arefin, Md Rifat [1 ,2 ]
Douillard, Arthur [4 ,6 ]
Rish, Irina [1 ,2 ,7 ]
Charlin, Laurent [1 ,5 ,7 ]
Affiliations
[1] Mila Quebec AI Inst, Montreal, PQ, Canada
[2] Univ Montreal, Montreal, PQ, Canada
[3] ServiceNow, Santa Clara, CA 94043 USA
[4] Heuritech, Paris, France
[5] HEC Montreal, Montreal, PQ, Canada
[6] Sorbonne Univ, Paris, France
[7] Canada CIFAR AI Chair, Montreal, PQ, Canada
Source
CONFERENCE ON LIFELONG LEARNING AGENTS, 2022, Vol. 199
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Rapid development of large-scale pre-training has resulted in foundation models that can act as effective feature extractors on a variety of downstream tasks and domains. Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios. Our goal is twofold. First, we want to understand the compute-accuracy trade-off between CL in the raw-data space and in the latent space of pre-trained encoders. Second, we investigate how the characteristics of the encoder, the pre-training algorithm and data, as well as of the resulting latent space affect CL performance. For this, we compare the efficacy of various pre-trained models in large-scale benchmarking scenarios with a vanilla replay setting applied in the latent and in the raw-data space. Notably, this study shows how transfer, forgetting, task similarity and learning depend on the input data characteristics and not necessarily on the CL algorithms. First, we show that under some circumstances reasonable CL performance can readily be achieved with a non-parametric classifier at negligible compute. We then show how models pre-trained on broader data result in better performance for various replay sizes. We explain this via the representational similarity and transfer properties of these representations. Finally, we show the effectiveness of self-supervised (SSL) pre-training for downstream domains that are out-of-distribution relative to the pre-training domain. We point out and validate several research directions that can further increase the efficacy of latent CL, including representation ensembling. The diverse set of datasets used in this study can serve as a compute-efficient playground for further CL research. The codebase is available at https://github.com/oleksost/latent_CL.
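To make the latent-replay setting described in the abstract concrete, below is a minimal sketch, not the authors' code: a frozen encoder maps raw images to latent vectors, only those latents (plus labels) are stored in a small replay buffer, and only a lightweight classifier is trained on a mix of current and replayed latents. The toy encoder, buffer size, random-overwrite buffering, and label ranges are illustrative assumptions; in the paper the encoder would be a pre-trained foundation model (e.g. a ResNet or ViT) with its classification head removed.

```python
import random
import torch
import torch.nn as nn

torch.manual_seed(0)
random.seed(0)

# Stand-in for a frozen pre-trained encoder (hypothetical; any module mapping
# raw images to feature vectors works here).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256)).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

classifier = nn.Linear(256, 10)          # the only trainable component
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

BUFFER_SIZE = 500                        # assumed buffer size, not from the paper
buffer = []                              # replay buffer of (latent, label) pairs


def train_task(images, labels, replay_batch=32):
    """One CL task step: encode raw data once, mix with replayed latents, update classifier."""
    with torch.no_grad():
        latents = encoder(images)        # latent replay: store features, not raw images
    for z, y in zip(latents, labels):
        if len(buffer) < BUFFER_SIZE:
            buffer.append((z, int(y)))
        else:                            # simple random overwrite once the buffer is full
            buffer[random.randrange(BUFFER_SIZE)] = (z, int(y))
    replay = random.sample(buffer, min(replay_batch, len(buffer)))
    z_all = torch.cat([latents, torch.stack([z for z, _ in replay])])
    y_all = torch.cat([labels, torch.tensor([y for _, y in replay])])
    optimizer.zero_grad()
    loss_fn(classifier(z_all), y_all).backward()
    optimizer.step()


# Toy usage: two sequential "tasks" with disjoint label ranges.
train_task(torch.randn(64, 3, 32, 32), torch.randint(0, 5, (64,)))
train_task(torch.randn(64, 3, 32, 32), torch.randint(5, 10, (64,)))
```

Because only the classifier receives gradients and replayed samples are compact latent vectors, both the compute and the memory cost per task are far lower than replaying in the raw-data space, which is the trade-off the study quantifies.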
Pages: 32