Evolving Ensemble Model based on Hilbert Schmidt Independence Criterion for task-free continual learning

Cited: 0
Authors
Ye, Fei [1]
Bors, Adrian G. [2]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu, Peoples R China
[2] Univ York, Dept Comp Sci, York YO10 5GH, England
Keywords
Lifelong learning; Variational Autoencoders (VAE); Hilbert-Schmidt Independence Criterion; Representation learning
DOI
10.1016/j.neucom.2025.129370
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Continual Learning (CL) aims to extend the ability of deep learning models to continuously acquire new knowledge without forgetting. However, most CL studies assume that task identities and boundaries are known, which is not a realistic assumption in a real scenario. In this work, we address a more challenging and realistic setting in CL, namely Task-Free Continual Learning (TFCL), where an ensemble of experts is trained on non-stationary data streams without any task labels. To deal with TFCL, we introduce the Evolving Ensemble Model (EEM), which can dynamically add new experts to a mixture, thus adapting to changing data distributions while continuously learning new data sets. To ensure a compact network architecture for the EEM during training, we propose a novel expansion mechanism that uses the Hilbert-Schmidt Independence Criterion (HSIC) to evaluate the statistical consistency between the knowledge learned by each expert and that corresponding to the given data. This expansion mechanism does not require storing all previous samples and is more efficient because it performs statistical evaluations in a low-dimensional feature space inferred by a deep network. We also propose a new dropout mechanism that selectively removes unimportant samples from the memory buffer used to store the continuously incoming data before they are used for training. The proposed dropout mechanism ensures the diversity of the information learnt by the experts of our model. Extensive TFCL experiments show that the proposed approach achieves state-of-the-art results. The source code is available at https://github.com/dtuzi123/HSCI-DEM.
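The HSIC mentioned in the abstract measures statistical dependence between two sets of paired samples via kernel matrices. As a minimal illustration of the criterion itself (not the authors' implementation; the RBF kernel, bandwidth `sigma`, and variable names are assumptions), the standard biased empirical estimate is a sketch like:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC estimate: tr(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 5))
# Strongly dependent pair vs. an independent pair: HSIC near zero
# indicates independence, larger values indicate dependence.
dependent = hsic(Z, Z + 0.1 * rng.standard_normal((200, 5)))
independent = hsic(Z, rng.standard_normal((200, 5)))
```

In the paper's expansion mechanism, a low HSIC-style consistency score between an expert's learned representation and the incoming data would signal a distribution shift, triggering the addition of a new expert to the mixture.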
Pages: 12
Related papers (50 total)
  • [1] Task-Free Continual Learning
    Aljundi, Rahaf
    Kelchtermans, Klaas
    Tuytelaars, Tinne
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019: 11246-11255
  • [2] Learning an Evolved Mixture Model for Task-Free Continual Learning
    Ye, Fei
    Bors, Adrian G.
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2022: 1936-1940
  • [3] Similarity-Based Adaptation for Task-Aware and Task-Free Continual Learning
    Adel, Tameem
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2024, 80: 377-417
  • [4] Self-Evolved Dynamic Expansion Model for Task-Free Continual Learning
    Ye, Fei
    Bors, Adrian G.
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023: 22045-22055
  • [5] Kernel learning and optimization with Hilbert-Schmidt independence criterion
    Wang, Tinghua
    Li, Wei
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2018, 9: 1707-1717
  • [6] Kernel Learning with Hilbert-Schmidt Independence Criterion
    Wang, Tinghua
    Li, Wei
    He, Xianwen
    PATTERN RECOGNITION (CCPR 2016), PT I, 2016, 662: 720-730
  • [7] Robust Learning with the Hilbert-Schmidt Independence Criterion
    Greenfeld, Daniel
    Shalit, Uri
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [8] Task-Free Dynamic Sparse Vision Transformer for Continual Learning
    Ye, Fei
    Bors, Adrian G.
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024: 16442-16450