Task-Free Continual Learning via Online Discrepancy Distance Learning

Cited by: 0
Authors
Ye, Fei [1]
Bors, Adrian G. [1]
Affiliations
[1] Univ York, Dept Comp Sci, York YO10 5GH, England
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning from non-stationary data streams, also called Task-Free Continual Learning (TFCL), remains challenging due to the absence of explicit task information in most applications. Although some algorithms have recently been proposed for TFCL, these methods lack theoretical guarantees. Moreover, there are no theoretical studies of forgetting during TFCL. This paper develops a new theoretical analysis framework that derives generalization bounds based on the discrepancy distance between the visited samples and the entire information made available for training the model. This analysis provides new insights into the forgetting behaviour in classification tasks. Inspired by this theoretical model, we propose a new approach equipped with a dynamic component expansion mechanism for a mixture model, namely Online Discrepancy Distance Learning (ODDL). ODDL estimates the discrepancy between the current memory and the already accumulated knowledge and uses it as an expansion signal, aiming to ensure a compact network architecture with optimal performance. We then propose a new sample selection approach that selectively stores samples into the memory buffer through the discrepancy-based measure, further improving the performance. We perform several TFCL experiments with the proposed methodology, which demonstrate that the proposed approach achieves state-of-the-art performance.
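The expansion mechanism described in the abstract can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the paper's discrepancy distance is defined with respect to a hypothesis class, whereas here a simple feature-mean distance (an assumption) stands in as a cheap proxy, and the class name `ODDLSketch` and the fixed `threshold` are hypothetical.

```python
import numpy as np

def discrepancy(a, b):
    # Proxy for the discrepancy distance: distance between the
    # feature means of two sample sets (illustrative only).
    return float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))

class ODDLSketch:
    """Toy sketch of discrepancy-triggered mixture expansion."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.components = []  # each entry holds the data one component absorbed

    def observe(self, batch):
        """Process a memory batch; return True if a new component is added."""
        if not self.components:
            self.components.append(batch)
            return True  # first component is always created
        # Compare the current memory batch against all accumulated knowledge.
        accumulated = np.vstack(self.components)
        if discrepancy(batch, accumulated) > self.threshold:
            # Expansion signal fired: spawn a new mixture component.
            self.components.append(batch)
            return True
        # Otherwise absorb the batch into the most recent component.
        self.components[-1] = np.vstack([self.components[-1], batch])
        return False
```

Feeding it batches drawn from one distribution keeps the model compact; a batch whose statistics deviate beyond the threshold triggers expansion, mirroring the trade-off between compactness and plasticity that the paper's bound is meant to control.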
Pages: 14
Related Papers (50 records)
  • [11] Similarity-Based Adaptation for Task-Aware and Task-Free Continual Learning
    Adel, Tameem
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2024, 80 : 377 - 417
  • [13] Self-Evolved Dynamic Expansion Model for Task-Free Continual Learning
    Ye, Fei
    Bors, Adrian G.
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 22045 - 22055
  • [14] Bio-inspired, task-free continual learning through activity regularization
    Laessig, Francesco
    Aceituno, Pau Vilimelis
    Sorbaro, Martino
    Grewe, Benjamin F.
    BIOLOGICAL CYBERNETICS, 2023, 117 (4-5) : 345 - 361
  • [16] An ANN-Guided Approach to Task-Free Continual Learning with Spiking Neural Networks
    Zhang, Jie
    Fan, Wentao
    Liu, Xin
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT VIII, 2024, 14432 : 217 - 228
  • [17] Tf-GCZSL: Task-free generalized continual zero-shot learning
    Gautam, Chandan
    Parameswaran, Sethupathy
    Mishra, Ashish
    Sundaram, Suresh
    NEURAL NETWORKS, 2022, 155 : 487 - 497
  • [18] Evolving Ensemble Model based on Hilbert Schmidt Independence Criterion for task-free continual learning
    Ye, Fei
    Bors, Adrian G.
    NEUROCOMPUTING, 2025, 624
  • [19] Doubly Perturbed Task Free Continual Learning
    Lee, Byung Hyun
    Oh, Min-Hwan
    Chun, Se Young
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 12, 2024, : 13346 - 13354
  • [20] EXEMPLAR-FREE ONLINE CONTINUAL LEARNING
    He, Jiangpeng
    Zhu, Fengqing
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 541 - 545