Task-Free Continual Learning via Online Discrepancy Distance Learning

Cited: 0
Authors
Ye, Fei [1 ]
Bors, Adrian G. [1 ]
Affiliations
[1] Univ York, Dept Comp Sci, York YO10 5GH, England
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Learning from non-stationary data streams, also called Task-Free Continual Learning (TFCL), remains challenging due to the absence of explicit task information in most applications. Although some algorithms have recently been proposed for TFCL, they lack theoretical guarantees, and forgetting during TFCL has not been studied theoretically. This paper develops a new theoretical analysis framework that derives generalization bounds based on the discrepancy distance between the visited samples and the entire information made available for training the model. This analysis provides new insights into the forgetting behaviour in classification tasks. Inspired by this theoretical model, we propose a new approach equipped with a dynamic component expansion mechanism for a mixture model, namely Online Discrepancy Distance Learning (ODDL). ODDL estimates the discrepancy between the current memory and the already accumulated knowledge and uses it as an expansion signal, aiming to ensure a compact network architecture with optimal performance. We then propose a new sample selection approach that selectively stores samples in the memory buffer according to this discrepancy-based measure, further improving performance. We perform several TFCL experiments with the proposed methodology, which demonstrate that the proposed approach achieves state-of-the-art performance.
Pages: 14
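
The abstract describes two discrepancy-driven mechanisms: an expansion signal for the mixture model and a discrepancy-based sample-selection rule for the memory buffer. The Python sketch below is a minimal illustration only, assuming that the accuracy of a binary domain discriminator can stand in as a proxy for the discrepancy distance between the memory and the accumulated knowledge; the function names, the threshold tau, and the discriminator proxy are hypothetical and do not reproduce the authors' ODDL implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def proxy_discrepancy(memory_x, knowledge_x):
    """Proxy for the discrepancy distance between the current memory buffer
    and samples representing the accumulated knowledge (e.g. data replayed
    by already-trained mixture components). A discriminator that separates
    the two sets well indicates a large distributional gap."""
    x = np.vstack([memory_x, knowledge_x])
    y = np.concatenate([np.zeros(len(memory_x)), np.ones(len(knowledge_x))])
    clf = LogisticRegression(max_iter=1000).fit(x, y)
    acc = clf.score(x, y)                   # 0.5 ~ indistinguishable, 1.0 ~ disjoint
    return 2.0 * max(acc - 0.5, 0.0), clf   # score rescaled to [0, 1]

def maybe_expand(components, memory_x, knowledge_x, tau=0.6):
    """Expansion signal: add a new mixture component only when the memory
    distribution has drifted far from the accumulated knowledge, keeping
    the architecture compact otherwise (tau is an assumed threshold)."""
    disc, _ = proxy_discrepancy(memory_x, knowledge_x)
    if disc > tau:
        components.append({"frozen": False})  # placeholder for a new expert
    return components

def select_for_buffer(candidates, knowledge_x, k):
    """Discrepancy-based sample selection: retain the k candidates that the
    accumulated knowledge explains worst, i.e. those the discriminator most
    confidently separates from the knowledge samples."""
    _, clf = proxy_discrepancy(candidates, knowledge_x)
    novelty = clf.predict_proba(candidates)[:, 0]  # column 0 = "memory" class
    return candidates[np.argsort(-novelty)[:k]]

In use, one would call maybe_expand after each incoming batch and select_for_buffer when the fixed-capacity memory overflows, so that both decisions derive from the same discrepancy estimate, mirroring the dual role the abstract assigns to it.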
Related Papers
50 records in total
  • [31] Continual Multiview Task Learning via Deep Matrix Factorization
    Sun, Gan
    Cong, Yang
    Zhang, Yulun
    Zhao, Guoshuai
    Fu, Yun
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (01) : 139 - 150
  • [32] Learning on the Job: Online Lifelong and Continual Learning
    Liu, Bing
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 13544 - 13549
  • [33] Online Continual Learning via Maximal Coding Rate Reduction
    Liu, Zhanyang
    Liu, Jinfeng
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT V, ICIC 2024, 2024, 14866 : 176 - 187
  • [34] Continual Variational Autoencoder Learning via Online Cooperative Memorization
    Ye, Fei
    Bors, Adrian G.
    COMPUTER VISION, ECCV 2022, PT XXIII, 2022, 13683 : 531 - 549
  • [35] Continual Relation Extraction via Sequential Multi-Task Learning
    Thanh-Thien Le
    Manh Nguyen
    Tung Thanh Nguyen
    Linh Ngo Van
    Thien Huu Nguyen
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2024, 38 (16) : 18444 - 18452
  • [36] Transformer with Task Selection for Continual Learning
    Huang, Sheng-Kai
    Huang, Chun-Rong
    2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023
  • [37] Continual Learning With Unknown Task Boundary
    Zhu, Xiaoxie
    Yi, Jinfeng
    Zhang, Lijun
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024 : 1 - 13
  • [38] Measuring Asymmetric Gradient Discrepancy in Parallel Continual Learning
    Lyu, Fan
    Sun, Qing
    Shang, Fanhua
    Wan, Liang
    Feng, Wei
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 11377 - 11386
  • [39] Order effects in task-free learning: Tuning to information-carrying sound features
    Todd, Juanita
    Yeark, Mattsen
    Auriac, Paul
    Paton, Bryan
    Winkler, Istvan
    CORTEX, 2024, 172 : 114 - 124
  • [40] Rehearsal-Free Online Continual Learning for Automatic Speech Recognition
    Vander Eeckt, Steven
    Van Hamme, Hugo
    INTERSPEECH 2023, 2023, : 944 - 948