Task-Free Continual Learning via Online Discrepancy Distance Learning

Cited: 0
Authors
Ye, Fei [1]
Bors, Adrian G. [1]
Institution
[1] Univ York, Dept Comp Sci, York YO10 5GH, England
Keywords
DOI
Not available
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning from non-stationary data streams, also called Task-Free Continual Learning (TFCL), remains challenging due to the absence of explicit task information in most applications. Although several algorithms have recently been proposed for TFCL, they lack theoretical guarantees, and there are no theoretical studies of forgetting during TFCL. This paper develops a new theoretical analysis framework that derives generalization bounds based on the discrepancy distance between the visited samples and the entire information made available for training the model. This analysis provides new insights into the forgetting behaviour in classification tasks. Inspired by this theoretical model, we propose Online Discrepancy Distance Learning (ODDL), a new approach equipped with a dynamic component-expansion mechanism for a mixture model. ODDL estimates the discrepancy between the current memory and the already accumulated knowledge and uses it as an expansion signal, aiming to ensure a compact network architecture with optimal performance. We then propose a new sample-selection approach that selectively stores samples in the memory buffer according to the discrepancy-based measure, further improving performance. Several TFCL experiments demonstrate that the proposed approach achieves state-of-the-art performance.
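The abstract's core mechanism, discrepancy-triggered component expansion, can be sketched in a few lines. This is an illustrative toy only, not the paper's actual method: all names (`MixtureModel`, `train_step`, `discrepancy`, `threshold`) are hypothetical, and the discrepancy here is a crude proxy (distance between mean feature statistics) standing in for the discrepancy distance the paper defines.

```python
import numpy as np

def discrepancy(memory_feats, knowledge_feats):
    # Crude proxy for a discrepancy distance: the Euclidean distance
    # between the mean feature vectors of the two sample sets.
    return float(np.linalg.norm(memory_feats.mean(axis=0)
                                - knowledge_feats.mean(axis=0)))

class MixtureModel:
    """Toy mixture whose components each store the features they absorbed."""
    def __init__(self):
        self.components = []  # list of (n_i, d) feature arrays

    def expand(self, memory_feats):
        # Add a new component seeded with the current memory buffer.
        self.components.append(memory_feats.copy())

    def accumulated(self):
        # All knowledge accumulated across components so far.
        return np.concatenate(self.components, axis=0)

def train_step(model, memory_feats, threshold):
    # Expansion signal: if the current memory drifts too far from the
    # already accumulated knowledge, spawn a new component; otherwise
    # the existing components keep training (omitted here).
    if not model.components:
        model.expand(memory_feats)
        return True
    if discrepancy(memory_feats, model.accumulated()) > threshold:
        model.expand(memory_feats)
        return True
    return False
```

Feeding the loop a batch drawn from a shifted distribution makes the proxy discrepancy jump past the threshold, which triggers expansion; batches from the same distribution do not, keeping the architecture compact.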
Pages: 14