Preempting Catastrophic Forgetting in Continual Learning Models by Anticipatory Regularization

Cited: 0
Authors
El Khatib, Alaa [1 ]
Karray, Fakhri [1 ]
Affiliations
[1] Univ Waterloo, Elect & Comp Engn, Waterloo, ON, Canada
Keywords
DOI
10.1109/ijcnn.2019.8852426
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Neural networks trained on tasks sequentially tend to degrade in performance, on average, as they see more tasks, because the representations learned for one task are progressively modified while learning subsequent tasks. This phenomenon, known as catastrophic forgetting, is a major obstacle on the road toward designing agents that can continually learn new concepts and tasks the way, say, humans do. A common approach to containing catastrophic forgetting is to use regularization to slow down learning on weights deemed important to previously learned tasks. We argue in this paper that, on their own, such post hoc measures to safeguard what has been learned can, even in their more sophisticated variants, paralyze the network and degrade its capacity to learn and to counter forgetting as the number of tasks grows. We propose instead (or possibly in conjunction) that, in anticipation of future tasks, regularization be applied to drive the optimization of network weights toward reusable solutions. We show that one way to achieve this is through an auxiliary unsupervised reconstruction loss that encourages the learned representations not only to be useful for solving the current classification task, but also to reflect the content of the data being processed, content that is generally richer than what is discriminative for any one task. We compare our approach to the recent elastic weight consolidation (EWC) regularization approach and show that, although we do not explicitly try to preserve important weights or pass on any information about the data distribution of previously learned tasks, our model is comparable in performance, and in some cases better.
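The abstract describes the approach only at a high level, so the following is a minimal sketch of how an auxiliary reconstruction regularizer of this general kind is commonly wired up, not the authors' implementation: a shared encoder feeds both the task classifier and an auxiliary decoder, and the training objective adds an unsupervised reconstruction term to the supervised task loss. The module names, layer sizes, and the weighting factor lambda_recon are illustrative assumptions.

```python
# Sketch (assumed architecture, not the paper's code): a shared encoder with a
# classification head and an auxiliary reconstruction head. The reconstruction
# term pushes the representation to retain input content beyond what the
# current task alone requires.
import torch.nn as nn
import torch.nn.functional as F


class AnticipatoryNet(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=256, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden_dim, num_classes)
        self.decoder = nn.Sequential(  # auxiliary reconstruction head
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)


def combined_loss(model, x, y, lambda_recon=1.0):
    """Supervised task loss plus an unsupervised reconstruction regularizer."""
    logits, x_hat = model(x)
    task_loss = F.cross_entropy(logits, y)
    recon_loss = F.mse_loss(x_hat, x)
    return task_loss + lambda_recon * recon_loss
```

Unlike EWC, which penalizes changes to weights in proportion to a per-task importance estimate computed after each task, a reconstruction term of this form is task-agnostic and stores no statistics about previous tasks, which is the contrast the abstract draws.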
Pages: 7
Related Papers
50 records in total
  • [41] Representation Space Maintenance: Against Forgetting in Continual Learning
    Niu, Rui; Wu, Zhiyong; Song, Changhe
    2024 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2024, 2024
  • [42] Mitigating Forgetting in Online Continual Learning with Neuron Calibration
    Yin, Haiyan; Yang, Peng; Li, Ping
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [43] Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning
    Ke, Zixuan; Liu, Bing; Ma, Nianzu; Xu, Hu; Shu, Lei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [44] A Continual Learning Survey: Defying Forgetting in Classification Tasks
    De Lange, Matthias; Aljundi, Rahaf; Masana, Marc; Parisot, Sarah; Jia, Xu; Leonardis, Ales; Slabaugh, Greg; Tuytelaars, Tinne
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (07): 3366-3385
  • [45] AFEC: Active Forgetting of Negative Transfer in Continual Learning
    Wang, Liyuan; Zhang, Mingtian; Jia, Zhongfan; Li, Qian; Ma, Kaisheng; Bao, Chenglong; Zhu, Jun; Zhong, Yi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [46] Probing Representation Forgetting in Supervised and Unsupervised Continual Learning
    Davari, MohammadReza; Asadi, Nader; Mudur, Sudhir; Aljundi, Rahaf; Belilovsky, Eugene
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022: 16691-16700
  • [47] Controlling Conditional Language Models without Catastrophic Forgetting
    Korbak, Tomasz; Elsahar, Hady; Kruszewski, German; Dymetman, Marc
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [48] Investigating the Catastrophic Forgetting in Multimodal Large Language Models
    Zhai, Yuexiang; Tong, Shengbang; Li, Xiao; Cai, Mu; Qu, Qing; Lee, Yong Jae; Ma, Yi
    CONFERENCE ON PARSIMONY AND LEARNING, VOL 234, 2024, 234: 202-227
  • [49] State Primitive Learning to Overcome Catastrophic Forgetting in Robotics
    Xiong, Fangzhou; Liu, Zhiyong; Huang, Kaizhu; Yang, Xu; Qiao, Hong
    COGNITIVE COMPUTATION, 2021, 13 (02): 394-402