Imbalance Mitigation for Continual Learning via Knowledge Decoupling and Dual Enhanced Contrastive Learning

Cited by: 1
Authors:
Ji, Zhong [1 ,2 ]
Jiao, Zhanyu [1 ]
Wang, Qiang [1 ]
Pang, Yanwei [1 ,2 ]
Han, Jungong [3 ]
Affiliations:
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Shanghai Artificial Intelligence Lab, Shanghai 200232, Peoples R China
[3] Univ Sheffield, Dept Comp Sci, Sheffield S10 2TG, S Yorkshire, England
Funding:
National Natural Science Foundation of China;
Keywords:
Catastrophic forgetting; continual learning (CL); experience replay (ER); image classification;
DOI:
10.1109/TNNLS.2023.3347477
CLC Number:
TP18 [Artificial Intelligence Theory];
Discipline Codes:
081104; 0812; 0835; 1405;
Abstract:
Continual learning (CL) studies how to learn new knowledge continually from a data stream without catastrophically forgetting previously acquired knowledge. The central obstacle is catastrophic forgetting: a model's performance on earlier tasks degrades sharply once it is trained on subsequent tasks. Many methods address this by replaying samples stored in a buffer while training on new tasks. However, the data imbalance between the scarce old-task samples and the abundant new-task samples causes two serious problems: information suppression and weak feature discriminability. The former means that the information carried by the plentiful new-task samples suppresses that of the old-task samples; the resulting biased outputs break the consistency between a sample's outputs at different training moments, which harms knowledge retention. The latter means that the feature representation becomes biased toward the new task and lacks the discriminability to separate old and new tasks. To this end, we build an imbalance mitigation for CL (IMCL) framework that incorporates a decoupled knowledge distillation (DKD) approach and a dual enhanced contrastive learning (DECL) approach to tackle the two problems. Specifically, DKD alleviates the suppression of old tasks by the new task by decoupling the model's output probabilities during the replay stage, thereby better preserving old-task knowledge. DECL enhances both low- and high-level features and fuses the enhanced features to construct a contrastive loss that effectively distinguishes different tasks. Extensive experiments on three popular datasets show that our method achieves promising performance under the task incremental learning (Task-IL), class incremental learning (Class-IL), and domain incremental learning (Domain-IL) settings.
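The abstract's two components can be made concrete with short sketches. First, a minimal sketch of decoupled distillation on replayed samples: the softened output distribution is split into a target part and a non-target part, following the common decoupled-KD formulation, so that the abundant new-task signal cannot dominate both terms at once. The function name and the hyperparameters alpha, beta, and T are illustrative assumptions, not values from the paper.

import torch
import torch.nn.functional as F

def decoupled_kd_loss(student_logits, teacher_logits, labels,
                      alpha=1.0, beta=1.0, T=2.0):
    """Decoupled distillation on replayed samples (hypothetical sketch)."""
    # Boolean one-hot mask marking each sample's ground-truth class.
    mask = F.one_hot(labels, student_logits.size(1)).bool()

    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)

    # Target part: KL between the binary (target vs. rest) distributions.
    bin_s = torch.stack([p_s[mask], 1.0 - p_s[mask]], dim=1).clamp_min(1e-8)
    bin_t = torch.stack([p_t[mask], 1.0 - p_t[mask]], dim=1)
    tckd = F.kl_div(bin_s.log(), bin_t, reduction="batchmean")

    # Non-target part: KL between the renormalised distributions over the
    # remaining classes (target logit suppressed before the softmax), so
    # old-class structure is matched independently of target probability.
    s_rest = student_logits / T - 1e9 * mask.float()
    t_rest = teacher_logits / T - 1e9 * mask.float()
    nckd = F.kl_div(F.log_softmax(s_rest, dim=1),
                    F.softmax(t_rest, dim=1),
                    reduction="batchmean")

    # T*T restores gradient magnitude after temperature scaling.
    return (alpha * tckd + beta * nckd) * T * T

Second, one illustrative reading of the dual enhanced contrastive loss: low- and high-level features are each passed through a small projection head (the "enhancement"), fused by concatenation, and used in a supervised contrastive objective over the joint batch of new and replayed samples. The projection heads proj_low and proj_high, the fusion by concatenation, and the temperature tau are assumptions; the paper's exact enhancement and fusion operators may differ.

def fused_supcon_loss(low_feat, high_feat, labels,
                      proj_low, proj_high, tau=0.1):
    """Supervised contrastive loss on fused low-/high-level features
    (hypothetical sketch; reuses the imports above)."""
    # Enhance each level with its own projection head, fuse by
    # concatenation, and L2-normalise the joint embedding.
    z = torch.cat([proj_low(low_feat), proj_high(high_feat)], dim=1)
    z = F.normalize(z, dim=1)

    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = (z @ z.t() / tau).masked_fill(eye, float("-inf"))    # drop self-pairs
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye  # same-class pairs

    # Per-anchor mean log-likelihood of its positives; anchors with no
    # positive in the batch are skipped.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    n_pos = pos.sum(dim=1)
    valid = n_pos > 0
    loss = -log_prob.masked_fill(~pos, 0.0)[valid].sum(dim=1) / n_pos[valid]
    return loss.mean()

In a replay step, both terms would plausibly be added to the usual cross-entropy on the mixed batch, e.g. loss = ce + w1 * decoupled_kd_loss(...) + w2 * fused_supcon_loss(...), where w1 and w2 are assumed weighting coefficients.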
Pages: 3450-3463
Number of pages: 14
Related Papers (50 records in total; first 10 listed)
  • [1] Exemplar-based Continual Learning via Contrastive Learning
    Chen, S.; Zhang, M.; Zhang, J.; Huang, K.
    IEEE Transactions on Artificial Intelligence, 2024, 5(7): 1-12
  • [2] Zero-Shot Learning via Contrastive Learning on Dual Knowledge Graphs
    Wang, Jin; Jiang, Bo
    2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021), 2021: 885-892
  • [3] Dual-Mode Contrastive Learning-Enhanced Knowledge Tracing
    Huang, Danni; Yu, Jicheng; Mao, Shun; Li, Jiawei; Jiang, Yuncheng
    PRICAI 2024: Trends in Artificial Intelligence, Pt I, 2025, 15281: 81-92
  • [4] Latent Bias Mitigation via Contrastive Learning
    Gao, Yue; Zhang, Shu; Sun, Jun; Yu, Shanshan; Yoshii, Akihito
    Proceedings of the 5th International Conference on Artificial Intelligence in Electronics Engineering (AIEE 2024), 2024: 42-47
  • [5] Learning Rules in Knowledge Graphs via Contrastive Learning
    Feng, Xiaoyang; Liu, Xueli; Yang, Yajun; Wang, Wenjun; Wang, Jun
    Database Systems for Advanced Applications (DASFAA 2024), Pt IV, 2024, 14853: 408-424
  • [6] Margin Contrastive Learning with Learnable-Vector for Continual Learning
    Nagata, Kotaro; Hotta, Kazuhiro
    2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2023: 3562-3568
  • [7] Federated Continual Learning via Knowledge Fusion: A Survey
    Yang, Xin; Yu, Hao; Gao, Xin; Wang, Hao; Zhang, Junbo; Li, Tianrui
    IEEE Transactions on Knowledge and Data Engineering, 2024, 36(8): 3832-3850
  • [8] Optimizing Reusable Knowledge for Continual Learning via Metalearning
    Hurtado, Julio; Raymond-Saez, Alain; Soto, Alvaro
    Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34
  • [9] Contrastive Supervised Distillation for Continual Representation Learning
    Barletti, Tommaso; Biondi, Niccolo; Pernici, Federico; Bruni, Matteo; Del Bimbo, Alberto
    Image Analysis and Processing (ICIAP 2022), Pt I, 2022, 13231: 597-609
  • [10] Online Continual Learning with Contrastive Vision Transformer
    Wang, Zhen; Liu, Liu; Kong, Yajing; Guo, Jiaxian; Tao, Dacheng
    Computer Vision (ECCV 2022), Pt XX, 2022, 13680: 631-650