Contrastive Correlation Preserving Replay for Online Continual Learning

Cited by: 8
Authors
Yu, Da [1 ]
Zhang, Mingyi [2 ,3 ]
Li, Mantian [1 ]
Zha, Fusheng [1 ]
Zhang, Junge [2 ,3 ]
Sun, Lining [1 ]
Huang, Kaiqi [2 ,3 ,4 ]
Affiliations
[1] Harbin Inst Technol HIT, State Key Lab Robot & Syst, Harbin 150080, Peoples R China
[2] Chinese Acad Sci CASIA, Inst Automat, Ctr Res Intelligent Syst & Engn, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[4] CAS Ctr Excellence Brain Sci & Intelligence Techno, Shanghai 200031, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Correlation; Knowledge transfer; Training; Memory management; Data models; Mutual information; Continual learning; catastrophic forgetting; class-incremental learning; experience replay; KNOWLEDGE;
DOI
10.1109/TCSVT.2023.3285221
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic & Communication Technology];
Subject Classification Code
0808 ; 0809 ;
Abstract
Online Continual Learning (OCL), as a core step towards achieving human-level intelligence, aims to incrementally learn and accumulate novel concepts from streaming data that can be seen only once, while alleviating catastrophic forgetting of previously acquired knowledge. Under this setting, the model must learn new classes or tasks in an online manner, and the data distribution may change over time. Moreover, task boundaries and identities are not available during training or evaluation. To balance the stability and plasticity of networks, in this work we propose a replay-based framework for OCL, named Contrastive Correlation Preserving Replay (CCPR), which focuses not only on individual instances but also on correlations between multiple instances. Specifically, besides the previous raw samples, the corresponding representations are stored in the memory and used to construct correlations for the past and the current model. To better capture correlations and higher-order dependencies, we maximize a lower bound on the mutual information between the past and current correlations by leveraging contrastive objectives. Furthermore, to improve performance, we propose a new memory update strategy that simultaneously encourages balance and diversity among the samples within the memory. With limited memory slots, it retains less redundant and more representative samples for later replay. We conduct extensive evaluations on several popular CL datasets, and experiments show that our method consistently outperforms state-of-the-art methods and can effectively consolidate knowledge to alleviate forgetting.
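The two mechanisms the abstract describes — an InfoNCE-style contrastive lower bound on the mutual information between the past model's and current model's correlation structures, and a class-balanced, diversity-aware memory update — can be sketched roughly as follows. This is an illustrative sketch only; all function names, the row-wise positive-pairing scheme, the temperature value, and the farthest-point selection heuristic are assumptions of this sketch, not the authors' actual implementation.

```python
import numpy as np

def correlation_matrix(feats):
    """Pairwise cosine similarities between instance representations."""
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    return f @ f.T

def contrastive_correlation_loss(corr_past, corr_now, tau=0.1):
    """InfoNCE-style lower bound on MI between past and current correlations.

    Row i of the past correlation matrix is treated as the positive for
    row i of the current one; all other rows serve as negatives.
    Minimizing this loss maximizes the contrastive MI lower bound.
    """
    n = corr_now.shape[0]
    p = corr_past / (np.linalg.norm(corr_past, axis=1, keepdims=True) + 1e-8)
    q = corr_now / (np.linalg.norm(corr_now, axis=1, keepdims=True) + 1e-8)
    logits = (q @ p.T) / tau
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), np.arange(n)].mean()  # -mean log softmax_ii

def balanced_diverse_subset(feats, labels, slots_per_class):
    """Pick a class-balanced memory subset; within each class, greedily
    keep the most mutually distant samples (farthest-point selection)."""
    keep = []
    for c in np.unique(labels):
        idx = list(np.flatnonzero(labels == c))
        chosen = [idx.pop(0)]
        while len(chosen) < slots_per_class and idx:
            # distance from each candidate to its nearest already-chosen sample
            dmin = [min(np.linalg.norm(feats[i] - feats[j]) for j in chosen)
                    for i in idx]
            chosen.append(idx.pop(int(np.argmax(dmin))))
        keep.extend(chosen)
    return np.array(keep)
```

Under this sketch, the loss is small when the current model reproduces the stored correlation structure row-for-row, and grows when the rows are mismatched, which is the sense in which replaying correlations (rather than only raw samples) preserves relational knowledge.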
Pages: 124-139 (16 pages)
Related Papers
50 records
  • [1] HPCR: Holistic Proxy-Based Contrastive Replay for Online Continual Learning
    Lin, Huiwei
    Feng, Shanshan
    Zhang, Baoquan
    Li, Xutao
    Ye, Yunming
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025,
  • [2] CeCR: Cross-entropy contrastive replay for online class-incremental continual learning
    Sun, Guanglu
    Ji, Baolun
    Liang, Lili
    Chen, Minghui
    NEURAL NETWORKS, 2024, 173
  • [3] PCR: Proxy-based Contrastive Replay for Online Class-Incremental Continual Learning
    Lin, Huiwei
    Zhang, Baoquan
    Feng, Shanshan
    Li, Xutao
    Ye, Yunming
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 24246 - 24255
  • [4] Online Continual Learning with Contrastive Vision Transformer
    Wang, Zhen
    Liu, Liu
    Kong, Yajing
    Guo, Jiaxian
    Tao, Dacheng
    COMPUTER VISION, ECCV 2022, PT XX, 2022, 13680 : 631 - 650
  • [5] Supervised Contrastive Replay: Revisiting the Nearest Class Mean Classifier in Online Class-Incremental Continual Learning
    Mai, Zheda
    Li, Ruiwen
    Kim, Hyunwoo
    Sanner, Scott
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 3584 - 3594
  • [6] Selective Replay Enhances Learning in Online Continual Analogical Reasoning
    Hayes, Tyler L.
    Kanan, Christopher
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 3497 - 3507
  • [7] CONTRASTIVE LEARNING FOR ONLINE SEMI-SUPERVISED GENERAL CONTINUAL LEARNING
    Michel, Nicolas
    Negrel, Romain
    Chierchia, Giovanni
    Bercher, Jean-Francois
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1896 - 1900
  • [8] Chameleon: Dual Memory Replay for Online Continual Learning on Edge Devices
    Aggarwal, Shivam
    Binici, Kuluhan
    Mitra, Tulika
    2023 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION, DATE, 2023,
  • [9] ACAE-REMIND for online continual learning with compressed feature replay
    Wang, Kai
    van de Weijer, Joost
    Herranz, Luis
    PATTERN RECOGNITION LETTERS, 2021, 150 : 122 - 129
  • [10] Chameleon: Dual Memory Replay for Online Continual Learning on Edge Devices
    Aggarwal, Shivam
    Binici, Kuluhan
    Mitra, Tulika
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, 43 (06) : 1663 - 1676