Complementary Calibration: Boosting General Continual Learning With Collaborative Distillation and Self-Supervision

Cited by: 7
Authors
Ji, Zhong [1 ,2 ]
Li, Jin [1 ,2 ]
Wang, Qiang [1 ,2 ]
Zhang, Zhongfei [3 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Tianjin Univ, Tianjin Key Lab Brain Inspired Intelligence Techno, Tianjin 300072, Peoples R China
[3] SUNY Binghamton, Dept Comp Sci, Binghamton, NY 13902 USA
Funding
National Natural Science Foundation of China;
Keywords
General continual learning; complementary calibration; knowledge distillation; self-supervised learning; supervised contrastive learning;
DOI
10.1109/TIP.2022.3230457
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
General Continual Learning (GCL) aims to learn from non-independent and identically distributed (non-i.i.d.) stream data without catastrophic forgetting of old tasks, and without relying on task boundaries during either the training or the testing stage. We reveal that relation deviation and feature deviation are crucial causes of catastrophic forgetting, where relation deviation refers to the deficiency of the relationship among all classes in knowledge distillation, and feature deviation refers to indiscriminative feature representations. To this end, we propose a Complementary Calibration (CoCa) framework that mines the complementary outputs and features of the model to alleviate the two deviations during GCL. Specifically, we propose a new collaborative distillation approach to address the relation deviation. It distills the model's outputs by utilizing the ensemble dark knowledge of the new model's outputs and the reserved outputs, which maintains performance on old tasks while balancing the relationship among all classes. Furthermore, we explore a collaborative self-supervision idea that leverages pretext tasks and supervised contrastive learning to address the feature deviation problem by learning complete and discriminative features for all classes. Extensive experiments on six popular datasets show that our CoCa framework achieves superior performance against state-of-the-art methods.
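The collaborative distillation described above can be illustrated with a minimal sketch. This is not the authors' implementation: it only assumes that the "ensemble dark knowledge" is formed by averaging the temperature-softened distributions of the new model's outputs and the reserved (buffered) outputs, and that the student is trained to match that ensemble via a KL-divergence term. The function and variable names are hypothetical.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def collaborative_distillation_loss(student_logits, new_logits,
                                    reserved_logits, T=2.0):
    """KL(teacher || student), where the teacher distribution is the
    average of the new model's softened outputs and the reserved
    softened outputs (a simple stand-in for the ensemble dark
    knowledge in the abstract)."""
    teacher = 0.5 * (softmax(new_logits, T) + softmax(reserved_logits, T))
    student = softmax(student_logits, T)
    eps = 1e-12  # guard against log(0)
    return float(np.sum(teacher * (np.log(teacher + eps)
                                   - np.log(student + eps))))
```

When the student distribution matches the ensemble, the loss is zero; the further it drifts from the averaged old/new knowledge, the larger the penalty, which is the mechanism meant to balance the class relationships while preserving old-task behavior.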
Pages: 657-667 (11 pages)