Complementary Calibration: Boosting General Continual Learning With Collaborative Distillation and Self-Supervision

Cited by: 7
Authors
Ji, Zhong [1 ,2 ]
Li, Jin [1 ,2 ]
Wang, Qiang [1 ,2 ]
Zhang, Zhongfei [3 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Tianjin Univ, Tianjin Key Lab Brain Inspired Intelligence Techno, Tianjin 300072, Peoples R China
[3] SUNY Binghamton, Dept Comp Sci, Binghamton, NY 13902 USA
Funding
National Natural Science Foundation of China;
Keywords
General continual learning; complementary calibration; knowledge distillation; self-supervised learning; supervised contrastive learning;
DOI
10.1109/TIP.2022.3230457
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Numbers
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
General Continual Learning (GCL) aims at learning from non-independent and identically distributed (non-i.i.d.) stream data without catastrophic forgetting of old tasks, and without relying on task boundaries during either the training or the testing stage. We reveal that relation deviation and feature deviation are crucial causes of catastrophic forgetting, where relation deviation refers to the deficiency of the relationship among all classes in knowledge distillation, and feature deviation refers to indiscriminative feature representations. To this end, we propose a Complementary Calibration (CoCa) framework that mines the complementary outputs and features of the model to alleviate the two deviations in the process of GCL. Specifically, we propose a new collaborative distillation approach to address the relation deviation. It distills the model's outputs by utilizing the ensemble dark knowledge of the new model's outputs and the reserved outputs, which maintains the performance on old tasks as well as balances the relationship among all classes. Furthermore, we explore a collaborative self-supervision idea that leverages pretext tasks and supervised contrastive learning to address the feature deviation problem by learning complete and discriminative features for all classes. Extensive experiments on six popular datasets show that our CoCa framework achieves superior performance against state-of-the-art methods.
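The collaborative distillation idea described above can be illustrated with a minimal sketch: a teacher distribution is formed as an ensemble of the new model's softened outputs and the reserved (buffered) outputs, and the student is pulled toward it via a KL divergence. This is only a plausible reading of the abstract, not the paper's implementation; the mixing weight `alpha`, temperature `tau`, and the names `reserved_logits`, `collaborative_distillation_loss` are assumptions introduced here for illustration.

```python
import numpy as np

def softmax(logits, tau=1.0):
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    z = logits / tau
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def collaborative_distillation_loss(student_logits, new_logits, reserved_logits,
                                    tau=2.0, alpha=0.5, eps=1e-12):
    """KL(teacher || student), where the teacher is an ensemble of the new
    model's softened outputs and the reserved (stored) softened outputs,
    mixed with weight alpha. All arguments are illustrative placeholders."""
    teacher = alpha * softmax(new_logits, tau) + (1.0 - alpha) * softmax(reserved_logits, tau)
    student = softmax(student_logits, tau)
    kl = np.sum(teacher * (np.log(teacher + eps) - np.log(student + eps)), axis=-1)
    return float(kl.mean())
```

When the student's outputs already match the ensemble, the loss vanishes; any mismatch yields a positive penalty, pulling the student toward the combined dark knowledge of both output sources.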
Pages: 657 - 667
Page count: 11
Related Papers
50 records in total
  • [21] Non-Prehensile Manipulation Learning through Self-Supervision
    Gao, Ziyan
    Elibol, Armagan
    Chong, Nak Young
    2020 FOURTH IEEE INTERNATIONAL CONFERENCE ON ROBOTIC COMPUTING (IRC 2020), 2020, : 93 - 99
  • [22] Learning multi-view visual correspondences with self-supervision
    Zhang, Pengcheng
    Zhou, Lei
    Bai, Xiao
    Wang, Chen
    Zhou, Jun
    Zhang, Liang
    Zheng, Jin
    DISPLAYS, 2022, 72
  • [23] FedGL: Federated graph learning framework with global self-supervision
    Chen, Chuan
    Xu, Ziyue
    Hu, Weibo
    Zheng, Zibin
    Zhang, Jie
    INFORMATION SCIENCES, 2024, 657
  • [24] Learning Unsupervised Visual Grounding Through Semantic Self-Supervision
    Javed, Syed Ashar
    Saxena, Shreyas
    Gandhi, Vineet
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 796 - 802
  • [25] DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision
    Wallin, Erik
    Svensson, Lennart
    Kahl, Fredrik
    Hammarstrand, Lars
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2871 - 2877
  • [26] Audio-Visual Contrastive Learning with Temporal Self-Supervision
    Jenni, Simon
    Black, Alexander
    Collomosse, John
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 7, 2023, : 7996 - 8004
  • [27] Offline Meta-Reinforcement Learning with Online Self-Supervision
    Pong, Vitchyr H.
    Nair, Ashvin
    Smith, Laura
    Huang, Catherine
    Levine, Sergey
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [28] SKGCR: self-supervision enhanced knowledge-aware graph collaborative recommendation
    Liu, Xiangkun
    Yang, Bo
    Xu, Jingyu
    Applied Intelligence, 2023, 53 : 19872 - 19891
  • [29] Self-Supervised Self-Supervision by Combining Deep Learning and Probabilistic Logic
    Lang, Hunter
    Poon, Hoifung
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 4978 - 4986
  • [30] Learning to recognize while learning to speak: Self-supervision and developing a speaking motor
    Wu, Xiang
    Weng, Juyang
    NEURAL NETWORKS, 2021, 143 : 28 - 41