Class Impression for Data-Free Incremental Learning

Cited: 2
Authors
Ayromlou, Sana [1 ]
Abolmaesumi, Purang [1 ]
Tsang, Teresa [2 ]
Li, Xiaoxiao [1 ]
Affiliations
[1] Univ British Columbia, Vancouver, BC, Canada
[2] Vancouver Gen Hosp, Vancouver, BC, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC); Canadian Institutes of Health Research (CIHR)
DOI
10.1007/978-3-031-16440-8_31
CLC classification
TP39 [Computer Applications]
Discipline codes
081203; 0835
Abstract
Standard deep learning-based classification approaches require collecting all samples from all classes in advance and are trained offline. This paradigm may not be practical in real-world clinical applications, where new classes are incrementally introduced through the addition of new data. Class incremental learning is a strategy allowing learning from such data. However, a major challenge is catastrophic forgetting, i.e., performance degradation on previous classes when adapting a trained model to new data. To alleviate this challenge, prior methodologies save a portion of training data that requires perpetual storage, which may introduce privacy issues. Here, we propose a novel data-free class incremental learning framework that first synthesizes data from the model trained on previous classes to generate a Class Impression. Subsequently, it updates the model by combining the synthesized data with new class data. Furthermore, we incorporate a cosine-normalized cross-entropy loss to mitigate the adverse effects of class imbalance, a margin loss to increase separation between previous classes and new ones, and an intra-domain contrastive loss to generalize the model trained on the synthesized data to real data. We compare our proposed framework with state-of-the-art methods in class incremental learning, where we demonstrate improvement in accuracy for the classification of 11,062 echocardiography cine series of patients. Code is available at https://github.com/sanaAyrml/Class-Impresion-for-Data-free-Incremental-Learning
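The synthesis step described in the abstract can be sketched in miniature: given a frozen model trained on previous classes, optimize an input by gradient ascent so the model assigns it to a chosen old class. This is only an illustrative toy, not the authors' implementation; a tiny linear classifier stands in for the paper's deep network, and all names (`synthesize_impression`, `W`, `b`) and hyperparameters are hypothetical.

```python
import numpy as np

# Toy sketch of the "Class Impression" idea: starting from noise,
# ascend the frozen model's log-probability of a target (previous)
# class with respect to the input itself.

rng = np.random.default_rng(0)

n_classes, n_features = 2, 4
W = rng.normal(size=(n_classes, n_features))  # frozen "previous" model
b = np.zeros(n_classes)

def logits(x):
    return W @ x + b

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def synthesize_impression(target, steps=200, lr=0.5, bound=3.0):
    """Gradient ascent on log p(target | x) w.r.t. the input x."""
    x = rng.normal(scale=0.1, size=n_features)  # start from noise
    for _ in range(steps):
        p = softmax(logits(x))
        # Gradient of the log-softmax of a linear model w.r.t. x:
        grad = W[target] - p @ W
        x = np.clip(x + lr * grad, -bound, bound)  # keep input in range
    return x

impression = synthesize_impression(target=1)
```

In the paper this synthesis is performed against a trained deep network (where the gradient comes from backpropagation rather than a closed form), and the resulting images are mixed with new-class data under the additional losses listed in the abstract.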
Pages: 320-329 (10 pages)
Related papers (50 records in total)
  • [22] Reminding the incremental language model via data-free self-distillation
    Wang, Han
    Fu, Ruiliu
    Li, Chengzhang
    Zhang, Xuejun
    Zhou, Jun
    Bai, Xing
    Yan, Yonghong
    Zhao, Qingwei
    APPLIED INTELLIGENCE, 2023, 53 (08) : 9298 - 9320
  • [23] DFRD: Data-Free Robustness Distillation for Heterogeneous Federated Learning
    Luo, Kangyang
    Wang, Shuai
    Fu, Yexuan
    Li, Xiang
    Lan, Yunshi
    Gao, Ming
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [24] A novel data-free continual learning method with contrastive reversion
    Wu, Chu
    Xie, Runshan
    Wang, Shitong
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (02) : 505 - 518
  • [25] FedGhost: Data-Free Model Poisoning Enhancement in Federated Learning
    Ma, Zhuoran
    Huang, Xinyi
    Wang, Zhuzhu
    Qin, Zhan
    Wang, Xiangyu
    Ma, Jianfeng
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 2096 - 2108
  • [26] Latent Coreset Sampling based Data-Free Continual Learning
    Wang, Zhuoyi
    Li, Dingcheng
    Li, Ping
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 2078 - 2087
  • [28] DENSE: Data-Free One-Shot Federated Learning
    Zhang, Jie
    Chen, Chen
    Li, Bo
    Lyu, Lingjuan
    Wu, Shuang
    Ding, Shouhong
    Shen, Chunhua
    Wu, Chao
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [29] Leveraging joint incremental learning objective with data ensemble for class incremental learning
    Mazumder, Pratik
    Karim, Mohammed Asad
    Joshi, Indu
    Singh, Pravendra
    NEURAL NETWORKS, 2023, 161 : 202 - 212
  • [30] The Complexity of Data-Free Nfer
    Kauffman, Sean
    Larsen, Kim Guldstrand
    Zimmermann, Martin
    RUNTIME VERIFICATION, RV 2024, 2025, 15191 : 174 - 191