CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning

Cited by: 64
Authors
Smith, James Seale [1 ,2 ]
Karlinsky, Leonid [2 ,4 ]
Gutta, Vyshnavi [1 ]
Cascante-Bonilla, Paola [2 ,3 ]
Kim, Donghyun [2 ,4 ]
Arbelle, Assaf [4 ]
Panda, Rameswar [2 ,4 ]
Feris, Rogerio [2 ,4 ]
Kira, Zsolt [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] MIT, IBM Watson AI Lab, Cambridge, MA 02139 USA
[3] Rice Univ, Houston, TX USA
[4] IBM Res, Armonk, NY USA
DOI
10.1109/CVPR52729.2023.01146
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Computer vision models suffer from a phenomenon known as catastrophic forgetting when learning novel concepts from continuously shifting training data. Typical solutions for this continual learning problem require extensive rehearsal of previously seen data, which increases memory costs and may violate data privacy. Recently, the emergence of large-scale pre-trained vision transformer models has enabled prompting approaches as an alternative to data rehearsal. These approaches rely on a key-query mechanism to generate prompts and have been found to be highly resistant to catastrophic forgetting in the well-established rehearsal-free continual learning setting. However, the key mechanism of these methods is not trained end-to-end with the task sequence. Our experiments show that this reduces their plasticity, sacrificing new-task accuracy, and prevents them from benefiting from expanded parameter capacity. We instead propose to learn a set of prompt components which are assembled with input-conditioned weights to produce input-conditioned prompts, resulting in a novel attention-based end-to-end key-query scheme. Our experiments show that we outperform the current SOTA method DualPrompt on established benchmarks by as much as 4.5% in average final accuracy. We also outperform the state of the art by as much as 4.4% accuracy on a continual learning benchmark which contains both class-incremental and domain-incremental task shifts, corresponding to many practical settings. Our code is available at https://github.com/GT-RIPL/CODA-Prompt
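The abstract's core idea, assembling a pool of learnable prompt components with input-conditioned, attention-weighted coefficients, can be illustrated with a short sketch. The snippet below is a minimal, illustrative PyTorch implementation under our own assumptions, not the official code: the module name DecomposedPrompt, the hyperparameters num_components, prompt_len, and embed_dim, and the exact layer placement are hypothetical; only the general scheme (per-component keys and attention vectors, cosine-similarity weights, weighted sum of components, all trained end to end) follows the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DecomposedPrompt(nn.Module):
    """Minimal sketch of attention-based prompt decomposition (illustrative names).

    Each prompt component P[c] has a key K[c] and an attention vector A[c].
    A query (e.g. a frozen ViT's [CLS] feature) is modulated by A[c], matched
    against K[c] with cosine similarity to give a weight alpha[c], and the
    assembled prompt is sum_c alpha[c] * P[c]. All operations are
    differentiable, so keys, attention vectors, and components train end to end.
    """

    def __init__(self, num_components: int = 100, prompt_len: int = 8, embed_dim: int = 768):
        super().__init__()
        self.P = nn.Parameter(torch.randn(num_components, prompt_len, embed_dim) * 0.02)
        self.K = nn.Parameter(torch.randn(num_components, embed_dim) * 0.02)
        self.A = nn.Parameter(torch.randn(num_components, embed_dim) * 0.02)

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (batch, embed_dim) features from the frozen backbone
        attended = query.unsqueeze(1) * self.A.unsqueeze(0)    # (B, C, D)
        q_hat = F.normalize(attended, dim=-1)                  # cosine similarity via
        k_hat = F.normalize(self.K, dim=-1)                    # normalized dot products
        alpha = torch.einsum('bcd,cd->bc', q_hat, k_hat)       # (B, C) component weights
        prompt = torch.einsum('bc,cld->bld', alpha, self.P)    # (B, L, D) assembled prompt
        return prompt


# Hypothetical usage: the assembled prompt would be prepended to the token
# sequence of a frozen pre-trained ViT (e.g. as prefixes in selected layers).
module = DecomposedPrompt()
cls_feature = torch.randn(4, 768)   # stand-in for frozen [CLS] features
prompt = module(cls_feature)        # shape: (4, 8, 768)
```

Because the weights alpha are produced by differentiable operations rather than a hard top-k prompt lookup, the whole key-query mechanism can be optimized jointly with the task sequence, which is the end-to-end property the abstract contrasts against prior prompting methods.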
Pages: 11909 - 11919 (11 pages)
Related papers (38 in total)
  • [21] Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality
    Wang, Liyuan
    Xie, Jingyi
    Zhang, Xingxing
    Huang, Mingyi
    Su, Hang
    Zhu, Jun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [22] Smaller is Better: An Analysis of Instance Quantity/Quality Trade-off in Rehearsal-based Continual Learning
    Pelosin, Francesco
    Torsello, Andrea
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [23] Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization
    Pelosin, Francesco
    Jha, Saurav
    Torsello, Andrea
    Raducanu, Bogdan
    van de Weijer, Joost
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 3819 - 3828
  • [24] Category-instance distillation based on visual-language models for rehearsal-free class incremental learning
    Jin, Weilong
    Wang, Zilei
    Zhang, Yixin
    IET COMPUTER VISION, 2024,
  • [25] Similarity-Based Adaptation for Task-Aware and Task-Free Continual Learning
    Adel, Tameem
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2024, 80 : 377 - 417
  • [27] Gradient-based Editing of Memory Examples for Online Task-free Continual Learning
    Jin, Xisen
    Sadhu, Arka
    Du, Junyi
    Ren, Xiang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [28] Knowledge-guided prompt-based continual learning: Aligning task-prompts through contrastive hard negatives
    Lu, Heng-yang
    Lin, Long-kang
    Fan, Chenyou
    Wang, Chongjun
    Fang, Wei
    Wu, Xiao-jun
    KNOWLEDGE-BASED SYSTEMS, 2025, 310
  • [29] Continual learning for cross-modal image-text retrieval based on domain-selective attention
    Yang, Rui
    Wang, Shuang
    Gu, Yu
    Wang, Jihui
    Sun, Yingzhi
    Zhang, Huan
    Liao, Yu
    Jiao, Licheng
    PATTERN RECOGNITION, 2024, 149
  • [30] Decoding BatchNorm statistics via anchors pool for data-free models based on continual learning
    Li, Xiaobin
    Wang, Weiqiang
    Xu, Guangluan
    NEURAL COMPUTING AND APPLICATIONS, 2025, 37 (6) : 5039 - 5055