CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning

Cited by: 64
Authors
Smith, James Seale [1 ,2 ]
Karlinsky, Leonid [2 ,4 ]
Gutta, Vyshnavi [1 ]
Cascante-Bonilla, Paola [2 ,3 ]
Kim, Donghyun [2 ,4 ]
Arbelle, Assaf [4 ]
Panda, Rameswar [2 ,4 ]
Feris, Rogerio [2 ,4 ]
Kira, Zsolt [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] MIT, IBM Watson AI Lab, Cambridge, MA 02139 USA
[3] Rice Univ, Houston, TX USA
[4] IBM Res, Armonk, NY USA
Keywords
DOI
10.1109/CVPR52729.2023.01146
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Computer vision models suffer from a phenomenon known as catastrophic forgetting when learning novel concepts from continuously shifting training data. Typical solutions for this continual learning problem require extensive rehearsal of previously seen data, which increases memory costs and may violate data privacy. Recently, the emergence of large-scale pre-trained vision transformer models has enabled prompting approaches as an alternative to data-rehearsal. These approaches rely on a key-query mechanism to generate prompts and have been found to be highly resistant to catastrophic forgetting in the well-established rehearsal-free continual learning setting. However, the key mechanism of these methods is not trained end-to-end with the task sequence. Our experiments show that this leads to a reduction in their plasticity, hence sacrificing new task accuracy, and an inability to benefit from expanded parameter capacity. We instead propose to learn a set of prompt components which are assembled with input-conditioned weights to produce input-conditioned prompts, resulting in a novel attention-based end-to-end key-query scheme. Our experiments show that we outperform the current SOTA method DualPrompt on established benchmarks by as much as 4.5% in average final accuracy. We also outperform the state of the art by as much as 4.4% accuracy on a continual learning benchmark which contains both class-incremental and domain-incremental task shifts, corresponding to many practical settings. Our code is available at https://github.com/GT-RIPL/CODA-Prompt
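The abstract's core mechanism — assembling learnable prompt components into an input-conditioned prompt via attention-weighted key matching — can be sketched roughly as follows. This is a minimal illustration based only on the abstract's description; the function name, tensor shapes, and the use of cosine similarity for the weighting are assumptions, not the paper's exact implementation:

```python
import numpy as np

def coda_prompt(query, keys, attn_vecs, components):
    """Sketch of CODA-Prompt-style weighted prompt assembly.

    query:      (d,)        query feature q(x), e.g. from a frozen ViT
    keys:       (M, d)      learnable keys, one per prompt component
    attn_vecs:  (M, d)      learnable feature-wise attention vectors
    components: (M, Lp, d)  learnable prompt components
    returns:    (Lp, d)     input-conditioned prompt
    """
    # Feature-wise attention: modulate the query separately per component.
    attended = query[None, :] * attn_vecs                          # (M, d)
    # Cosine similarity between each attended query and its key
    # yields one scalar weight per component.
    num = (attended * keys).sum(axis=1)
    denom = (np.linalg.norm(attended, axis=1)
             * np.linalg.norm(keys, axis=1) + 1e-8)
    alpha = num / denom                                            # (M,)
    # Assemble the prompt as a weighted sum of the components.
    return np.tensordot(alpha, components, axes=1)                 # (Lp, d)
```

Because the weights are produced by a differentiable attention operation rather than a hard nearest-key lookup, gradients flow to the keys and attention vectors, which is what allows the key-query scheme to be trained end-to-end with the task sequence.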
Pages: 11909-11919
Page count: 11
Related papers
38 items total
  • [31] Evolving Ensemble Model based on Hilbert Schmidt Independence Criterion for task-free continual learning
    Ye, Fei
    Bors, Adrian G.
    NEUROCOMPUTING, 2025, 624
  • [32] Exemplar-Free Continual Learning of Vision Transformers via Gated Class-Attention and Cascaded Feature Drift Compensation
    Cotogni, Marco
    Yang, Fei
    Cusano, Claudio
    Bagdanov, Andrew D.
    van de Weijer, Joost
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025,
  • [33] Imbalanced-Free Memory Selection Scheme Based Continual Learning by Using K-means Clustering
    Lee, Changha
    Jeon, Minsu
    Yang, Eunju
    Kim, Seong-Hwan
    Youn, Chan-Hyun
    2019 10TH INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY CONVERGENCE (ICTC): ICT CONVERGENCE LEADING THE AUTONOMOUS FUTURE, 2019, : 910 - 915
  • [34] Structural Attention Enhanced Continual Meta-Learning for Graph Edge Labeling Based Few-Shot Remote Sensing Scene Classification
    Li, Feimo
    Li, Shuaibo
    Fan, Xinxin
    Li, Xiong
    Chang, Hongxing
    REMOTE SENSING, 2022, 14 (03)
  • [35] Attention-based digital filter with anchor-free feature pyramid learning model for pedestrian detection
    Shrivastava, A.
    Poonkuntran, S.
    JOURNAL OF INTELLIGENT AND FUZZY SYSTEMS, 2024, 46 (04): 10287 - 10303
  • [36] Attention-Based Population-Invariant Deep Reinforcement Learning for Collision-Free Flocking with A Scalable Fixed-Wing UAV Swarm
    Yan, Chao
    Low, Kin Huat
    Xiang, Xiaojia
    Hu, Tianjiang
    Shen, Lincheng
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 13730 - 13736
  • [37] Discovering colorectal cancer biomarkers with individual fragments analysis of cell-free DNA using attention-based deep multi-instance learning.
    Lee, Eunsaem
    Shin, Seungho
    Park, Seungtae
    Yu, Seunghyeon
    Jo, Shin-Sang
    Jeon, Hee Joon
    Yoon, Na Ri
    Kim, Jinho
    Park, Donghyun
    Hwang, Hyung Ju
    JOURNAL OF CLINICAL ONCOLOGY, 2022, 40 (16)
  • [38] TranStutter: A Convolution-Free Transformer-Based Deep Learning Method to Classify Stuttered Speech Using 2D Mel-Spectrogram Visualization and Attention-Based Feature Representation
    Basak, Krishna
    Mishra, Nilamadhab
    Chang, Hsien-Tsung
    SENSORS, 2023, 23 (19)