TASK-WISE PROMPT QUERY FUNCTION FOR REHEARSAL-FREE CONTINUAL LEARNING

Cited: 0
Authors
Chen, Shuai [1 ,2 ]
Zhang, Mingyi [1 ]
Zhang, Junge [1 ]
Huang, Kaiqi [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, CRISE, Inst Automat, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
[3] Ctr Excellence Brain Sci & Intelligence Technol, CAS, Beijing, Peoples R China
Keywords
Continual learning; Incremental learning; Neural networks
DOI
10.1109/ICASSP48485.2024.10446403
Abstract
Continual learning (CL) aims to enable a model to retain knowledge of old tasks while learning new ones. One effective approach to CL is data rehearsal. However, rehearsal increases the cost of storing data and cannot be used when data from old tasks is unavailable. Recently, with the emergence of large-scale pre-trained transformer models, prompt-based methods have become an alternative to data rehearsal. These methods rely on a query mechanism to select prompts and have demonstrated resistance to forgetting in rehearsal-free CL scenarios. However, these methods learn prompts in a task-wise way while generating queries in an instance-wise way, and they usually use the frozen pre-trained model directly as the encoding function for queries. This mismatch may lead to retrieval errors, so that samples fail to match the correct prompts. In contrast, we propose a new task-wise prompt query function that continues to learn as tasks progress, thereby avoiding the issue of frozen pre-trained models being unable to match appropriate sample-prompt pairs. Our approach improves on current state-of-the-art methods, as verified by our experimental results on a series of datasets.
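The instance-wise vs. task-wise query distinction described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the prompt-pool size, feature dimension, `instance_wise_query`, `task_wise_query`, and the linear `task_query_head` are all illustrative assumptions. Instance-wise querying (as in L2P-style methods) matches each sample's frozen-encoder feature against prompt keys independently, so samples from the same task may retrieve different prompts; a task-wise query function instead produces a single query for the task, here via a learnable projection standing in for the continually trained query function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a pool of 4 prompt keys in an 8-dim feature space.
D, N_PROMPTS = 8, 4
prompt_keys = rng.normal(size=(N_PROMPTS, D))

def l2_normalize(x, axis=-1):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

def instance_wise_query(sample_features):
    """Select one prompt per sample by cosine similarity to the keys.

    `sample_features` stands in for frozen pre-trained encoder outputs;
    each sample retrieves its own prompt, so retrieval can disagree
    across samples of the same task.
    """
    q = l2_normalize(sample_features)
    k = l2_normalize(prompt_keys)
    sims = q @ k.T                      # (batch, N_PROMPTS)
    return sims.argmax(axis=1)          # one prompt index per sample

def task_wise_query(sample_features, task_query_head):
    """Pool a task's batch into one query via a learnable head.

    The linear `task_query_head` is a stand-in for a query function
    that keeps training as tasks arrive; the whole task shares one
    retrieved prompt.
    """
    pooled = sample_features.mean(axis=0)        # aggregate over the batch
    q = l2_normalize(task_query_head @ pooled)   # learned projection
    k = l2_normalize(prompt_keys)
    return int((k @ q).argmax())                 # single index for the task

batch = rng.normal(size=(16, D))
head = np.eye(D)                        # untrained placeholder head
per_sample = instance_wise_query(batch) # may vary sample to sample
per_task = task_wise_query(batch, head) # one prompt for the whole task
```

In this toy form, the task-wise query guarantees that all samples of a task use the same prompt; the paper's contribution is making the query function itself learnable so that the selected prompt is also the correct one.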
Pages: 6320-6324
Page count: 5