Saliency Guided Experience Packing for Replay in Continual Learning

Cited by: 4
Authors
Saha, Gobinda [1 ]
Roy, Kaushik [1 ]
Affiliations
[1] Purdue Univ, Elmore Family Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
Source
2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV) | 2023
Funding
US National Science Foundation;
DOI
10.1109/WACV56688.2023.00524
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Artificial learning systems aspire to mimic human intelligence by continually learning from a stream of tasks without forgetting past knowledge. One way to enable such learning is to store past experiences in the form of input examples in episodic memory and replay them when learning new tasks. However, the performance of such methods degrades as the memory size shrinks. In this paper, we propose a new approach to experience replay in which we select past experiences using saliency maps, which provide visual explanations for the model's decisions. Guided by these saliency maps, we pack the memory with only the parts, or patches, of the input images that are important for the model's predictions. While learning a new task, we replay these memory patches with appropriate zero-padding to remind the model of its past decisions. We evaluate our algorithm on the CIFAR-100, miniImageNet, and CUB datasets and report better performance than state-of-the-art approaches. Through qualitative and quantitative analyses, we show that our method captures richer summaries of past experiences without any increase in memory, and hence performs well with small episodic memory.
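The two mechanisms the abstract describes, selecting the most salient patch of an input and replaying it zero-padded to the original size, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names `select_salient_patch` and `replay_with_padding` are hypothetical, and the saliency map is assumed to be precomputed by some explanation method (the paper relies on saliency maps of the model's decision; the exact method is not specified here).

```python
import numpy as np

def select_salient_patch(image, saliency, patch_size):
    """Slide a window over the saliency map and return the image patch
    with the largest total saliency, together with its top-left position."""
    H, W = saliency.shape
    ph, pw = patch_size
    best, best_pos = -np.inf, (0, 0)
    for r in range(H - ph + 1):
        for c in range(W - pw + 1):
            score = saliency[r:r + ph, c:c + pw].sum()
            if score > best:
                best, best_pos = score, (r, c)
    r, c = best_pos
    return image[r:r + ph, c:c + pw].copy(), best_pos

def replay_with_padding(patch, position, full_shape):
    """Zero-pad a stored memory patch back to the full input size,
    placing it at its original location for replay."""
    canvas = np.zeros(full_shape, dtype=patch.dtype)
    r, c = position
    ph, pw = patch.shape
    canvas[r:r + ph, c:c + pw] = patch
    return canvas

# Toy example: saliency is concentrated in a 3x3 region at (2, 3).
image = np.arange(64, dtype=float).reshape(8, 8)
saliency = np.zeros((8, 8))
saliency[2:5, 3:6] = 1.0
patch, pos = select_salient_patch(image, saliency, (3, 3))
padded = replay_with_padding(patch, pos, image.shape)
```

Storing only the patch (here 3x3 instead of 8x8) is what lets the memory hold summaries of more past examples at the same byte budget; the zero-padding at replay time restores the spatial position the model saw during training.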
Pages: 5262-5272
Page count: 11
Related Papers
50 records in total
  • [21] Relay Hindsight Experience Replay: Self-guided continual reinforcement learning for sequential object manipulation tasks with sparse rewards
    Luo, Yongle
    Wang, Yuxin
    Dong, Kun
    Zhang, Qiang
    Cheng, Erkang
    Sun, Zhiyong
    Song, Bo
    NEUROCOMPUTING, 2023, 557
  • [22] Posterior Meta-Replay for Continual Learning
    Henning, Christian
    Cervera, Maria R.
    D'Angelo, Francesco
    von Oswald, Johannes
    Traber, Regina
    Ehret, Benjamin
    Kobayashi, Seijin
    Grewe, Benjamin F.
    Sacramento, Joao
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [23] Beneficial Effect of Combined Replay for Continual Learning
    Solinas, M.
    Rousset, S.
    Cohendet, R.
    Bourrier, Y.
    Mainsant, M.
    Molnos, A.
    Reyboz, M.
    Mermillod, M.
    ICAART: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 2, 2021, : 205 - 217
  • [24] MCLER: Multi-Critic Continual Learning With Experience Replay for Quadruped Gait Generation
    Liu, Maoqi
    Chen, Yanyun
    Song, Ran
    Qian, Longyue
    Fang, Xing
    Tan, Wenhao
    Li, Yibin
    Zhang, Wei
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (09): : 8138 - 8145
  • [25] Relational Experience Replay: Continual Learning by Adaptively Tuning Task-Wise Relationship
    Wang, Quanziang
    Wang, Renzhen
    Li, Yuexiang
    Wei, Dong
    Wang, Hong
    Ma, Kai
    Zheng, Yefeng
    Meng, Deyu
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 9683 - 9698
  • [26] Disentangled Prototype-Guided Dynamic Memory Replay for Continual Learning in Acoustic Signal Classification
    Choi, Seok-Hun
    Buu, Seok-Jun
    IEEE ACCESS, 2024, 12 : 153796 - 153808
  • [27] SLER: Self-generated long-term experience replay for continual reinforcement learning
    Li, Chunmao
    Li, Yang
    Zhao, Yinliang
    Peng, Peng
    Geng, Xupeng
    APPLIED INTELLIGENCE, 2021, 51 (01) : 185 - 201
  • [28] A Benchmark and Empirical Analysis for Replay Strategies in Continual Learning
    Yang, Qihan
    Feng, Fan
    Chan, Rosa H. M.
    CONTINUAL SEMI-SUPERVISED LEARNING, CSSL 2021, 2022, 13418 : 75 - 90
  • [29] Continual Pedestrian Trajectory Learning With Social Generative Replay
    Wu, Ya
    Bighashdel, Ariyan
    Chen, Guang
    Dubbelman, Gijs
    Jancura, Pavol
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (02) : 848 - 855