Continual learning in the presence of repetition

Cited by: 0
Authors
Hemati, Hamed [1 ]
Pellegrini, Lorenzo [2 ]
Duan, Xiaotian [3 ,4 ]
Zhao, Zixuan [3 ,4 ]
Xia, Fangfang [3 ,4 ]
Masana, Marc [5 ,6 ]
Tscheschner, Benedikt [5 ,7 ]
Veas, Eduardo [5 ,7 ]
Zheng, Yuxiang [8 ]
Zhao, Shiji [8 ]
Li, Shao-Yuan [8 ]
Huang, Sheng-Jun [8 ]
Lomonaco, Vincenzo [9 ]
van de Ven, Gido M. [10 ]
Affiliations
[1] Univ St Gallen, Inst Comp Sci, Rosenbergstr 30, CH-9000 St Gallen, Switzerland
[2] Univ Bologna, Dept Comp Sci, Via Univ 50, I-47521 Cesena, Italy
[3] Univ Chicago, 5801 S Ellis Ave, Chicago, IL 60637 USA
[4] Argonne Natl Lab, 9700 S Cass Ave, Lemont, IL 60439 USA
[5] Graz Univ Technol, Rechbauerstr 12, A-8010 Graz, Austria
[6] TU Graz SAL Dependable Embedded Syst Lab, Silicon Austria Labs, A-8010 Graz, Austria
[7] Know Ctr GmbH, Sandgasse 36-4, A-8010 Graz, Austria
[8] Nanjing Univ Aeronaut & Astronaut, MIIT Key Lab Pattern Anal & Machine Intelligence, Nanjing 211106, Peoples R China
[9] Univ Pisa, Dept Comp Sci, Largo Bruno Pontecorvo 3, I-56127 Pisa, Italy
[10] Katholieke Univ Leuven, Dept Elect Engn, Kasteelpark Arenberg 10, B-3001 Leuven, Belgium
Funding
Research Foundation - Flanders (FWO), Belgium
Keywords
Continual learning; Class-incremental learning; Repetition; Competition; Memory
DOI
10.1016/j.neunet.2024.106920
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Continual learning (CL) provides a framework for training models in ever-evolving environments. Although the recurrence of previously seen objects or tasks is common in real-world problems, the concept of repetition in the data stream is rarely considered in standard CL benchmarks. Unlike the rehearsal mechanism in buffer-based strategies, where sample repetition is controlled by the strategy, repetition in the data stream stems naturally from the environment. This report summarizes the CLVision challenge at CVPR 2023, which focused on the topic of repetition in class-incremental learning. The report first outlines the challenge objective and then describes three solutions proposed by finalist teams, all of which aim to effectively exploit the repetition in the stream to learn continually. The experimental results from the challenge highlight the effectiveness of ensemble-based solutions that employ multiple versions of similar modules, each trained on a different but overlapping subset of classes. The report underscores the potential of taking a different perspective in CL by employing repetition in the data stream to foster innovative strategy design.
Pages: 14
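To make the ensemble idea described in the abstract concrete, below is a minimal, self-contained PyTorch sketch; it is not the finalists' code nor the challenge's benchmark API, and all names and helpers (make_stream_with_repetition, ExpertHead, synthetic_batch, train_expert) plus the synthetic Gaussian data are illustrative assumptions. The sketch builds a toy class-incremental stream in which classes re-occur across experiences, trains one small expert module per experience on that experience's class subset, and averages the experts' class-masked predictions at test time.

```python
# Minimal illustrative sketch (assumptions, not the challenge solutions):
# a class-incremental stream with repetition plus a simple ensemble of
# per-experience expert modules trained on overlapping class subsets.
import random

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, FEAT_DIM = 10, 32
torch.manual_seed(0)
random.seed(0)


def make_stream_with_repetition(num_experiences=6, classes_per_exp=4):
    """Each experience draws a random class subset, so previously seen classes
    can re-occur later in the stream (unlike standard class-incremental
    benchmarks, where every class appears exactly once)."""
    return [sorted(random.sample(range(NUM_CLASSES), classes_per_exp))
            for _ in range(num_experiences)]


def synthetic_batch(classes, n=256):
    """Toy data: one Gaussian blob per class in feature space."""
    y = torch.tensor([random.choice(classes) for _ in range(n)])
    means = torch.arange(NUM_CLASSES).float().unsqueeze(1).expand(-1, FEAT_DIM)
    return means[y] + 0.5 * torch.randn(n, FEAT_DIM), y


class ExpertHead(nn.Module):
    """One small module per experience; it only ever scores its own classes."""
    def __init__(self, classes):
        super().__init__()
        self.classes = list(classes)
        self.fc = nn.Linear(FEAT_DIM, NUM_CLASSES)

    def forward(self, x):
        logits = self.fc(x)
        mask = torch.full_like(logits, float("-inf"))
        mask[:, self.classes] = 0.0  # classes unknown to this expert get zero probability
        return logits + mask


def train_expert(classes, steps=50):
    expert = ExpertHead(classes)
    opt = torch.optim.Adam(expert.parameters(), lr=1e-2)
    for _ in range(steps):
        x, y = synthetic_batch(classes)
        opt.zero_grad()
        F.cross_entropy(expert(x), y).backward()
        opt.step()
    return expert


stream = make_stream_with_repetition()
experts = [train_expert(c) for c in stream]  # one expert per experience

# Ensemble prediction: average the experts' softmax outputs over all classes.
with torch.no_grad():
    x_test, y_test = synthetic_batch(list(range(NUM_CLASSES)), n=512)
    probs = torch.stack([F.softmax(e(x_test), dim=1) for e in experts]).mean(0)
acc = (probs.argmax(1) == y_test).float().mean().item()
print("stream:", stream)
print(f"ensemble accuracy: {acc:.2f}")
```

Because class subsets overlap when the stream contains repetition, several experts can vote for the same class at test time, which mirrors the property that the ensemble-based solutions highlighted in the report exploit.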