A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning

Times Cited: 0
Authors
Wang, Zhenyi [1 ]
Yang, Enneng [2 ]
Shen, Li [3 ]
Huang, Heng [1 ]
Affiliations
[1] Univ Maryland, Dept Comp Sci, College Pk, MD 20742 USA
[2] Northeastern Univ, Shenyang 110819, Liaoning, Peoples R China
[3] Sun Yat Sen Univ, Guangzhou, Peoples R China
Keywords
Beneficial forgetting; harmful forgetting; memorization; distribution shift; cross-disciplinary research;
DOI
10.1109/TPAMI.2024.3498346
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Forgetting refers to the loss or deterioration of previously acquired knowledge. While existing surveys on forgetting have focused primarily on continual learning, forgetting is a prevalent phenomenon in many other research domains within deep learning. For example, it arises in generative models due to generator shifts and in federated learning due to heterogeneous data distributions across clients. Addressing forgetting involves several challenges, including balancing the retention of old task knowledge with fast learning of new tasks, managing interference between tasks with conflicting objectives, and preventing privacy leakage. Moreover, most existing surveys on continual learning implicitly assume that forgetting is always harmful. In contrast, our survey argues that forgetting is a double-edged sword that can be beneficial and even desirable in certain cases, such as privacy-preserving scenarios. By exploring forgetting in a broader context, we present a more nuanced understanding of this phenomenon and highlight its potential advantages. Through this comprehensive survey, we aspire to uncover potential solutions by drawing upon ideas and approaches from various fields that have dealt with forgetting. By examining forgetting beyond its conventional boundaries, we hope to encourage the development of novel strategies for mitigating, harnessing, or even embracing forgetting in real applications.
Pages: 1464 - 1483
Page count: 20
Related Papers
50 papers in total
  • [31] Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning
    Gao, Xinyuan
    Dong, Songlin
    He, Yuhang
    Wang, Qiang
    Gong, Yihong
    COMPUTER VISION - ECCV 2024, PT LXXXV, 2025, 15143 : 89 - 106
  • [32] Consistency is the Key to Further Mitigating Catastrophic Forgetting in Continual Learning
    Bhat, Prashant
    Zonooz, Bahram
    Arani, Elahe
    CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 199, 2022, 199
  • [33] Survey on Online Streaming Continual Learning
    Gunasekara, Nuwan
    Pfahringer, Bernhard
    Gomes, Heitor Murilo
    Bifet, Albert
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 6628 - 6637
  • [34] A Novel Class-wise Forgetting Detector in Continual Learning
    Pham, Xuan Cuong
    Liew, Alan Wee-chung
    Wang, Can
    2021 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA 2021), 2021, : 518 - 525
  • [35] Preempting Catastrophic Forgetting in Continual Learning Models by Anticipatory Regularization
    El Khatib, Alaa
    Karray, Fakhri
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [36] Understanding Catastrophic Forgetting of Gated Linear Networks in Continual Learning
    Munari, Matteo
    Pasa, Luca
    Zambon, Daniele
    Alippi, Cesare
    Navarin, Nicolo
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [37] Continual Learning With Knowledge Distillation: A Survey
    Li, Songze
    Su, Tonghua
    Zhang, Xuyao
    Wang, Zhongjie
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [38] Continual Learning with Deep Generative Replay
    Shin, Hanul
    Lee, Jung Kwon
    Kim, Jaehong
    Kim, Jiwon
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [39] Continual Learning for Smart City: A Survey
    Yang, Li
    Luo, Zhipeng
    Zhang, Shiming
    Teng, Fei
    Li, Tianrui
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (12) : 7805 - 7824
  • [40] Loss of plasticity in deep continual learning
    Dohare, Shibhansh
    Hernandez-Garcia, J. Fernando
    Lan, Qingfeng
    Rahman, Parash
    Mahmood, A. Rupam
    Sutton, Richard S.
    NATURE, 2024, 632 (8026) : 768 - 774