A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning

Cited: 0
Authors
Wang, Zhenyi [1 ]
Yang, Enneng [2 ]
Shen, Li [3 ]
Huang, Heng [1 ]
Affiliations
[1] Univ Maryland, Dept Comp Sci, College Pk, MD 20742 USA
[2] Northeastern Univ, Shenyang 110819, Liaoning, Peoples R China
[3] Sun Yat Sen Univ, Guangzhou, Peoples R China
Keywords
Beneficial forgetting; harmful forgetting; memorization; distribution shift; cross-disciplinary research
DOI
10.1109/TPAMI.2024.3498346
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Forgetting refers to the loss or deterioration of previously acquired knowledge. While existing surveys on forgetting have focused primarily on continual learning, forgetting is a prevalent phenomenon in many other research domains within deep learning: it manifests, for example, in generative models due to generator shifts and in federated learning due to heterogeneous data distributions across clients. Addressing forgetting involves several challenges, including balancing the retention of old task knowledge with fast learning of new tasks, managing task interference under conflicting goals, and preventing privacy leakage. Moreover, most existing surveys on continual learning implicitly assume that forgetting is always harmful. In contrast, our survey argues that forgetting is a double-edged sword and can be beneficial and desirable in certain cases, such as privacy-preserving scenarios. By exploring forgetting in a broader context, we present a more nuanced understanding of this phenomenon and highlight its potential advantages. Through this comprehensive survey, we aspire to uncover potential solutions by drawing upon ideas and approaches from various fields that have dealt with forgetting. By examining forgetting beyond its conventional boundaries, we hope to encourage the development of novel strategies for mitigating, harnessing, or even embracing forgetting in real applications.
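The harmful forgetting the abstract describes can be reproduced in a few lines. Below is a minimal NumPy sketch (not from the survey; the two-task setup and all names are illustrative assumptions): a linear model is fit to task A, then trained sequentially on a conflicting task B, after which its error on task A rises sharply — the basic signature of catastrophic forgetting.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w):
    # Synthetic regression task: targets generated by a known weight vector.
    X = rng.normal(size=(200, 2))
    return X, X @ true_w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.1, steps=50):
    # Full-batch gradient descent on squared error.
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

XA, yA = make_task(np.array([1.0, 0.0]))  # task A
XB, yB = make_task(np.array([0.0, 1.0]))  # task B, with conflicting solution

w = train(np.zeros(2), XA, yA)            # learn task A first
loss_A_before = mse(w, XA, yA)            # near zero after fitting A
w = train(w, XB, yB)                      # then learn task B sequentially
loss_A_after = mse(w, XA, yA)             # task A performance has degraded

assert loss_A_after > loss_A_before       # knowledge of task A was forgotten
```

Because plain sequential training has no mechanism to retain the task-A solution, the weights drift entirely toward task B's optimum; continual-learning methods surveyed in the paper (replay, regularization, architectural isolation) each add such a mechanism.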
Pages: 1464-1483 (20 pages)