The Challenges of Continuous Self-Supervised Learning

Cited: 12
Authors
Purushwalkam, Senthil [1]
Morgado, Pedro [1,2]
Gupta, Abhinav [1]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[2] Univ Wisconsin, Madison, WI 53706 USA
DOI
10.1007/978-3-031-19809-0_40
Chinese Library Classification: TP18 (Artificial Intelligence Theory)
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Self-supervised learning (SSL) aims to eliminate one of the major bottlenecks in representation learning: the need for human annotations. As a result, SSL holds the promise to learn representations from data in-the-wild, i.e., without the need for finite and static datasets. Instead, SSL should exploit the continuous stream of data being generated on the internet or by agents exploring their environments. In this work, we investigate whether traditional self-supervised learning approaches remain effective when deployed in-the-wild by conducting experiments on the continuous self-supervised learning problem. In this setup, models should learn from a continuous (infinite) non-IID data stream that follows a non-stationary distribution of visual concepts. The goal is to learn representations that are robust and adaptive, yet not forgetful of concepts seen in the past. We show that a direct application of current methods to continuous SSL 1) is inefficient both computationally and in the amount of data required, 2) leads to inferior representations due to temporal correlations (non-IID data) in the streaming sources, and 3) exhibits signs of catastrophic forgetting when trained on sources with non-stationary data distributions. We study the use of replay buffers to alleviate the issues of inefficiency and temporal correlations, and enhance them by actively maintaining the least redundant samples in the buffer. We show that minimum redundancy (MinRed) buffers allow us to learn effective representations even in the most challenging streaming scenarios (e.g., sequential frames obtained from a single embodied agent), and alleviate the problem of catastrophic forgetting.
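The buffer mechanism described in the abstract, keeping the least redundant samples, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: it assumes redundancy is measured by cosine similarity between feature embeddings, and the class name `MinRedBuffer` and the eviction rule (drop the sample with the highest nearest-neighbor similarity) are assumptions made for the sketch.

```python
import numpy as np

class MinRedBuffer:
    """Replay buffer that, when over capacity, evicts the most redundant
    sample: the one whose feature embedding has the highest cosine
    similarity to some other sample already in the buffer."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []   # raw samples (e.g. video frames)
        self.feats = []     # unit-normalized feature embeddings

    def add(self, sample, feat):
        f = np.asarray(feat, dtype=np.float64)
        f = f / np.linalg.norm(f)           # normalize so dot product = cosine
        self.samples.append(sample)
        self.feats.append(f)
        if len(self.samples) > self.capacity:
            self._evict_most_redundant()

    def _evict_most_redundant(self):
        F = np.stack(self.feats)            # (n, d) matrix of embeddings
        sim = F @ F.T                       # pairwise cosine similarities
        np.fill_diagonal(sim, -np.inf)      # ignore self-similarity
        redundancy = sim.max(axis=1)        # nearest-neighbor similarity per sample
        idx = int(np.argmax(redundancy))    # most redundant sample
        self.samples.pop(idx)
        self.feats.pop(idx)
```

On a temporally correlated stream, consecutive frames have near-identical embeddings, so one of each near-duplicate pair is evicted first and the buffer retains a diverse set of past concepts, which is the property the abstract credits for resisting catastrophic forgetting.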
Pages: 702-721 (20 pages)