S4: Self-supervised learning with sparse-dense sampling

Cited by: 2
Authors
Tian, Yongqin [1 ]
Zhang, Weidong [1 ]
Su, Peng [2 ]
Xu, Yibo [3 ]
Zhuang, Peixian [4 ]
Xie, Xiwang [5 ]
Zhao, Wenyi [3 ]
Affiliations
[1] Henan Inst Sci & Technol, Sch Informat Engn, Xinxiang 453003, Peoples R China
[2] Guilin Univ Elect Technol, Sch Comp Sci & Informat Secur, Guilin 541004, Peoples R China
[3] Beijing Univ Posts & Telecommun, Sch Artificial Intelligence, Beijing 100876, Peoples R China
[4] Univ Sci & Technol Beijing, Sch Automat & Elect Engn, Beijing 100083, Peoples R China
[5] Dalian Maritime Univ, Sch Informat Sci & Technol, Dalian 116026, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Self-supervised visual representation learning; Sparse-dense sampling; Collaborative optimization;
DOI
10.1016/j.knosys.2024.112040
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Self-supervised visual representation learning (SSL) attempts to extract significant features from unlabeled datasets, alleviating the need for labor-intensive and time-consuming manual labeling. However, existing contrastive learning-based methods typically underutilize their datasets, consume significant computational resources, and require long training schedules or large batch sizes. In this study, we propose a novel method for optimizing self-supervised learning that integrates the advantages of sparse-dense sampling and collaborative optimization, thereby significantly improving performance on downstream tasks. Specifically, sparse-dense sampling focuses primarily on high-level semantic features while leveraging the spatial structure of the unlabeled data to incorporate low-level texture features, improving data utilization. In addition, collaborative optimization, comprising contrastive and location tasks, further enhances the model's ability to perceive features of different dimensions, improving its use of the embedding space. Together, the sparse-dense sampling and collaborative optimization strategies reduce computational consumption while improving performance. Extensive experiments demonstrate that the proposed method effectively reduces computational requirements while delivering favorable results. The code and model weights will be available at https://github.com/AI-TYQ/S4.
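The collaborative objective described in the abstract (a contrastive task plus a location task) can be sketched as follows. This is a minimal NumPy illustration assuming an InfoNCE-style contrastive loss over paired views and a cross-entropy location head that predicts which grid cell a dense patch was sampled from; the function names, the grid formulation, and the weighting factor `lam` are assumptions for illustration, not the paper's actual implementation (see the GitHub link for that).

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Contrastive (InfoNCE) loss between two batches of paired view embeddings."""
    # L2-normalize each embedding so similarities are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                        # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Matching views sit on the diagonal; maximize their log-probability.
    return -np.mean(np.diag(log_prob))

def location_loss(pred_logits, true_cells):
    """Cross-entropy for predicting which grid cell a dense patch came from."""
    p = pred_logits - pred_logits.max(axis=1, keepdims=True)
    log_prob = p - np.log(np.exp(p).sum(axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(len(true_cells)), true_cells])

def collaborative_loss(z1, z2, loc_logits, loc_targets, lam=0.5):
    """Joint objective: contrastive term plus a weighted location term."""
    return info_nce(z1, z2) + lam * location_loss(loc_logits, loc_targets)
```

In this sketch the location task supplies the low-level spatial supervision while the contrastive term handles high-level semantics; both losses are always non-negative, so the joint objective is a straightforward weighted sum.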
Pages: 12