S4: Self-supervised learning with sparse-dense sampling

Cited by: 2
|
Authors
Tian, Yongqin [1 ]
Zhang, Weidong [1 ]
Su, Peng [2 ]
Xu, Yibo [3 ]
Zhuang, Peixian [4 ]
Xie, Xiwang [5 ]
Zhao, Wenyi [3 ]
Affiliations
[1] Henan Inst Sci & Technol, Sch Informat Engn, Xinxiang 453003, Peoples R China
[2] Guilin Univ Elect Technol, Sch Comp Sci & Informat Secur, Guilin 541004, Peoples R China
[3] Beijing Univ Posts & Telecommun, Sch Artificial Intelligence, Beijing 100876, Peoples R China
[4] Univ Sci & Technol Beijing, Sch Automat & Elect Engn, Beijing 100083, Peoples R China
[5] Dalian Maritime Univ, Sch Informat Sci & Technol, Dalian 116026, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Self-supervised visual representation learning; Sparse-dense sampling; Collaborative optimization;
DOI
10.1016/j.knosys.2024.112040
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Self-supervised visual representation learning (SSL) attempts to extract significant features from unlabeled datasets, alleviating the need for labor-intensive and time-consuming manual labeling. However, existing contrastive learning-based methods typically underutilize datasets, consume significant computational resources, and require long training schedules or large batch sizes. In this study, we propose a novel method for optimizing self-supervised learning that integrates the advantages of sparse-dense sampling and collaborative optimization, thereby significantly improving performance on downstream tasks. Specifically, sparse-dense sampling primarily focuses on high-level semantic features while leveraging the spatial structure of the unlabeled dataset to incorporate low-level texture features, improving data utilization. Besides, collaborative optimization, comprising contrastive and location tasks, further enhances the model's ability to perceive features of different dimensions, thereby improving its utilization of features in the embedding space. Furthermore, combining the sparse-dense sampling and collaborative optimization strategies reduces computational consumption while improving performance. Extensive experiments demonstrate that the proposed method effectively reduces computational requirements while delivering favorable results. The code and model weights will be available at https://github.com/AI-TYQ/S4.
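The collaborative optimization described in the abstract pairs a contrastive objective with a location (position-prediction) objective. The toy sketch below illustrates that combination with an InfoNCE-style contrastive loss between sparse-view and dense-patch embeddings plus a cross-entropy loss over grid-cell locations; the function names, the InfoNCE form, the 3x3 grid, and the 0.5 weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(z1, z2, temperature=0.1):
    """Toy InfoNCE contrastive loss; positives sit on the diagonal."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def location_loss(logits, cell_ids):
    """Toy cross-entropy for predicting which grid cell a dense patch came from."""
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(len(cell_ids)), cell_ids])

B, D, CELLS = 8, 32, 9                     # batch, embedding dim, 3x3 grid
z_sparse = rng.normal(size=(B, D))         # stand-in embeddings of sparse global views
z_dense = rng.normal(size=(B, D))          # stand-in embeddings of dense local patches
loc_logits = rng.normal(size=(B, CELLS))   # stand-in location-head outputs
cell_ids = rng.integers(0, CELLS, size=B)  # ground-truth grid cells

# Collaborative objective: contrastive term + weighted location term.
total = info_nce(z_sparse, z_dense) + 0.5 * location_loss(loc_logits, cell_ids)
print(float(total))
```

In a real pipeline the embeddings would come from an encoder over the sampled views rather than random tensors; the point is only that the two losses operate on different targets (instance identity vs. spatial position) and are summed into one training objective.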
Pages: 12
Related papers
50 total
  • [21] Self-Supervised Dialogue Learning
    Wu, Jiawei
    Wang, Xin
    Wang, William Yang
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 3857 - 3867
  • [22] Self-supervised learning model
    Saga, Kazushie
    Sugasaka, Tamami
    Sekiguchi, Minoru
FUJITSU SCIENTIFIC AND TECHNICAL JOURNAL, 1993, 29 (03): 209 - 216
  • [23] Longitudinal self-supervised learning
    Zhao, Qingyu
    Liu, Zixuan
    Adeli, Ehsan
    Pohl, Kilian M.
    MEDICAL IMAGE ANALYSIS, 2021, 71
  • [24] Credal Self-Supervised Learning
    Lienen, Julian
    Huellermeier, Eyke
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [25] Self-Supervised Learning for Recommendation
    Huang, Chao
    Xia, Lianghao
    Wang, Xiang
    He, Xiangnan
    Yin, Dawei
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 5136 - 5139
  • [26] Quantum self-supervised learning
    Jaderberg, B.
    Anderson, L. W.
    Xie, W.
    Albanie, S.
    Kiffner, M.
    Jaksch, D.
    QUANTUM SCIENCE AND TECHNOLOGY, 2022, 7 (03):
  • [27] Self-Supervised Learning for Electroencephalography
    Rafiei, Mohammad H.
    Gauthier, Lynne V.
    Adeli, Hojjat
    Takabi, Daniel
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (02) : 1457 - 1471
  • [28] Self-supervised Learning for Semantic Sentence Matching with Dense Transformer Inference Network
    Yu, Fengying
    Wang, Jianzong
    Tao, Dewei
    Cheng, Ning
    Xiao, Jing
    WEB AND BIG DATA, APWEB-WAIM 2021, PT I, 2021, 12858 : 258 - 272
  • [29] Self-supervised Learning of Implicit Shape Representation with Dense Correspondence for Deformable Objects
    Zhang, Baowen
    Li, Jiahe
    Deng, Xiaoming
    Zhang, Yinda
    Ma, Cuixia
    Wang, Hongan
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 14222 - 14232
  • [30] A New Self-supervised Method for Supervised Learning
    Yang, Yuhang
    Ding, Zilin
    Cheng, Xuan
    Wang, Xiaomin
    Liu, Ming
    INTERNATIONAL CONFERENCE ON COMPUTER VISION, APPLICATION, AND DESIGN (CVAD 2021), 2021, 12155