Grouped Contrastive Learning of Self-Supervised Sentence Representation

Cited by: 1
Authors
Wang, Qian [1]
Zhang, Weiqi [1]
Lei, Tianyi [1]
Peng, Dezhong [1,2,3]
Affiliations
[1] Sichuan Univ, Coll Comp Sci & Technol, Chengdu 610065, Peoples R China
[2] Chengdu Ruibei Yingte Informat Technol Co Ltd, Chengdu 610054, Peoples R China
[3] Sichuan Zhiqian Technol Co Ltd, Chengdu 610065, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, No. 17
Keywords
contrastive learning; self-attention; data augmentation; grouped representation; unsupervised learning;
DOI
10.3390/app13179873
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
This paper proposes Grouped Contrastive Learning of self-supervised Sentence Representation (GCLSR), a method that learns effective and meaningful sentence representations. Previous works take maximizing the similarity between two whole vectors as the objective of contrastive learning, which suffers from the high dimensionality of those vectors. In addition, most previous works adopt discrete data augmentation to obtain positive samples and directly employ a contrastive framework from computer vision for contrastive training, which can hamper training because text data are discrete and sparse compared with image data. To address these issues, we design a novel contrastive learning framework, GCLSR, which divides the high-dimensional feature vector into several groups and computes each group's contrastive loss separately, exploiting more local information and ultimately yielding a more fine-grained sentence representation. In addition, GCLSR introduces a new self-attention mechanism and a continuous, partial word vector augmentation (PWVA). Because text data are discrete and sparse, self-attention helps the model focus on informative words by measuring the importance of every word in a sentence, while PWVA supplies high-quality positive samples for contrastive learning. Experimental results demonstrate that the proposed GCLSR achieves encouraging results on the challenging datasets of the semantic textual similarity (STS) and transfer tasks.
Pages: 17
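The grouped contrastive objective summarized in the abstract can be made concrete with a short sketch. The following is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: the names grouped_contrastive_loss and partial_word_vector_augmentation, the defaults num_groups, temperature, ratio, and noise_std, and the exact perturbation scheme are all hypothetical choices made here for illustration, and the embedding dimension is assumed to divide evenly by the number of groups.

import torch
import torch.nn.functional as F

def grouped_contrastive_loss(z1, z2, num_groups=4, temperature=0.05):
    # z1, z2: (batch, dim) embeddings of two augmented views of the same
    # sentences; dim is assumed to be divisible by num_groups.
    batch, dim = z1.shape
    assert dim % num_groups == 0, "dim must be divisible by num_groups"
    # Reshape to (num_groups, batch, dim // num_groups): one sub-vector
    # per group, so each group is contrasted independently.
    g1 = z1.view(batch, num_groups, -1).transpose(0, 1)
    g2 = z2.view(batch, num_groups, -1).transpose(0, 1)
    labels = torch.arange(batch, device=z1.device)
    loss = 0.0
    for k in range(num_groups):
        a = F.normalize(g1[k], dim=-1)
        b = F.normalize(g2[k], dim=-1)
        # Cosine similarities for all batch pairs; diagonal entries are the
        # positive pairs, off-diagonal entries act as in-batch negatives.
        logits = a @ b.t() / temperature
        loss = loss + F.cross_entropy(logits, labels)
    # Average the per-group contrastive losses.
    return loss / num_groups

def partial_word_vector_augmentation(x, ratio=0.3, noise_std=0.1):
    # x: (batch, seq_len, dim) word vectors. A continuous augmentation in
    # the spirit of PWVA: perturb a random subset of word positions with
    # Gaussian noise; the paper's exact scheme may differ.
    mask = (torch.rand(x.shape[:2], device=x.device) < ratio).unsqueeze(-1)
    return x + mask * noise_std * torch.randn_like(x)

In this sketch, z1 and z2 would come from encoding the same batch of sentences twice, e.g., once on the original word vectors and once on vectors perturbed by the PWVA-style function, so that the diagonal of each group's similarity matrix holds the positive pairs.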