Grouped Contrastive Learning of Self-Supervised Sentence Representation

Cited by: 1
Authors
Wang, Qian [1 ]
Zhang, Weiqi [1 ]
Lei, Tianyi [1 ]
Peng, Dezhong [1 ,2 ,3 ]
Affiliations
[1] Sichuan Univ, Coll Comp Sci & Technol, Chengdu 610065, Peoples R China
[2] Chengdu Ruibei Yingte Informat Technol Co Ltd, Chengdu 610054, Peoples R China
[3] Sichuan Zhiqian Technol Co Ltd, Chengdu 610065, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 17
Keywords
contrastive learning; self-attention; data augmentation; grouped representation; unsupervised learning;
DOI
10.3390/app13179873
CLC Number (Chinese Library Classification)
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
This paper proposes Grouped Contrastive Learning of self-supervised Sentence Representation (GCLSR), a method that learns effective and meaningful sentence representations. Previous works take the similarity between two whole feature vectors as the contrastive objective, which suffers from the high dimensionality of those vectors. In addition, most previous works adopt discrete data augmentation to obtain positive samples and directly apply contrastive frameworks borrowed from computer vision, which can hamper contrastive training because text data are discrete and sparse compared with image data. To address these issues, we design a novel contrastive learning framework, GCLSR, which divides the high-dimensional feature vector into several groups and computes a contrastive loss for each group separately, exploiting more local information and yielding a more fine-grained sentence representation. In addition, GCLSR introduces a new self-attention mechanism and a continuous, partial word vector augmentation (PWVA). For discrete and sparse text data, self-attention helps the model focus on informative words by measuring the importance of every word in a sentence, while PWVA provides high-quality positive samples for contrastive learning. Experimental results demonstrate that GCLSR achieves encouraging results on the challenging datasets of the semantic textual similarity (STS) task and the transfer tasks.
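The grouped objective described in the abstract can be illustrated with a minimal PyTorch sketch (an illustrative approximation, not the authors' released implementation): the embedding dimension is split into equal-sized groups, an InfoNCE-style loss is computed for each group on two augmented views of the same sentences (e.g., produced by PWVA), and the per-group losses are averaged. The function name, group count, temperature, and 768-dimensional embeddings are all assumptions made for the example.

```python
# Minimal sketch of a grouped contrastive (InfoNCE-style) loss, assuming two
# batches of sentence embeddings from two augmented views of the same inputs.
# Not the authors' code; hyperparameters below are illustrative only.
import torch
import torch.nn.functional as F


def grouped_contrastive_loss(z1: torch.Tensor,
                             z2: torch.Tensor,
                             num_groups: int = 4,
                             temperature: float = 0.05) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same sentences.
    The feature dimension is split into `num_groups` equal slices, an
    InfoNCE loss is computed per slice, and the group losses are averaged."""
    batch, dim = z1.shape
    assert dim % num_groups == 0, "feature dim must be divisible by num_groups"
    group_dim = dim // num_groups

    losses = []
    for g in range(num_groups):
        a = F.normalize(z1[:, g * group_dim:(g + 1) * group_dim], dim=-1)
        b = F.normalize(z2[:, g * group_dim:(g + 1) * group_dim], dim=-1)
        # Cosine-similarity logits between all pairs in the batch; the
        # matching (positive) pairs sit on the diagonal.
        logits = a @ b.t() / temperature
        labels = torch.arange(batch, device=z1.device)
        losses.append(F.cross_entropy(logits, labels))
    return torch.stack(losses).mean()


if __name__ == "__main__":
    # Toy usage with random 768-d embeddings for a batch of 8 sentence pairs.
    z1, z2 = torch.randn(8, 768), torch.randn(8, 768)
    print(grouped_contrastive_loss(z1, z2).item())
```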
Pages: 17
Related Papers
50 records in total
  • [41] IPCL: ITERATIVE PSEUDO-SUPERVISED CONTRASTIVE LEARNING TO IMPROVE SELF-SUPERVISED FEATURE REPRESENTATION
    Kumar, Sonal
    Phukan, Anirudh
    Sur, Arijit
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 6270 - 6274
  • [42] Contrastive Spatio-Temporal Pretext Learning for Self-Supervised Video Representation
    Zhang, Yujia
    Po, Lai-Man
    Xu, Xuyuan
    Liu, Mengyang
    Wang, Yexin
    Ou, Weifeng
    Zhao, Yuzhi
    Yu, Wing-Yin
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 3380 - 3389
  • [43] Self-Supervised Contrastive Molecular Representation Learning with a Chemical Synthesis Knowledge Graph
    Xie, Jiancong
    Wang, Yi
    Rao, Jiahua
    Zheng, Shuangjia
    Yang, Yuedong
    JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2024, 64 (06) : 1945 - 1954
  • [44] Cut-in maneuver detection with self-supervised contrastive video representation learning
    Nalcakan, Yagiz
    Bastanlar, Yalin
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 : 2915 - 2923
  • [45] Self-Supervised Contrastive Representation Learning for Semi-Supervised Time-Series Classification
    Eldele, Emadeldeen
    Ragab, Mohamed
    Chen, Zhenghua
    Wu, Min
    Kwoh, Chee-Keong
    Li, Xiaoli
    Guan, Cuntai
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (12) : 15604 - 15618
  • [46] A comprehensive perspective of contrastive self-supervised learning
    Chen, Songcan
    Geng, Chuanxing
    FRONTIERS OF COMPUTER SCIENCE, 2021, (04) : 102 - 104
  • [47] On Compositions of Transformations in Contrastive Self-Supervised Learning
    Patrick, Mandela
    Asano, Yuki M.
    Kuznetsova, Polina
    Fong, Ruth
    Henriques, Joao F.
    Zweig, Geoffrey
    Vedaldi, Andrea
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 9557 - 9567
  • [48] Contrastive Self-supervised Learning for Graph Classification
    Zeng, Jiaqi
    Xie, Pengtao
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 10824 - 10832
  • [49] Group Contrastive Self-Supervised Learning on Graphs
    Xu, Xinyi
    Deng, Cheng
    Xie, Yaochen
    Ji, Shuiwang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (03) : 3169 - 3180
  • [50] Self-supervised contrastive learning on agricultural images
    Guldenring, Ronja
    Nalpantidis, Lazaros
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2021, 191