Self-Guided Contrastive Learning for BERT Sentence Representations

Cited by: 0
Authors
Kim, Taeuk [1 ]
Yoo, Kang Min [2 ]
Lee, Sang-goo [1 ]
Affiliations
[1] Seoul Natl Univ, Dept Comp Sci & Engn, Seoul, South Korea
[2] NAVER AI Lab, Seongnam, South Korea
Source
59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021) | 2021
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Although BERT and its variants have reshaped the NLP landscape, it still remains unclear how best to derive sentence embeddings from such pre-trained Transformers. In this work, we propose a contrastive learning method that utilizes self-guidance for improving the quality of BERT sentence representations. Our method fine-tunes BERT in a self-supervised fashion, does not rely on data augmentation, and enables the usual [CLS] token embeddings to function as sentence vectors. Moreover, we redesign the contrastive learning objective (NT-Xent) and apply it to sentence representation learning. We demonstrate with extensive experiments that our approach is more effective than competitive baselines on diverse sentence-related tasks. We also show it is efficient at inference and robust to domain shifts.
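The abstract builds on the NT-Xent (normalized temperature-scaled cross-entropy) contrastive objective. As background, here is a minimal sketch of the standard NT-Xent loss over a batch of paired sentence embeddings; this is the generic formulation the paper redesigns, not the paper's modified objective, and the function name and temperature value are illustrative choices.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Standard NT-Xent loss.

    z1, z2: (batch, dim) tensors holding two views (e.g. two embeddings)
    of the same batch of sentences; row i of z1 and row i of z2 form
    a positive pair, and all other rows act as in-batch negatives.
    """
    batch_size = z1.size(0)
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim)
    sim = z @ z.t() / temperature                        # (2B, 2B) scaled similarities
    # Exclude self-similarity from the softmax candidates.
    sim.fill_diagonal_(float("-inf"))
    # The positive for row i is its other view at index (i + B) mod 2B.
    targets = torch.cat([
        torch.arange(batch_size, 2 * batch_size),
        torch.arange(0, batch_size),
    ])
    return F.cross_entropy(sim, targets)
```

Matched pairs should yield a lower loss than randomly paired embeddings, since each example's positive dominates the softmax over in-batch negatives.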
Pages: 2528-2540
Page count: 13
Related Papers
50 items total
  • [41] Comparing self-guided learning and educator-guided learning formats for simulation-based clinical training
    Brydges, Ryan
    Carnahan, Heather
    Rose, Don
    Dubrowski, Adam
    JOURNAL OF ADVANCED NURSING, 2010, 66 (08) : 1832 - 1844
  • [42] Contrastive Representations Pre-Training for Enhanced Discharge Summary BERT
    Won, DaeYeon
    Lee, YoungJun
    Choi, Ho-Jin
    Jung, YuChae
    2021 IEEE 9TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI 2021), 2021, : 507 - 508
  • [43] Self-Supervised Visual Representations Learning by Contrastive Mask Prediction
    Zhao, Yucheng
    Wang, Guangting
    Luo, Chong
    Zeng, Wenjun
    Zha, Zheng-Jun
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 10140 - 10149
  • [44] Self-guided filter for image denoising
    Zhu, Shujin
    Yu, Zekuan
    IET IMAGE PROCESSING, 2020, 14 (11) : 2561 - 2566
  • [45] Iterative Self-Guided Image Filtering
    He, Lei
    Xie, Yongfang
    Xie, Shiwen
    Jiang, Zhaohui
    Chen, Zhipeng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (08) : 7537 - 7549
  • [46] Exploring the Role of BERT Token Representations to Explain Sentence Probing Results
    Mohebbi, Hosein
    Modarressi, Ali
    Pilehvar, Mohammad Taher
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 792 - 806
  • [47] CLSEP: Contrastive learning of sentence embedding with prompt
    Wang, Qian
    Zhang, Weiqi
    Lei, Tianyi
    Cao, Yu
    Peng, Dezhong
    Wang, Xu
    KNOWLEDGE-BASED SYSTEMS, 2023, 266
  • [48] MCSE: Multimodal Contrastive Learning of Sentence Embeddings
    Zhang, Miaoran
    Mosbach, Marius
    Adelani, David Ifeoluwa
    Hedderich, Michael A.
    Klakow, Dietrich
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 5959 - 5969
  • [49] DistillCSE: Distilled Contrastive Learning for Sentence Embeddings
    Xu, Jiahao
    Shao, Wei
    Chen, Lihui
    Liu, Lemao
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 8153 - 8165
  • [50] A Contrastive Framework for Learning Sentence Representations from Pairwise and Triple-wise Perspective in Angular Space
    Zhang, Yuhao
    Zhu, Hongji
    Wang, Yongliang
    Xu, Nan
    Li, Xiaobo
    Zhao, BinQiang
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 4892 - 4903