Contrastive Distillation on Intermediate Representations for Language Model Compression

Cited by: 0
Authors: Sun, Siqi [1]; Gan, Zhe [1]; Cheng, Yu [1]; Fang, Yuwei [1]; Wang, Shuohang [1]; Liu, Jingjing [1]
Affiliation: [1] Microsoft Dynamics 365 Research, Redmond, WA 98008, USA
Keywords: (not provided)
DOI: not available
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Existing language model compression methods mostly use a simple L2 loss to distill knowledge in the intermediate representations of a large BERT model to a smaller one. Although widely used, this objective by design assumes that all the dimensions of the hidden representations are independent, failing to capture important structural knowledge in the intermediate layers of the teacher network. To achieve better distillation efficacy, we propose Contrastive Distillation on Intermediate Representations (CoDIR), a principled knowledge distillation framework where the student is trained to distill knowledge through the intermediate layers of the teacher via a contrastive objective. By learning to distinguish a positive sample from a large set of negative samples, CoDIR facilitates the student's exploitation of rich information in the teacher's hidden layers. CoDIR can be readily applied to compress large-scale language models in both pre-training and fine-tuning stages, and achieves superb performance on the GLUE benchmark, outperforming state-of-the-art compression methods.
Pages: 498-508
Number of pages: 11
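For readers unfamiliar with contrastive distillation, the following is a minimal, illustrative sketch of the kind of objective the abstract describes: an InfoNCE-style loss that pulls a student's pooled intermediate representation toward the teacher's representation of the same example (the positive) and pushes it away from representations of other examples (the negatives). The projection heads, temperature, pooling, and the source of negatives are assumptions made here for illustration; this is not the authors' released implementation.

# Illustrative sketch only, assuming PyTorch and mean-pooled hidden states;
# hyperparameters and the memory bank of negatives are not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveIntermediateLoss(nn.Module):
    """InfoNCE-style loss between student and teacher intermediate representations."""

    def __init__(self, student_dim, teacher_dim, proj_dim=128, temperature=0.1):
        super().__init__()
        # Linear heads map both models' hidden states into a shared space.
        self.student_proj = nn.Linear(student_dim, proj_dim)
        self.teacher_proj = nn.Linear(teacher_dim, proj_dim)
        self.temperature = temperature

    def forward(self, student_hidden, teacher_hidden, teacher_negatives):
        # student_hidden:    (batch, student_dim)   pooled intermediate states
        # teacher_hidden:    (batch, teacher_dim)   positives, same examples
        # teacher_negatives: (num_neg, teacher_dim) e.g. sampled from other examples
        s = F.normalize(self.student_proj(student_hidden), dim=-1)
        t_pos = F.normalize(self.teacher_proj(teacher_hidden), dim=-1)
        t_neg = F.normalize(self.teacher_proj(teacher_negatives), dim=-1)

        # Similarity of each student vector to its positive and to all negatives.
        pos_logits = (s * t_pos).sum(dim=-1, keepdim=True)   # (batch, 1)
        neg_logits = s @ t_neg.t()                            # (batch, num_neg)
        logits = torch.cat([pos_logits, neg_logits], dim=1) / self.temperature

        # Cross-entropy with the positive at index 0 is the InfoNCE objective.
        labels = torch.zeros(s.size(0), dtype=torch.long, device=s.device)
        return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy usage: 768-dim pooled hidden states, batch of 8, 64 sampled negatives.
    loss_fn = ContrastiveIntermediateLoss(student_dim=768, teacher_dim=768)
    student_h = torch.randn(8, 768)
    teacher_h = torch.randn(8, 768)
    negatives = torch.randn(64, 768)
    print(loss_fn(student_h, teacher_h, negatives).item())

In practice such a contrastive term would be combined with the usual distillation losses (e.g. soft-label cross-entropy), and the negatives would be drawn from other training examples rather than random tensors; those design choices are left out of this sketch.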