Contrastive Distillation on Intermediate Representations for Language Model Compression

Cited by: 0
|
Authors
Sun, Siqi [1 ]
Gan, Zhe [1 ]
Cheng, Yu [1 ]
Fang, Yuwei [1 ]
Wang, Shuohang [1 ]
Liu, Jingjing [1 ]
Affiliations
[1] Microsoft Dynamics 365 Research, Redmond, WA 98008 USA
Keywords:
DOI: Not available
CLC number: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Existing language model compression methods mostly use a simple L-2 loss to distill knowledge in the intermediate representations of a large BERT model to a smaller one. Although widely used, this objective by design assumes that all dimensions of the hidden representations are independent, failing to capture important structural knowledge in the intermediate layers of the teacher network. To achieve better distillation efficacy, we propose Contrastive Distillation on Intermediate Representations (CoDIR), a principled knowledge distillation framework where the student is trained to distill knowledge through intermediate layers of the teacher via a contrastive objective. By learning to distinguish a positive sample from a large set of negative samples, CoDIR facilitates the student's exploitation of the rich information in the teacher's hidden layers. CoDIR can be readily applied to compress large-scale language models in both the pre-training and fine-tuning stages, and achieves superb performance on the GLUE benchmark, outperforming state-of-the-art compression methods.
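For illustration only, below is a minimal sketch (not the authors' released code) of a contrastive distillation loss on intermediate representations in the spirit of the abstract: the student's pooled hidden state for an input is pulled toward the teacher's representation of the same input and pushed away from teacher representations of other inputs. The sketch uses in-batch negatives for simplicity, whereas CoDIR draws on a large set of negative samples; the projection heads (proj_s, proj_t), temperature, and dimensions are assumed names and values.

import torch
import torch.nn.functional as F

def contrastive_intermediate_loss(student_hidden, teacher_hidden,
                                  proj_s, proj_t, temperature=0.1):
    """InfoNCE-style loss between pooled student and teacher hidden states.

    student_hidden: (B, d_student) pooled intermediate-layer representation
    teacher_hidden: (B, d_teacher) pooled intermediate-layer representation
    proj_s, proj_t: linear heads mapping both sides into a shared space
    """
    z_s = F.normalize(proj_s(student_hidden), dim=-1)  # (B, d_shared)
    z_t = F.normalize(proj_t(teacher_hidden), dim=-1)  # (B, d_shared)

    # Pairwise similarities: diagonal entries are the positive pairs
    # (same input seen by student and teacher); off-diagonal entries
    # act as negatives drawn from the rest of the batch.
    logits = z_s @ z_t.t() / temperature                # (B, B)
    targets = torch.arange(z_s.size(0), device=z_s.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for pooled BERT layer outputs.
if __name__ == "__main__":
    B, d_student, d_teacher, d_shared = 8, 312, 768, 128
    proj_s = torch.nn.Linear(d_student, d_shared)
    proj_t = torch.nn.Linear(d_teacher, d_shared)
    s_h = torch.randn(B, d_student)   # student intermediate layer (pooled)
    t_h = torch.randn(B, d_teacher)   # teacher intermediate layer (pooled)
    print(contrastive_intermediate_loss(s_h, t_h, proj_s, proj_t).item())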
Pages: 498 - 508
Number of pages: 11
Related papers
50 items in total
  • [11] Knowledge Distillation Beyond Model Compression
    Sarfraz, Fahad
    Arani, Elahe
    Zonooz, Bahram
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 6136 - 6143
  • [12] Contrastive Visual Semantic Pretraining Magnifies the Semantics of Natural Language Representations
    Wolfe, Robert
    Caliskan, Aylin
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 3050 - 3061
  • [14] SCL-IKD: intermediate knowledge distillation via supervised contrastive representation learning
    Sharma, Saurabh
    Lodhi, Shikhar Singh
    Chandra, Joydeep
    APPLIED INTELLIGENCE, 2023, 53 (23) : 28520 - 28541
  • [15] Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains
    Pan, Haojie
    Wang, Chengyu
    Qiu, Minghui
    Zhang, Yichang
    Li, Yaliang
    Huang, Jun
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2021), VOL 1, 2021, : 3026 - 3036
  • [16] Contrastive Model Inversion for Data-Free Knowledge Distillation
    Fang, Gongfan
    Song, Jie
    Wang, Xinchao
    Shen, Chengchao
    Wang, Xingen
    Song, Mingli
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 2374 - 2380
  • [17] Model Selection - Knowledge Distillation Framework for Model Compression
    Chen, Renhai
    Yuan, Shimin
    Wang, Shaobo
    Li, Zhenghan
    Xing, Meng
    Feng, Zhiyong
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
  • [18] CONTRACLM: Contrastive Learning For Causal Language Model
    Jain, Nihal
    Zhang, Dejiao
    Ahmad, Wasi Uddin
    Wang, Zijian
    Nan, Feng
    Li, Xiaopeng
    Tan, Ming
    Nallapati, Ramesh
    Ray, Baishakhi
    Bhatia, Parminder
    Ma, Xiaofei
    Xiang, Bing
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 6436 - 6459
  • [19] Effective Compression of Language Models by Combining Pruning and Knowledge Distillation
    Chiu, Chi-Yu
    Hong, Ding-Yong
    Liu, Pangfeng
    Wu, Jan-Jan
    2024 IEEE 48TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC 2024, 2024, : 429 - 438
  • [20] Patient Knowledge Distillation for BERT Model Compression
    Sun, Siqi
    Cheng, Yu
    Gan, Zhe
    Liu, Jingjing
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 4323 - 4332