Compressing Pre-trained Models of Code into 3 MB

Cited: 12
Authors
Shi, Jieke [1 ]
Yang, Zhou [1 ]
Xu, Bowen [1 ]
Kang, Hong Jin [1 ]
Lo, David [1 ]
Affiliations
[1] Singapore Management Univ, Sch Comp & Informat Syst, Singapore, Singapore
Funding
National Research Foundation, Singapore;
Keywords
Model Compression; Genetic Algorithm; Pre-Trained Models;
DOI
10.1145/3551349.3556964
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Number
0812
Abstract
Although large pre-trained models of code have delivered significant advancements in various code processing tasks, there is an impediment to their wide and fluent adoption in software developers' daily workflows: these large models consume hundreds of megabytes of memory and run slowly on personal devices, which complicates model deployment and greatly degrades the user experience. This motivates us to propose Compressor, a novel approach that compresses pre-trained models of code into extremely small models with negligible performance sacrifice. Our method formulates the design of tiny models as simplifying the pre-trained model architecture: searching for a significantly smaller model whose architectural design resembles the original pre-trained model's. Compressor uses a genetic algorithm (GA)-based strategy to guide this simplification process. Prior studies found that models with higher computational cost tend to be more powerful. Inspired by this insight, the GA is designed to maximize a model's Giga floating-point operations (GFLOPs), an indicator of computational cost, subject to a constraint on the target model size. We then use knowledge distillation to train the small model: unlabelled data is fed into the large model, and its outputs are used as labels to train the small model. We evaluate Compressor with two state-of-the-art pre-trained models, i.e., CodeBERT and GraphCodeBERT, on two important tasks, i.e., vulnerability prediction and clone detection. We compress the pre-trained models to 3 MB, 160x smaller than their original size. The results show that the compressed CodeBERT and GraphCodeBERT are 4.31x and 4.15x faster than the original models at inference, respectively. More importantly, they maintain 96.15% and 97.74% of the original performance on the vulnerability prediction task, and even higher ratios (99.20% and 97.52%) of the original performance on the clone detection task.
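The two stages the abstract describes can be sketched in miniature: a GA that maximizes estimated GFLOPs subject to a 3 MB size budget, followed by a distillation loss over the teacher's soft outputs. This is a minimal illustration based only on the abstract; the search space, the parameter and GFLOPs formulas, and all names below are hypothetical simplifications, not Compressor's actual implementation.

```python
import math
import random

# Hypothetical search space of BERT-like architecture hyperparameters.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3, 6, 12],
    "hidden_size": [64, 96, 128, 256],
    "intermediate_size": [256, 512, 1024],
    "vocab_size": [1000, 5000, 10000],
}

def param_count(arch):
    # Rough count: token embeddings plus, per transformer layer,
    # attention projections (4*h*h) and feed-forward (2*h*i); biases ignored.
    h, i = arch["hidden_size"], arch["intermediate_size"]
    return arch["vocab_size"] * h + arch["num_layers"] * (4 * h * h + 2 * h * i)

def gflops(arch, seq_len=512):
    # Rough forward-pass cost (2 FLOPs per multiply-accumulate), in GFLOPs.
    h, i = arch["hidden_size"], arch["intermediate_size"]
    per_layer = 2 * seq_len * (4 * h * h + 2 * h * i) + 2 * seq_len * seq_len * h
    return arch["num_layers"] * per_layer / 1e9

def fitness(arch, size_budget=3 * 1024 * 1024):
    # Maximize GFLOPs subject to the size budget (4 bytes per float32 weight).
    if 4 * param_count(arch) > size_budget:
        return float("-inf")  # infeasible: over the 3 MB budget
    return gflops(arch)

def evolve(generations=30, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Seed the population with one tiny architecture known to fit the budget.
    tiny = {"num_layers": 1, "hidden_size": 64,
            "intermediate_size": 256, "vocab_size": 1000}
    pop = [tiny] + [{k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
                    for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # elitism: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = {k: rng.choice([a[k], b[k]]) for k in SEARCH_SPACE}  # crossover
            if rng.random() < 0.3:                                      # mutation
                key = rng.choice(list(SEARCH_SPACE))
                child[key] = rng.choice(SEARCH_SPACE[key])
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    # Soft cross-entropy between temperature-scaled output distributions:
    # on unlabelled data, the teacher's soft outputs act as training labels.
    def softmax(logits):
        m = max(logits)
        exps = [math.exp((x - m) / temperature) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]
    p, q = softmax(teacher_logits), softmax(student_logits)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

In the paper's actual setting, the size budget and cost estimates would be computed from the real model configuration, and distillation would run over a full unlabelled corpus; here the fitness is purely analytic so the sketch stays self-contained.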
Pages: 12
Related Papers
50 records in total
  • [21] Refining Pre-Trained Motion Models
    Sun, Xinglong
    Harley, Adam W.
    Guibas, Leonidas J.
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2024, 2024, : 4932 - 4938
  • [22] Efficiently Robustify Pre-Trained Models
    Jain, Nishant
    Behl, Harkirat
    Rawat, Yogesh Singh
    Vineet, Vibhav
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 5482 - 5492
  • [23] Pre-trained Models for Sonar Images
    Valdenegro-Toro, Matias
    Preciado-Grijalva, Alan
    Wehbe, Bilal
    OCEANS 2021: SAN DIEGO - PORTO, 2021,
  • [24] Pre-Trained Language Models and Their Applications
    Wang, Haifeng
    Li, Jiwei
    Wu, Hua
    Hovy, Eduard
    Sun, Yu
    ENGINEERING, 2023, 25 : 51 - 65
  • [25] CODEFUSION: A Pre-trained Diffusion Model for Code Generation
    Singh, Mukul
    Cambronero, Jose
    Gulwani, Sumit
    Le, Vu
    Negreanu, Carina
    Verbruggen, Gust
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 11697 - 11708
  • [26] On the Role of Pre-trained Embeddings in Binary Code Analysis
    Maier, Alwin
    Weissberg, Felix
    Rieck, Konrad
    PROCEEDINGS OF THE 19TH ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ACM ASIACCS 2024, 2024, : 795 - 810
  • [27] Bridge and Hint: Extending Pre-trained Language Models for Long-Range Code
    Chen, Yujia
    Gao, Cuiyun
    Yang, Zezhou
    Zhang, Hongyu
    Liao, Qing
    PROCEEDINGS OF THE 33RD ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2024, 2024, : 274 - 286
  • [28] PTM-APIRec: Leveraging Pre-trained Models of Source Code in API Recommendation
    Li, Zhihao
    Li, Chuanyi
    Tang, Ze
    Huang, Wanhong
    Ge, Jidong
    Luo, Bin
    Ng, Vincent
    Wang, Ting
    Hu, Yucheng
    Zhang, Xiaopeng
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (03)
  • [29] RAPID: Zero-Shot Domain Adaptation for Code Search with Pre-Trained Models
    Fan, Guodong
    Chen, Shizhan
    Gao, Cuiyun
    Xiao, Jianmao
    Zhang, Tao
    Feng, Zhiyong
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (05)
  • [30] CodeAttack: Code-Based Adversarial Attacks for Pre-trained Programming Language Models
    Jha, Akshita
    Reddy, Chandan K.
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 12, 2023, : 14892 - 14900