Prompt Tuning in Code Intelligence: An Experimental Evaluation

Cited by: 6
Authors
Wang, Chaozheng [1]
Yang, Yuanhang [1]
Gao, Cuiyun [1]
Peng, Yun [2]
Zhang, Hongyu [3,4]
Lyu, Michael R. [2]
Affiliations
[1] Harbin Inst Technol, Shenzhen 518055, Peoples R China
[2] Chinese Univ Hong Kong, Hong Kong 999077, Peoples R China
[3] Univ Newcastle, Newcastle, Australia
[4] Chongqing Univ, Chongqing 400044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Tuning; Codes; Task analysis; Training; Predictive models; Adaptation models; Source coding; Code intelligence; prompt tuning; empirical study;
DOI
10.1109/TSE.2023.3313881
CLC Number
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
Pre-trained models have been shown to be effective in many code intelligence tasks, such as automatic code summarization and defect prediction. These models are pre-trained on large-scale unlabeled corpora and then fine-tuned on downstream tasks. However, as the inputs to pre-training and downstream tasks take different forms, it is hard to fully exploit the knowledge of pre-trained models. Besides, the performance of fine-tuning strongly relies on the amount of downstream task data, while in practice data scarcity is common. Recent studies in the natural language processing (NLP) field show that prompt tuning, a new tuning paradigm, alleviates the above issues and achieves promising results in various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively scarce data. In this article, we empirically evaluate the usage and effect of prompt tuning in code intelligence tasks. We conduct prompt tuning on the popular pre-trained models CodeBERT and CodeT5 and experiment with four code intelligence tasks: defect prediction, code search, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning in all four tasks. In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU scores of fine-tuning by more than 26% on average for code summarization. Our results suggest that, instead of fine-tuning, we could adapt prompt tuning for code intelligence tasks to achieve better performance, especially when task-specific data are scarce. We also discuss the implications of adapting prompt tuning in code intelligence tasks.
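To make the evaluated paradigm concrete, the sketch below shows a hard (natural-language) prompt applied to CodeT5 for code summarization via the Hugging Face Transformers library; the checkpoint name, prompt template, example function, and reference summary are illustrative assumptions rather than the exact configuration used in the article.

```python
# Illustrative sketch (assumptions noted above): hard prompt tuning of CodeT5
# for code summarization with Hugging Face Transformers.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

checkpoint = "Salesforce/codet5-base"   # assumed public CodeT5 checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

code = "def add(a, b):\n    return a + b"
# Hard prompt: wrap the raw code in a natural-language template so the
# downstream input resembles the text-to-text pre-training format.
source = f"Summarize the following Python function: {code} Summary:"
inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)

# One tuning step: pair the prompt-wrapped input with a reference summary
# and update the model with the standard sequence-to-sequence loss.
model.train()
labels = tokenizer("Add two numbers.", return_tensors="pt").input_ids
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()

# Inference reuses the same prompt template.
model.eval()
with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In soft prompt tuning, by contrast, learnable prompt vectors are prepended to the input embeddings instead of (or in addition to) a hand-written template, and those vectors are optimized during tuning.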
Pages: 4869 - 4885
Number of Pages: 17
Related Papers
50 records in total
  • [1] No More Fine-Tuning? An Experimental Evaluation of Prompt Tuning in Code Intelligence
    Wang, Chaozheng
    Yang, Yuanhang
    Gao, Cuiyun
    Peng, Yun
    Zhang, Hongyu
    Lyu, Michael R.
    PROCEEDINGS OF THE 30TH ACM JOINT MEETING EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, ESEC/FSE 2022, 2022, : 382 - 394
  • [2] Leveraging meta-data of code for adapting prompt tuning for code summarization
    Jiang, Zhihua
    Wang, Di
    Rao, Dongning
    APPLIED INTELLIGENCE, 2025, 55 (02)
  • [3] Leveraging meta-data of code for adapting prompt tuning for code summarization
    Jiang, Zhihua
    Wang, Di
    Rao, Dongning
    APPLIED INTELLIGENCE, 2025, 55 (3)
  • [4] Zero-Shot Code Representation Learning via Prompt Tuning
    Cui, Nan
    Gu, Xiaodong
    Shen, Beijun
    arXiv
  • [5] Context-focused Prompt Tuning Pre-trained Code Models to Improve Code Summarization
    Pan, Xinglu
    Liu, Chenxiao
    Zou, Yanzhen
    Zhao, Xianlin
    Xie, Bing
    2024 IEEE 48TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC 2024, 2024, : 1344 - 1349
  • [6] Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization
    Razdaibiedina, Anastasia
    Mao, Yuning
    Khabsa, Madian
    Lewis, Mike
    Hou, Rui
    Ba, Jimmy
    Almahairi, Amjad
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 6740 - 6757
  • [7] Improve Code Summarization via Prompt-Tuning CodeT5
    Li, Huanzhen
    WUHAN UNIVERSITY JOURNAL OF NATURAL SCIENCES, 2023, 28 (06) : 474 - 482
  • [8] TI-Prompt: Towards a Prompt Tuning Method for Few-shot Threat Intelligence Twitter Classification
    You, Yizhe
    Jiang, Zhengwei
    Zhang, Kai
    Jiang, Jun
    Wang, Xuren
    Zhang, Zheyu
    Wang, Shirui
    Feng, Huamin
    2022 IEEE 46TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE (COMPSAC 2022), 2022, : 272 - 279
  • [9] Visual Prompt Tuning
    Jia, Menglin
    Tang, Luming
    Chen, Bor-Chun
    Cardie, Claire
    Belongie, Serge
    Hariharan, Bharath
    Lim, Ser-Nam
    COMPUTER VISION - ECCV 2022, PT XXXIII, 2022, 13693 : 709 - 727
  • [10] Prompt-aligned Gradient for Prompt Tuning
    Zhu, Beier
    Niu, Yulei
    Han, Yucheng
    Wu, Yue
    Zhang, Hanwang
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 15613 - 15623