Prompt Tuning in Code Intelligence: An Experimental Evaluation

Cited by: 6
Authors
Wang, Chaozheng [1 ]
Yang, Yuanhang [1 ]
Gao, Cuiyun [1 ]
Peng, Yun [2 ]
Zhang, Hongyu [3 ,4 ]
Lyu, Michael R. [2 ]
Affiliations
[1] Harbin Inst Technol, Shenzhen 518055, Peoples R China
[2] Chinese Univ Hong Kong, Hong Kong 999077, Peoples R China
[3] Univ Newcastle, Newcastle, Australia
[4] Chongqing Univ, Chongqing 400044, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Tuning; Codes; Task analysis; Training; Predictive models; Adaptation models; Source coding; Code intelligence; prompt tuning; empirical study
DOI
10.1109/TSE.2023.3313881
CLC Number
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Pre-trained models have been shown effective in many code intelligence tasks, such as automatic code summarization and defect prediction. These models are pre-trained on large-scale unlabeled corpora and then fine-tuned on downstream tasks. However, as the inputs to pre-training and downstream tasks are in different forms, it is hard to fully exploit the knowledge of pre-trained models. Besides, the performance of fine-tuning strongly relies on the amount of downstream task data, while in practice, data-scarce scenarios are common. Recent studies in the natural language processing (NLP) field show that prompt tuning, a new tuning paradigm, alleviates the above issues and achieves promising results in various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively scarce data. In this article, we empirically evaluate the usage and effect of prompt tuning in code intelligence tasks. We conduct prompt tuning on the popular pre-trained models CodeBERT and CodeT5 and experiment on four code intelligence tasks: defect prediction, code search, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning in all four tasks. In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU scores of fine-tuning by more than 26% on average for code summarization. Our results suggest that instead of fine-tuning, we could adapt prompt tuning for code intelligence tasks to achieve better performance, especially when lacking task-specific data. We also discuss the implications of adapting prompt tuning to code intelligence tasks.
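As a concrete illustration of the setup the abstract describes, the sketch below shows how a cloze-style (hard) prompt and a verbalizer recast defect prediction as masked-token prediction with CodeBERT. The template wording, the label words ("buggy"/"correct"), the checkpoint name, and the toy snippet are assumptions for illustration, not the authors' exact configuration; in the paper's setting the template (or learned soft prompt vectors) and the model would additionally be tuned on labeled task data.
```python
# Minimal sketch of a cloze-style prompt with a verbalizer for defect
# prediction, using Hugging Face Transformers and the CodeBERT MLM checkpoint.
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("microsoft/codebert-base-mlm")
model = RobertaForMaskedLM.from_pretrained("microsoft/codebert-base-mlm")
model.eval()

code = "int div(int a, int b) { return a / b; }"  # toy input; may divide by zero

# The prompt recasts classification as masked-token prediction: the model fills
# the <mask> slot, and a verbalizer maps label words back to class labels.
prompt = f"{code} The code is {tokenizer.mask_token} ."
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Locate the masked position and score each label word there
# (only the first subtoken of each label word is scored, for simplicity).
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
verbalizer = {"defective": "buggy", "clean": "correct"}  # illustrative label words
scores = {
    label: logits[0, mask_pos, tokenizer.convert_tokens_to_ids(
        tokenizer.tokenize(" " + word))[0]].item()
    for label, word in verbalizer.items()
}
print(max(scores, key=scores.get))  # predicted class under this prompt
```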
Pages: 4869-4885
Page count: 17