PTE: Prompt tuning with ensemble verbalizers

Times Cited: 0
Authors
Liang, Liheng [1 ]
Wang, Guancheng [2 ]
Lin, Cong [2 ]
Feng, Zhuowen [3 ]
Affiliations
[1] Guangdong Ocean Univ, Fac Math & Comp Sci, Zhanjiang 524088, Peoples R China
[2] Guangdong Ocean Univ, Coll Elect & Informat Engn, Zhanjiang 524088, Peoples R China
[3] Guangdong Ocean Univ, Coll Literature & News Commun, Zhanjiang 524088, Peoples R China
Keywords
Prompt tuning; Few-shot learning; Text classification; Pre-trained language models;
DOI
10.1016/j.eswa.2024.125600
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Prompt tuning has achieved remarkable success in improving the performance of Pre-trained Language Models (PLMs) across various downstream NLP tasks, particularly in scenarios with limited downstream data. Reframing tasks as fill-in-the-blank questions is an effective approach within prompt tuning. However, this approach requires mapping labels through a verbalizer consisting of one or more label tokens and is constrained by manually crafted prompts. Furthermore, most existing automatic construction methods either introduce external resources or rely solely on discrete or continuous optimization strategies. To address this issue, we propose a method for optimizing discrete verbalizers via gradient descent, which we refer to as PTE. This method integrates discrete tokens into verbalizers that can be continuously optimized, combining the distinct advantages of both discrete and continuous optimization strategies. In contrast to prior approaches, ours does not rely on prompts generated by other models or on prior knowledge, requiring only the addition of a single matrix. The approach is remarkably simple and flexible, enabling prompt optimization while preserving the interpretability of output label tokens, free of the constraints imposed by a discrete vocabulary. Finally, applying this method to text classification tasks, we observe that PTE achieves results comparable to, and in some cases surpassing, previous methods even in its most concise configuration. This provides a simple, intuitive, and efficient solution for automatically constructing verbalizers. Moreover, through quantitative analysis of the optimized verbalizers, we find that language models likely rely not only on semantic information but also on other features for text classification. This observation opens new avenues for future research and model enhancements.
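To make the general idea concrete, the following is a minimal illustrative PyTorch sketch (not the authors' released code) of a gradient-optimized verbalizer for prompt-based text classification: the [MASK]-position vocabulary logits of a masked language model are mapped to class scores through a trainable matrix that is initialized from hand-picked label tokens and then refined by gradient descent. The model name "bert-base-uncased", the label words, the template, and the class and function names are assumptions for illustration only.

# Minimal sketch of a gradient-optimized verbalizer (assumptions noted above).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"           # assumption: any MLM-style PLM could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
plm = AutoModelForMaskedLM.from_pretrained(model_name)
plm.requires_grad_(False)                  # keep the PLM frozen; only the verbalizer is trained here
plm.eval()

label_words = {0: "bad", 1: "good"}        # illustrative seed label tokens per class

class SoftVerbalizer(nn.Module):
    """Maps [MASK]-position vocabulary logits to class logits via a trainable matrix."""
    def __init__(self, vocab_size, label_words, tokenizer):
        super().__init__()
        weight = torch.zeros(len(label_words), vocab_size)
        for cls, word in label_words.items():
            tok_id = tokenizer.convert_tokens_to_ids(word)
            weight[cls, tok_id] = 1.0      # start from the discrete verbalizer ...
        self.weight = nn.Parameter(weight) # ... then optimize it continuously

    def forward(self, mask_logits):        # (batch, vocab) -> (batch, n_classes)
        return mask_logits @ self.weight.t()

verbalizer = SoftVerbalizer(plm.config.vocab_size, label_words, tokenizer)

def class_logits(texts):
    # Wrap each input in a fill-in-the-blank template (illustrative template).
    prompts = [f"{t} It was {tokenizer.mask_token}." for t in texts]
    batch = tokenizer(prompts, return_tensors="pt", padding=True, truncation=True)
    out = plm(**batch).logits                            # (batch, seq, vocab)
    mask_pos = (batch["input_ids"] == tokenizer.mask_token_id)
    mask_logits = out[mask_pos]                          # one [MASK] per prompt -> (batch, vocab)
    return verbalizer(mask_logits)

# Few-shot training loop: gradients update only the verbalizer matrix.
optimizer = torch.optim.AdamW(verbalizer.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
texts, labels = ["great movie", "terrible plot"], torch.tensor([1, 0])
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(class_logits(texts), labels)
    loss.backward()
    optimizer.step()

After training, each row of the learned matrix can be inspected (e.g., its largest entries over the vocabulary) to recover interpretable label tokens per class, which is the kind of quantitative analysis of optimized verbalizers the abstract alludes to.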
Pages: 10
Related Papers
50 records in total
  • [1] Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization
    Razdaibiedina, Anastasia
    Mao, Yuning
    Khabsa, Madian
    Lewis, Mike
    Hou, Rui
    Ba, Jimmy
    Almahairi, Amjad
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 6740 - 6757
  • [2] Visual Prompt Tuning
    Jia, Menglin
    Tang, Luming
    Chen, Bor-Chun
    Cardie, Claire
    Belongie, Serge
    Hariharan, Bharath
    Lim, Ser-Nam
    COMPUTER VISION - ECCV 2022, PT XXXIII, 2022, 13693 : 709 - 727
  • [3] Prompt-aligned Gradient for Prompt Tuning
    Zhu, Beier
    Niu, Yulei
    Han, Yucheng
    Wu, Yue
    Zhang, Hanwang
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 15613 - 15623
  • [4] Compressed Video Prompt Tuning
    Li, Bing
    Chen, Jiaxin
    Bao, Xiuguo
    Huang, Di
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [5] DePT: Decoupled Prompt Tuning
    Zhang, Ji
    Wu, Shihan
    Gao, Lianli
    Shen, Heng Tao
    Song, Jingkuan
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 12924 - 12933
  • [6] Universality and Limitations of Prompt Tuning
    Wang, Yihan
    Chauhan, Jatin
    Wang, Wei
    Hsieh, Cho-Jui
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [7] When Adversarial Training Meets Prompt Tuning: Adversarial Dual Prompt Tuning for Unsupervised Domain Adaptation
    Cui, Chaoran
    Liu, Ziyi
    Gong, Shuai
    Zhu, Lei
    Zhang, Chunyun
    Liu, Hui
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 1427 - 1440
  • [8] LION: Implicit Vision Prompt Tuning
    Wang, Haixin
    Chang, Jianlong
    Zhai, Yihang
    Luo, Xiao
    Sun, Jinan
    Lin, Zhouchen
    Tian, Qi
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6, 2024, : 5372 - 5380
  • [9] Prompt Tuning in Biomedical Relation Extraction
    He, Jianping
    Li, Fang
    Li, Jianfu
    Hu, Xinyue
    Nian, Yi
    Xiang, Yang
    Wang, Jingqi
    Wei, Qiang
    Li, Yiming
    Xu, Hua
    Tao, Cui
    JOURNAL OF HEALTHCARE INFORMATICS RESEARCH, 2024, 8 (02) : 206 - 224
  • [10] Review of Research on Adapter and Prompt Tuning
    Lin, Lingde
    Liu, Na
    Wang, Zhengan
    COMPUTER ENGINEERING AND APPLICATIONS, 59 (02): 12 - 21