PTE: Prompt tuning with ensemble verbalizers

Cited: 0
Authors
Liang, Liheng [1 ]
Wang, Guancheng [2 ]
Lin, Cong [2 ]
Feng, Zhuowen [3 ]
Affiliations
[1] Guangdong Ocean Univ, Fac Math & Comp Sci, Zhanjiang 524088, Peoples R China
[2] Guangdong Ocean Univ, Coll Elect & Informat Engn, Zhanjiang 524088, Peoples R China
[3] Guangdong Ocean Univ, Coll Literature & News Commun, Zhanjiang 524088, Peoples R China
Keywords
Prompt tuning; Few-shot learning; Text classification; Pre-trained language models
DOI
10.1016/j.eswa.2024.125600
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Prompt tuning has achieved remarkable success in improving the performance of Pre-trained Language Models (PLMs) across a variety of downstream NLP tasks, particularly when downstream data are limited. Reframing tasks as fill-in-the-blank questions is an effective prompt-tuning approach. However, it requires mapping labels through a verbalizer consisting of one or more label tokens, and it is constrained by manually crafted prompts. Furthermore, most existing automatic construction methods either introduce external resources or rely solely on discrete or continuous optimization strategies. To address these issues, we propose a method for optimizing discrete verbalizers via gradient descent, which we refer to as PTE. The method embeds discrete tokens into verbalizers that can be optimized continuously, combining the distinct advantages of discrete and continuous optimization strategies. In contrast to prior approaches, ours relies neither on prompts generated by other models nor on prior knowledge; it merely adds a matrix. The approach is simple and flexible, enabling prompt optimization while preserving the interpretability of output label tokens, free of the constraints imposed by discrete vocabularies. Applying the method to text classification tasks, we observe that PTE achieves results comparable to, and in some cases surpassing, previous methods despite its extreme conciseness. This furnishes a simple, intuitive, and efficient solution for automatically constructing verbalizers. Moreover, through quantitative analysis of the optimized verbalizers, we find that language models likely rely not only on semantic information but also on other features for text classification, a finding that opens new avenues for future research and model improvement.
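A minimal sketch of the idea described in the abstract, assuming a PyTorch/Hugging Face setup: the verbalizer is a learnable class-by-vocabulary matrix, initialized one-hot on discrete seed label words and then optimized by gradient descent through a frozen masked language model. This is an illustration of a gradient-optimized verbalizer in general, not the authors' implementation; the template, seed label words, and hyperparameters below are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
mlm.eval()  # the PLM stays frozen; only the verbalizer matrix is tuned

# Hypothetical seed label words, one per class (binary sentiment here).
label_words = ["terrible", "great"]
num_classes, vocab_size = len(label_words), tokenizer.vocab_size

# Verbalizer as a learnable class-by-vocabulary matrix, initialized
# one-hot on the discrete seed tokens, then relaxed via softmax so it
# can be optimized continuously by gradient descent.
init = torch.zeros(num_classes, vocab_size)
for i, word in enumerate(label_words):
    init[i, tokenizer.convert_tokens_to_ids(word)] = 1.0
verbalizer = nn.Parameter(init)
optimizer = torch.optim.Adam([verbalizer], lr=1e-3)

def class_logits(texts):
    # Hypothetical fill-in-the-blank template.
    prompts = [f"{t} It was {tokenizer.mask_token}." for t in texts]
    batch = tokenizer(prompts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        token_logits = mlm(**batch).logits                      # (B, L, V)
    pos = (batch["input_ids"] == tokenizer.mask_token_id).nonzero()
    mask_logits = token_logits[pos[:, 0], pos[:, 1]]            # (B, V)
    # Each class row is a softmax mixture over vocabulary tokens.
    return mask_logits @ F.softmax(verbalizer, dim=-1).T        # (B, C)

# One few-shot gradient step on toy data.
texts, labels = ["I loved this movie."], torch.tensor([1])
optimizer.zero_grad()
loss = F.cross_entropy(class_logits(texts), labels)
loss.backward()
optimizer.step()

# The top-weighted tokens per class stay human-readable after tuning.
top = F.softmax(verbalizer, dim=-1).topk(3, dim=-1).indices
for i, ids in enumerate(top):
    print(label_words[i], "->", tokenizer.convert_ids_to_tokens(ids.tolist()))

Initializing the matrix one-hot on discrete seed tokens keeps the starting point interpretable, while the softmax relaxation lets gradient descent reweight vocabulary tokens continuously, mirroring the combination of discrete and continuous strategies the abstract describes.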
Pages: 10
Related Papers
50 records in total
  • [41] Judicial Text Relation Extraction Based on Prompt Tuning
    Chen, Xue
    Li, Yi
    Fan, Shuhuan
    Hou, Mengshu
    2024 2ND ASIA CONFERENCE ON COMPUTER VISION, IMAGE PROCESSING AND PATTERN RECOGNITION, CVIPPR 2024, 2024,
  • [42] Progressive Multi-modal Conditional Prompt Tuning
    Qiu, Xiaoyu
    Feng, Hao
    Wang, Yuechen
    Zhou, Wengang
    Li, Houqiang
    PROCEEDINGS OF THE 4TH ANNUAL ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2024, 2024, : 46 - 54
  • [43] Adversarial Prompt Tuning for Vision-Language Models
    Zhang, Jiaming
    Ma, Xingjun
    Wang, Xin
    Qiu, Lingyu
    Wang, Jiaqi
    Jiang, Yu-Gang
    Sang, Jitao
    COMPUTER VISION - ECCV 2024, PT XLV, 2025, 15103 : 56 - 72
  • [44] Action-guided prompt tuning for video grounding
    Wang, Jing
    Tsao, Raymon
    Wang, Xuan
    Wang, Xiaojie
    Feng, Fangxiang
    Tian, Shiyu
    Poria, Soujanya
    INFORMATION FUSION, 2025, 113
  • [45] Symbolic Prompt Tuning Completes the App Promotion Graph
    Ouyang, Zhongyu
    Zhang, Chunhui
    Hou, Shifu
    Ma, Shang
    Chen, Chaoran
    Li, Toby
    Xiao, Xusheng
    Zhang, Chuxu
    Ye, Yanfang
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES-APPLIED DATA SCIENCE TRACK, PT X, ECML PKDD 2024, 2024, 14950 : 183 - 198
  • [46] Prompt Tuning for Item Cold-start Recommendation
    Jiang, Yuezihan
    Chen, Gaode
    Zhang, Wenhan
    Wang, Jingchi
    Jiang, Yinjie
    Zhang, Qi
    Lin, Jingjian
    Jiang, Peng
    Bian, Kaigui
    PROCEEDINGS OF THE EIGHTEENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2024, 2024, : 411 - 421
  • [47] An approach for tuning ensemble prediction systems
    Solonen, Antti
    Jarvinen, Heikki
    TELLUS SERIES A-DYNAMIC METEOROLOGY AND OCEANOGRAPHY, 2013, 65
  • [48] Efficient Policy Adaptation with Contrastive Prompt Ensemble for Embodied Agents
    Choi, Wonje
    Kim, Woo Kyung
    Kim, SeungHyun
    Woo, Honguk
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [49] PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine
    Zhang, Chenrui
    Liu, Lin
    Wang, Chuyuan
    Sun, Xiao
    Wang, Hongyu
    Wang, Jinpeng
    Cai, Mingchen
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 17, 2024, : 19525 - 19532
  • [50] Tibetan Text Classification based on Prompt Learning and Ensemble Learning
    Tang, Chao
    Tan, Zelin
    Zhao, Xiaobing
    ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2025, 24 (02)