REKP: Refined External Knowledge into Prompt-Tuning for Few-Shot Text Classification

Cited by: 1
Authors
Dang, Yuzhuo [1 ]
Chen, Weijie [1 ]
Zhang, Xin [1 ]
Chen, Honghui [1 ]
Affiliations
[1] Natl Univ Def Technol, Sci & Technol Informat Syst Engn Lab, Changsha 410073, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
few-shot learning; text classification; prompt learning; pre-trained language model;
DOI
10.3390/math11234780
CLC Number
O1 [Mathematics];
Subject Classification
0701; 070101;
Abstract
Text classification is a machine learning technique that assigns a given text to predefined categories, enabling the automatic analysis and processing of textual data. However, the number of new text categories is growing faster than human annotation can keep pace with, leaving many new categories with very little labeled data. As a result, conventional deep neural networks are prone to over-fitting, which limits their real-world applicability. To address this data scarcity, researchers have turned to few-shot learning. One efficient method is prompt-tuning, which reformulates the input text as a masked prediction problem featuring a [MASK] token. Through a verbalizer, the model maps output words to labels, enabling accurate prediction. Nevertheless, previous prompt-based adaptation approaches often relied on manually constructed verbalizers or a single label word to represent an entire class, which makes the mapping granularity coarse and causes words to be mapped to their labels inaccurately. To address these issues, we propose to enhance the verbalizer and construct the Refined External Knowledge into Prompt-tuning (REKP) model. We employ external knowledge bases to enlarge the mapping space of label words and design three refinement methods to remove noisy data. We conduct comprehensive experiments on four benchmark datasets, namely AG's News, Yahoo, IMDB, and Amazon. The results demonstrate that REKP outperforms state-of-the-art baselines in terms of Micro-F1 on knowledge-enhanced text classification. In addition, we conduct an ablation study to verify the function of each module in our model, revealing that the refinement module contributes significantly to classification accuracy.
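The verbalizer mechanism described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the function name, the toy probabilities, and the label-word sets are all invented for demonstration. It shows how a multi-word verbalizer, expanded with knowledge-base terms and then refined, scores each class by aggregating the masked-language-model probabilities of its label words at the [MASK] position.

```python
# Hedged sketch of a knowledge-expanded verbalizer for prompt-tuning.
# All names and numbers below are illustrative assumptions.
from typing import Dict, List


def verbalize(mask_probs: Dict[str, float],
              verbalizer: Dict[str, List[str]]) -> str:
    """Return the label whose label words have the highest mean
    probability at the [MASK] position."""
    scores = {}
    for label, words in verbalizer.items():
        probs = [mask_probs.get(w, 0.0) for w in words]
        scores[label] = sum(probs) / len(probs)
    return max(scores, key=scores.get)


# Toy MLM output over a few vocabulary words at the [MASK] position.
mask_probs = {"sports": 0.30, "football": 0.25, "politics": 0.05,
              "government": 0.08, "election": 0.07}

# Each class maps to several label words (e.g. gathered from an external
# knowledge base, with noisy words already removed by refinement),
# rather than a single manually chosen word.
verbalizer = {
    "Sports": ["sports", "football"],
    "World": ["politics", "government", "election"],
}

print(verbalize(mask_probs, verbalizer))  # → Sports
```

Averaging over an expanded word set is what gives the mapping its finer granularity: a text whose [MASK] prediction favors "football" still contributes to the "Sports" class even if the single word "sports" scores low.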
Pages: 16
Related Papers
50 in total
  • [1] An enhanced few-shot text classification approach by integrating topic modeling and prompt-tuning
    Zhang, Yinghui
    Xu, Yichun
    Dong, Fangmin
    NEUROCOMPUTING, 2025, 617
  • [2] KPT++: Refined knowledgeable prompt tuning for few-shot text classification
    Ni, Shiwen
    Kao, Hung-Yu
    KNOWLEDGE-BASED SYSTEMS, 2023, 274
  • [3] Ontology-enhanced Prompt-tuning for Few-shot Learning
    Ye, Hongbin
    Zhang, Ningyu
    Deng, Shumin
    Chen, Xiang
    Chen, Hui
    Xiong, Feiyu
    Chen, Xi
    Chen, Huajun
    PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22), 2022, : 778 - 787
  • [4] Few-Shot Text Classification with External Knowledge Expansion
    Guan, Jian
    Xu, Rui
    Ya, Jing
    Tang, Qiu
    Xue, Jidong
    Zhang, Ni
    2021 5TH INTERNATIONAL CONFERENCE ON INNOVATION IN ARTIFICIAL INTELLIGENCE (ICIAI 2021), 2021, : 184 - 189
  • [5] Incorporating target-aware knowledge into prompt-tuning for few-shot stance detection
    Wang, Shaokang
    Sun, Fuhui
    Wang, Xiaoyan
    Pan, Li
    INFORMATION PROCESSING & MANAGEMENT, 2024, 61 (05)
  • [6] Knowledge-Enhanced Prompt Learning for Few-Shot Text Classification
    Liu, Jinshuo
    Yang, Lu
    BIG DATA AND COGNITIVE COMPUTING, 2024, 8 (04)
  • [7] Knowledge-Guided Prompt Learning for Few-Shot Text Classification
    Wang, Liangguo
    Chen, Ruoyu
    Li, Li
    ELECTRONICS, 2023, 12 (06)
  • [8] Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification
    Hu, Shengding
    Ding, Ning
    Wang, Huadong
    Liu, Zhiyuan
    Wang, Jingang
    Li, Juanzi
    Wu, Wei
    Sun, Maosong
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 2225 - 2240
  • [9] Adaptive multimodal prompt-tuning model for few-shot multimodal sentiment analysis
    Xiang, Yan
    Zhang, Anlan
    Guo, Junjun
    Huang, Yuxin
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2025,
  • [10] Commonsense Knowledge-Aware Prompt Tuning for Few-Shot NOTA Relation Classification
    Lv, Bo
    Jin, Li
    Zhang, Yanan
    Wang, Hao
    Li, Xiaoyu
    Guo, Zhi
    APPLIED SCIENCES-BASEL, 2022, 12 (04):