REKP: Refined External Knowledge into Prompt-Tuning for Few-Shot Text Classification

Cited by: 1
Authors
Dang, Yuzhuo [1 ]
Chen, Weijie [1 ]
Zhang, Xin [1 ]
Chen, Honghui [1 ]
Affiliations
[1] Natl Univ Def Technol, Sci & Technol Informat Syst Engn Lab, Changsha 410073, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
few-shot learning; text classification; prompt learning; pre-trained language model;
DOI
10.3390/math11234780
Chinese Library Classification
O1 [Mathematics];
Discipline Code
0701; 070101;
Abstract
Text classification is a machine learning technique that assigns a given text to predefined categories, enabling the automatic analysis and processing of textual data. However, the number of new text categories grows faster than human annotation can keep pace with, so many new categories lack sufficient labeled data. As a result, conventional deep neural networks overfit, which limits their real-world applicability. To address this data scarcity, researchers have turned to few-shot learning. One efficient method is prompt-tuning, which reformulates the input text as a masked-word prediction problem featuring a [MASK] token: a verbalizer maps the words predicted at the mask position to class labels, enabling classification. Nevertheless, previous prompt-based adaptation approaches often relied on manually constructed verbalizers, or on a single label word to represent an entire class, which makes the mapping granularity coarse and prevents predicted words from being accurately mapped to their labels. To address these issues, we propose to enhance the verbalizer and construct the Refined External Knowledge into Prompt-tuning (REKP) model. We employ external knowledge bases to enlarge the mapping space of label words and design three refinement methods to remove noisy data. We conduct comprehensive experiments on four benchmark datasets, namely AG's News, Yahoo, IMDB, and Amazon. The results demonstrate that REKP outperforms state-of-the-art baselines in terms of Micro-F1 on knowledge-enhanced text classification. In addition, we conduct an ablation study to ascertain the contribution of each module, revealing that the refinement module significantly improves classification accuracy.
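To make the verbalizer mechanism concrete, below is a minimal sketch of mask-prediction classification with a multi-word verbalizer, using a BERT-style masked language model from the Hugging Face transformers library. The template, the class names, and the label-word lists are illustrative assumptions standing in for knowledge-base-expanded (and refined) verbalizers; this is not the exact REKP configuration or code.

```python
# Minimal sketch of verbalizer-based prompt classification (not the REKP
# implementation). Template and label words below are hypothetical examples.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Verbalizer: each class is represented by several label words, e.g. expanded
# from an external knowledge base and then refined to drop noisy terms.
# (Words chosen here are single tokens in the BERT vocabulary.)
verbalizer = {
    "sports":   ["sports", "football", "game"],
    "business": ["business", "finance", "economy"],
}

def classify(text: str) -> str:
    # Wrap the input in a template containing the [MASK] token.
    prompt = f"A [MASK] news: {text}"
    inputs = tokenizer(prompt, return_tensors="pt")
    # Locate the mask position in the tokenized sequence.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]  # scores over vocabulary
    # Score each class by aggregating the logits of its label words.
    scores = {}
    for label, words in verbalizer.items():
        ids = [tokenizer.convert_tokens_to_ids(w) for w in words]
        scores[label] = logits[ids].mean().item()
    return max(scores, key=scores.get)

print(classify("The market rallied after the earnings report."))
```

Averaging over several label words per class is what an expanded verbalizer enables; in REKP's setting, the refinement steps would prune noisy words from these lists before aggregation, which is where the paper's reported accuracy gains come from.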
Pages: 16