Domain-control prompt-driven zero-shot relational triplet extraction

Cited by: 0
Authors
Xu, Liang [1 ,2 ]
Gao, Changxia [2 ]
Tian, Xuetao [1 ]
Affiliations
[1] Beijing Normal Univ, Fac Psychol, Beijing 100875, Peoples R China
[2] Beijing Jiaotong Univ, China Engn Res Ctr Network Management Technol High, Sch Comp & Informat Technol, MOE, Beijing 100044, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Relational triplet extraction; Zero-shot learning; Prompt; Pre-trained language models;
DOI
10.1016/j.neucom.2024.127270
CLC classification
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Zero-shot relational triplet extraction is a vital approach to extracting facts from unstructured text without labeled training data. In this task, relations are divided into seen relations used for training and unseen relations used for prediction. One effective strategy first trains a generative model on the seen relations and then generates training samples for the unseen ones; however, it suffers severely from error propagation caused by the noisy generated data. Prompts may offer a feasible remedy, since they have been widely used in cross-domain tasks. In this paper, three preliminary experiments reveal the effectiveness of prompts for triplet extraction and the mechanism behind it: prompting can control the domain of the model's output. Building on this, we propose a simple but effective model for zero-shot relational triplet extraction, which first leverages zero-shot text classification to determine the prompt for each unseen relation, optimizing both its domain and its length, and then extracts triplets via a prompt-driven strategy. Extensive experiments on two public datasets demonstrate that the proposed model outperforms the baselines.
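
The two-stage pipeline described in the abstract can be made concrete with a minimal sketch, assuming the Hugging Face transformers library. Everything here is illustrative rather than the authors' implementation: the model choices (facebook/bart-large-mnli, google/flan-t5-base), the RELATION_PROMPTS templates, and the extract_triplet helper are assumptions, and the paper's optimization of each prompt's domain and length is omitted.

```python
# Minimal sketch of a prompt-driven zero-shot triplet extraction pipeline.
# Model names, prompt templates, and the helper below are illustrative
# assumptions, not the authors' implementation.
from transformers import pipeline

# Stage 1: zero-shot text classification over candidate unseen relations;
# the predicted relation determines which prompt drives extraction.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Hypothetical prompt templates, one per unseen relation. The paper
# additionally optimizes the domain wording and length of such prompts.
RELATION_PROMPTS = {
    "place of birth": "Extract the person and their place of birth: {text}",
    "employer": "Extract the person and their employer: {text}",
}

# Stage 2: a generative pre-trained language model produces the triplet
# conditioned on the relation-specific prompt.
generator = pipeline("text2text-generation", model="google/flan-t5-base")

def extract_triplet(text: str) -> str:
    # Determine the most plausible unseen relation for this sentence.
    result = classifier(text, candidate_labels=list(RELATION_PROMPTS))
    relation = result["labels"][0]
    # Drive extraction with the prompt selected for that relation.
    prompt = RELATION_PROMPTS[relation].format(text=text)
    return generator(prompt, max_new_tokens=32)[0]["generated_text"]

print(extract_triplet("Marie Curie was born in Warsaw."))
```

Selecting the prompt through zero-shot classification means no synthetic training samples need to be generated for unseen relations, which sidesteps the error propagation that limits the generate-then-train strategy.
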
Pages: 12