Few-Shot Tabular Data Enrichment Using Fine-Tuned Transformer Architectures

Cited by: 0
Authors
Harari, Asaf [1 ]
Katz, Gilad [1 ]
Affiliations
[1] Ben Gurion Univ Negev, POB 653, Beer Sheva, Israel
Keywords
KNOWLEDGE-BASE; DBPEDIA;
DOI
N/A
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The enrichment of tabular datasets using external sources has gained significant attention in recent years. Existing solutions, however, either ignore external unstructured data completely or devise dataset-specific solutions. In this study, we propose Few-Shot Transformer-based Enrichment (FeSTE), a generic and robust framework for the enrichment of tabular datasets using unstructured data. By training over multiple datasets, our approach is able to develop generic models that can be applied to additional datasets with minimal training (i.e., few-shot). Our approach is based on an adaptation of BERT, for which we present a novel fine-tuning approach that reformulates the tuples of the datasets as sentences. Our evaluation, conducted on 17 datasets, shows that FeSTE is able to generate high-quality features and significantly outperform existing fine-tuning solutions.
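To illustrate the idea of reformulating dataset tuples as sentences for BERT fine-tuning, the sketch below shows one possible serialization, assuming a Hugging Face transformers-style setup. The column names, the example external text, the "column is value" template, and the sentence-pair framing are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): reformulating a tabular tuple as a
# sentence and pairing it with external unstructured text for BERT fine-tuning.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # assumed binary target of the tabular task
)

def tuple_to_sentence(row: dict) -> str:
    """Serialize one tuple as a pseudo-sentence of 'column is value' clauses (assumed template)."""
    return ". ".join(f"{col} is {val}" for col, val in row.items())

# A hypothetical tuple and an external description (e.g., a DBpedia abstract).
row = {"name": "Beer Sheva", "country": "Israel", "population": 209687}
external_text = "Beersheba is the largest city in the Negev desert of southern Israel."

# Sentence-pair encoding: tuple-as-sentence vs. external text.
inputs = tokenizer(tuple_to_sentence(row), external_text,
                   truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # during fine-tuning, these would be trained against the labels
print(logits.shape)  # torch.Size([1, 2])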
Pages: 1577 - 1591
Page count: 15
Related Papers
50 records in total
  • [21] Automated classification of polyps using deep learning architectures and few-shot learning
    Adrian Krenzer
    Stefan Heil
    Daniel Fitting
    Safa Matti
    Wolfram G. Zoller
    Alexander Hann
    Frank Puppe
    BMC Medical Imaging, 23
  • [22] Multi-Level Fine-Tuned Transformer for Gait Recognition
    Wu, Huimin
    Zhao, Aite
    2022 INTERNATIONAL CONFERENCE ON VIRTUAL REALITY, HUMAN-COMPUTER INTERACTION AND ARTIFICIAL INTELLIGENCE, VRHCIAI, 2022, : 83 - 89
  • [24] DETR-SPP: a fine-tuned vehicle detection with transformer
    Krishnendhu, S. P.
    Mohandas, Prabu
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (09) : 25573 - 25594
  • [25] Genealogical Relationship Extraction from Unstructured Text Using Fine-Tuned Transformer Models
    Parrolivelli, Carloangello
    Stanchev, Lubomir
    2023 IEEE 17TH INTERNATIONAL CONFERENCE ON SEMANTIC COMPUTING, ICSC, 2023, : 167 - 174
  • [26] Classification of Cleft Lip and Palate Speech Using Fine-Tuned Transformer Pretrained Models
    Bhattacharjee, Susmita
    Shekhawat, H. S.
    Prasanna, S. R. M.
    INTELLIGENT HUMAN COMPUTER INTERACTION, IHCI 2023, PT I, 2024, 14531 : 55 - 61
  • [27] Convolutional sparse filter with data and mechanism fusion: A few-shot fault method for transformer
    Qin, Jia
    Yang, Dongsheng
    Wang, Nan
    Ni, Xueqing
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 124
  • [28] Coarse-To-Fine Incremental Few-Shot Learning
    Xiang, Xiang
    Tan, Yuwen
    Wan, Qian
    Ma, Jing
    Yuille, Alan
    Hager, Gregory D.
    COMPUTER VISION, ECCV 2022, PT XXXI, 2022, 13691 : 205 - 222
  • [29] Using fine-tuned conditional probabilities for data transformation of nominal attributes
    Li, Qiude
    Xiong, Qingyu
    Ji, Shengfen
    Wen, Junhao
    Gao, Min
    Yu, Yang
    Xu, Rui
    PATTERN RECOGNITION LETTERS, 2019, 128 : 107 - 114
  • [30] FINE GRAINED FEW-SHOT CLASSIFICATION WITH CONTRASTIVE CLUES
    Banerjee, Anoushka
    Dinesh, Dileep Aroor
    Bhavsar, Arnav
    2022 IEEE 32ND INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2022,