Enabling controllable table-to-text generation via prompting large language models with guided planning

Cited by: 1
Authors
Zhao, Shuo [1 ]
Sun, Xin [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci, Beijing 100081, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Large language models; Controllable text generation; Few-shot table-to-text generation;
DOI
10.1016/j.knosys.2024.112571
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, Large Language Models (LLMs) have demonstrated unparalleled capabilities in understanding and generation, which holds promise for applying them to table-to-text generation. However, the generation process of LLMs lacks a high degree of controllability, which hinders their use for table-to-text generation. In this paper, we introduce Poised, an effective method that prompts LLMs with guided planning to achieve controllable table-to-text generation. Specifically, we first employ prefix-tuning on BART to derive a plan from the given table. We then combine the plan with guided instructions to create a comprehensive prompt, which is fed to LLMs to generate the description of the table. Experiments across three domains of the few-shot Wiki dataset show that Poised achieves or approaches a plan completion rate of 100%, with an average hallucination frequency of less than 10%. Furthermore, Poised allows fine-grained control over the generated content by deliberately modifying the prompt, enabling precise control over aspects such as attribute realization order.
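As a concrete illustration of the pipeline described in the abstract, the following is a minimal sketch (not the authors' released code) of how a predicted plan could be combined with guided instructions into a single prompt for an LLM. The table contents, attribute names, and the function build_poised_style_prompt are hypothetical; in Poised the plan would be produced by a prefix-tuned BART planner rather than written by hand.

```python
# Minimal sketch of plan-guided prompt construction in the spirit of Poised.
# The plan is hard-coded here for illustration; in the paper it is predicted
# from the table by a prefix-tuned BART model.

def build_poised_style_prompt(table, plan):
    """Combine a linearized table, a plan, and guided instructions into one prompt.

    table: list of (attribute, value) pairs from the source table.
    plan:  ordered list of attributes the description should realize, in order.
    """
    table_str = " | ".join(f"{attr}: {val}" for attr, val in table)
    plan_str = " -> ".join(plan)
    # Guided instructions: ask the LLM to follow the plan exactly and to use
    # only facts present in the table (to curb hallucination).
    instructions = (
        "Write a one-paragraph description of the entity in the table. "
        f"Mention the attributes in exactly this order: {plan_str}. "
        "Do not state any fact that is not supported by the table."
    )
    return f"Table: {table_str}\n{instructions}"


if __name__ == "__main__":
    table = [
        ("name", "Ada Lovelace"),
        ("birth_date", "10 December 1815"),
        ("occupation", "mathematician"),
        ("known_for", "work on the Analytical Engine"),
    ]
    # Reordering the plan changes the attribute realization order in the output,
    # which is the kind of fine-grained control the abstract describes.
    plan = ["name", "occupation", "known_for", "birth_date"]
    prompt = build_poised_style_prompt(table, plan)
    print(prompt)  # This prompt would then be sent to an LLM of choice.
```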
Pages: 9
Related Papers
50 records in total
  • [21] A Universal Prompting Strategy for Extracting Process Model Information from Natural Language Text Using Large Language Models
    Neuberger, Julian
    Ackermann, Lars
    van der Aa, Han
    Jablonski, Stefan
    CONCEPTUAL MODELING, ER 2024, 2025, 15238 : 38 - 55
  • [22] LayoutGPT: Compositional Visual Planning and Generation with Large Language Models
    Feng, Weixi
    Zhu, Wanrong
    Fu, Tsu-jui
    Jampani, Varun
    Akula, Arjun
    He, Xuehai
    Basu, Sugato
    Wang, Xin Eric
    Wang, William Yang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [23] Self-Planning Code Generation with Large Language Models
    Jiang, Xue
    Dong, Yihong
    Wang, Lecheng
    Fang, Zheng
    Shang, Qiwei
    Li, Ge
    Jin, Zhi
    Jiao, Wenpin
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (07)
  • [24] Evaluation and Analysis of Large Language Models for Clinical Text Augmentation and Generation
    Latif, Atif
    Kim, Jihie
    IEEE ACCESS, 2024, 12 : 48987 - 48996
  • [25] Steganographic Text Generation Based on Large Language Models in Dialogue Scenarios
    Zeng, Qingwei
    Wang, Kaixi
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT III, NLPCC 2024, 2025, 15361 : 475 - 487
  • [26] Multi-stage guided code generation for Large Language Models
    Han, Yewei
    Lyu, Chen
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 139
  • [27] Mix and Match: Learning-free Controllable Text Generation using Energy Language Models
    Mireshghallah, Fatemehsadat
    Goyal, Kartik
    Berg-Kirkpatrick, Taylor
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 401 - 415
  • [28] Exploring Automated Assertion Generation via Large Language Models
    Zhang, Quanjun
    Sun, Weifeng
    Fang, Chunrong
    Yu, Bowen
    Li, Hongyan
    Yan, Meng
    Zhou, Jianyi
    Chen, Zhenyu
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2025, 34 (03)
  • [29] Prompting and Evaluating Large Language Models for Proactive Dialogues: Clarification, Target-guided, and Non-collaboration
    Deng, Yang
    Liao, Lizi
    Chen, Liang
    Wang, Hongru
    Lei, Wenqiang
    Chua, Tat-Seng
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 10602 - 10621
  • [30] Synthetic Data Generation with Large Language Models for Text Classification: Potential and Limitations
    Li, Zhuoyan
    Zhu, Hangxiao
    Lu, Zhuoran
    Yin, Ming
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 10443 - 10461