Enabling controllable table-to-text generation via prompting large language models with guided planning

Cited: 1
Authors
Zhao, Shuo [1 ]
Sun, Xin [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci, Beijing 100081, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Large language models; Controllable text generation; Few-shot table-to-text generation;
DOI
10.1016/j.knosys.2024.112571
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recently, Large Language Models (LLMs) have demonstrated unparalleled capabilities in understanding and generation, holding promising prospects for applying LLMs to table-to-text generation. However, the generation process with LLMs lacks a high degree of controllability, which hinders the utilization of LLMs for table-to-text generation. In this paper, we introduce Poised, an effective method that prompts LLMs with guided planning to achieve controllable table-to-text generation. Specifically, we first employ prefix-tuning on BART to derive a plan from the given table. Then, we combine the plan with guided instructions to create a comprehensive prompt, which is later input into LLMs to generate the description of the table. Experiments across three domains of the few-shot Wiki dataset show that Poised achieves or approaches a plan completion rate of 100%, with an average hallucination frequency of less than 10%. Furthermore, Poised allows for fine-grained control over the generated content by intentionally modifying the prompt, enabling precise control over aspects such as attribute realization order.
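The abstract describes a two-stage pipeline: derive a plan from the table, then fold that plan and guided instructions into a prompt for an LLM. The Python sketch below illustrates this idea under stated assumptions only; derive_plan, build_prompt, and call_llm are hypothetical names, the attribute-order plan is a stand-in for the prefix-tuned BART planner, and the instruction wording is illustrative, not the authors' implementation.

# Minimal sketch of plan-guided prompting for table-to-text generation.
# Assumptions (not from the paper): the plan is an ordered list of table
# attributes, and call_llm is a placeholder for any LLM completion API.

def derive_plan(table: dict) -> list:
    # Placeholder for the prefix-tuned BART planner described in the abstract:
    # here the table's own attribute order serves as the plan.
    return list(table.keys())

def build_prompt(table: dict, plan: list) -> str:
    # Linearize the table and combine it with the plan and guided instructions.
    table_str = "; ".join(f"{k}: {v}" for k, v in table.items())
    plan_str = " -> ".join(plan)
    instructions = (
        "Write a faithful one-paragraph description of the table. "
        "Mention the attributes exactly in the order given by the plan, "
        "and do not add facts that are not in the table."
    )
    return (
        f"Table: {table_str}\n"
        f"Plan: {plan_str}\n"
        f"Instructions: {instructions}\n"
        "Description:"
    )

def call_llm(prompt: str) -> str:
    # Placeholder: substitute a real LLM completion API here.
    return "<generated description>"

if __name__ == "__main__":
    table = {"name": "Ada Lovelace", "birth_date": "1815", "occupation": "mathematician"}
    plan = derive_plan(table)
    # Reordering or editing `plan` before building the prompt is how
    # fine-grained control over attribute realization order would be exercised.
    print(call_llm(build_prompt(table, plan)))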
Pages: 9
Related Papers
50 records in total
  • [31] Chain-of-Thought Improves Text Generation with Citations in Large Language Models
    Ji, Bin
    Liu, Huijun
    Du, Mingzhe
    Ng, See-Kiong
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 16, 2024, : 18345 - 18353
  • [32] Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints
    Lu, Albert
    Zhang, Hongxin
    Zhang, Yanzhe
    Wang, Xuezhi
    Yang, Diyi
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 1982 - 2008
  • [33] A Survey of Controllable Text Generation Using Transformer-based Pre-trained Language Models
    Zhang, Hanqing
    Song, Haolin
    Li, Shaoyu
    Zhou, Ming
    Song, Dawei
    ACM COMPUTING SURVEYS, 2024, 56 (03)
  • [34] Prompting Large Language Models with Chain-of-Thought for Few-Shot Knowledge Base Question Generation
    Liang, Yuanyuan
    Wang, Jianing
    Zhu, Hanlun
    Wang, Lei
    Qian, Weining
    Lan, Yunshi
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 4329 - 4343
  • [35] Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation
    Xue, Jinlong
    Deng, Yayue
    Gao, Yingming
    Li, Ya
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 4700 - 4712
  • [36] A Comparative Analysis of Conversational Large Language Models in Knowledge-Based Text Generation
    Schneider, Phillip
    Klettner, Manuel
    Simperl, Elena
    Matthes, Florian
    PROCEEDINGS OF THE 18TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 2: SHORT PAPERS, 2024, : 358 - 367
  • [37] Chinese Text Open Domain Tag Generation Method via Large Language Model
    He, Chunhui
    Ge, Bin
    Zhang, Chong
    2024 10TH INTERNATIONAL CONFERENCE ON BIG DATA AND INFORMATION ANALYTICS, BIGDIA 2024, 2024, : 183 - 188
  • [38] RELAND: Integrating Large Language Models' Insights into Industrial Recommenders via a Controllable Reasoning Pool
    Tian, Changxin
    Hu, Binbin
    Gan, Chunjing
    Chen, Haoyu
    Zhang, Zhuo
    Yu, Li
    Liu, Ziqi
    Zhang, Zhiqiang
    Zhou, Jun
    Chen, Jiawei
    PROCEEDINGS OF THE EIGHTEENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2024, 2024, : 63 - 73
  • [39] Character As Pixels: A Controllable Prompt Adversarial Attacking Framework for Black-Box Text Guided Image Generation Models
    Kou, Ziyi
    Pei, Shichao
    Tian, Yijun
    Zhang, Xiangliang
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 983 - 990
  • [40] PROGPROMPT: program generation for situated robot task planning using large language models
    Singh, Ishika
    Blukis, Valts
    Mousavian, Arsalan
    Goyal, Ankit
    Xu, Danfei
    Tremblay, Jonathan
    Fox, Dieter
    Thomason, Jesse
    Garg, Animesh
    AUTONOMOUS ROBOTS, 2023, 47 (08) : 999 - 1012