Enabling controllable table-to-text generation via prompting large language models with guided planning

Cited by: 1
Authors
Zhao, Shuo [1 ]
Sun, Xin [1 ]
Affiliation
[1] Beijing Inst Technol, Sch Comp Sci, Beijing 100081, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Large language models; Controllable text generation; Few-shot table-to-text generation;
DOI
10.1016/j.knosys.2024.112571
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recently, Large Language Models (LLMs) have demonstrated unparalleled capabilities in understanding and generation, which holds promising prospects for applying them to table-to-text generation. However, the generation process with LLMs lacks a high degree of controllability, which hinders their use for table-to-text generation. In this paper, we introduce Poised, an effective method that prompts LLMs with guided planning to achieve controllable table-to-text generation. Specifically, we first employ prefix-tuning on BART to derive a plan from the given table. We then combine the plan with guided instructions to create a comprehensive prompt, which is fed into an LLM to generate the description of the table. Experiments across three domains of the few-shot Wiki dataset show that Poised achieves or approaches a plan completion rate of 100%, with an average hallucination frequency of less than 10%. Furthermore, Poised allows for fine-grained control over the generated content by intentionally modifying the prompt, enabling precise control over aspects such as attribute realization order.
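To make the pipeline described in the abstract concrete, the following is a minimal, self-contained sketch of the prompt-assembly step only: linearizing a table, combining it with a content plan, and adding guided instructions. The plan is hard-coded here for illustration; in the paper it would be produced by a prefix-tuned BART planner. All function names, the instruction wording, and the example table are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of guided-planning prompt construction (assumed interface, not the
# authors' code). The plan would normally come from a prefix-tuned BART model.

def linearize_table(table: dict[str, str]) -> str:
    """Flatten an attribute-value table into a single string."""
    return " | ".join(f"{attr}: {value}" for attr, value in table.items())

def build_guided_prompt(table: dict[str, str], plan: list[str]) -> str:
    """Combine the table, a content plan, and guided instructions into one prompt."""
    instructions = (
        "Describe the table in fluent English. "
        "Mention the planned attributes in the given order and do not state "
        "facts that are not supported by the table."
    )
    return (
        f"Table: {linearize_table(table)}\n"
        f"Plan (attribute order): {', '.join(plan)}\n"
        f"Instructions: {instructions}\n"
        "Description:"
    )

if __name__ == "__main__":
    table = {"name": "Ada Lovelace", "occupation": "mathematician", "birth_year": "1815"}
    # Reordering the plan is how fine-grained control over attribute
    # realization order would be exercised, per the abstract.
    plan = ["name", "birth_year", "occupation"]
    prompt = build_guided_prompt(table, plan)
    print(prompt)  # this prompt would then be sent to the LLM
```

Under this sketch, editing the `plan` list (or other parts of the prompt) is the control knob the abstract refers to: the downstream LLM is expected to realize the attributes in the requested order.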
Pages: 9
Related Papers
16 total
  • [1] Classifiers Guided Controllable Text Generation for Discrete Diffusion Language Models
    Jiang, Hang
    Cai, Guoyong
    Li, Sihui
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT III, NLPCC 2024, 2025, 15361 : 132 - 144
  • [2] Distractor Generation for Multiple-Choice Questions with Predictive Prompting and Large Language Models
    Bitew, Semere Kiros
    Deleu, Johannes
    Develder, Chris
    Demeester, Thomas
    MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2023, PT II, 2025, 2134 : 48 - 63
  • [3] A Universal Prompting Strategy for Extracting Process Model Information from Natural Language Text Using Large Language Models
    Neuberger, Julian
    Ackermann, Lars
    van der Aa, Han
    Jablonski, Stefan
    CONCEPTUAL MODELING, ER 2024, 2025, 15238 : 38 - 55
  • [4] Self-Planning Code Generation with Large Language Models
    Jiang, Xue
    Dong, Yihong
    Wang, Lecheng
    Fang, Zheng
    Shang, Qiwei
    Li, Ge
    Jin, Zhi
    Jiao, Wenpin
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (07)
  • [5] Evaluation and Analysis of Large Language Models for Clinical Text Augmentation and Generation
    Latif, Atif
    Kim, Jihie
    IEEE ACCESS, 2024, 12 : 48987 - 48996
  • [6] Steganographic Text Generation Based on Large Language Models in Dialogue Scenarios
    Zeng, Qingwei
    Wang, Kaixi
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT III, NLPCC 2024, 2025, 15361 : 475 - 487
  • [7] Multi-stage guided code generation for Large Language Models
    Han, Yewei
    Lyu, Chen
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 139
  • [8] A Survey of Controllable Text Generation Using Transformer-based Pre-trained Language Models
    Zhang, Hanqing
    Song, Haolin
    Li, Shaoyu
    Zhou, Ming
    Song, Dawei
    ACM COMPUTING SURVEYS, 2024, 56 (03)
  • [9] RELAND: Integrating Large Language Models' Insights into Industrial Recommenders via a Controllable Reasoning Pool
    Tian, Changxin
    Hu, Binbin
    Gan, Chunjing
    Chen, Haoyu
    Zhang, Zhuo
    Yu, Li
    Liu, Ziqi
    Zhang, Zhiqiang
    Zhou, Jun
    Chen, Jiawei
    PROCEEDINGS OF THE EIGHTEENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2024, 2024, : 63 - 73
  • [10] Beyond Text Generation: Assessing Large Language Models' Ability to Reason Logically and Follow Strict Rules
    Han, Zhiyong
    Battaglia, Fortunato
    Mansuria, Kush
    Heyman, Yoav
    Terlecky, Stanley R.
    AI, 2025, 6 (01)