Prompting Is Programming: A Query Language for Large Language Models

Cited by: 25
Authors
Beurer-Kellner, Luca [1 ]
Fischer, Marc [1 ]
Vechev, Martin [1 ]
Affiliations
[1] Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
Source
Proceedings of the ACM on Programming Languages (PACMPL), 2023, Vol. 7, Issue PLDI
Keywords
language model programming; prompt programming;
DOI
10.1145/3591300
Chinese Library Classification (CLC)
TP31 [Computer Software];
Subject Classification Codes
081202; 0835
Abstract
Large language models have demonstrated outstanding performance on a wide range of tasks such as question answering and code generation. At a high level, given an input, a language model can be used to automatically complete the sequence in a statistically likely way. Based on this, users prompt these models with language instructions or examples to implement a variety of downstream tasks. Advanced prompting methods can even involve interaction between the language model, a user, and external tools such as calculators. However, to obtain state-of-the-art performance or adapt language models for specific tasks, complex task- and model-specific programs have to be implemented, which may still require ad-hoc interaction. Motivated by this, we present the novel idea of Language Model Programming (LMP). LMP generalizes language model prompting from pure text prompts to an intuitive combination of text prompting and scripting. Additionally, LMP allows constraints to be specified over the language model output. This enables easy adaptation to many tasks while abstracting language model internals and providing high-level semantics. To enable LMP, we implement LMQL (short for Language Model Query Language), which leverages the constraints and control flow from an LMP prompt to generate an efficient inference procedure that minimizes the number of expensive calls to the underlying language model. We show that LMQL can capture a wide range of state-of-the-art prompting methods in an intuitive way, especially facilitating interactive flows that are challenging to implement with existing high-level APIs. Our evaluation shows that we retain or increase the accuracy on several downstream tasks, while also significantly reducing the required amount of computation or cost in the case of pay-to-use APIs (26-85% cost savings).
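As a rough illustration of the combination of text prompting, scripting, and output constraints described in the abstract, a minimal LMQL-style query might look like the following sketch (the decoder keyword, model identifier, example question, and constraint functions are illustrative assumptions in the spirit of the paper, not text taken from it):

    # decoder clause: request the single most likely completion
    argmax
        # prompt text containing a hole [ANSWER] that the model fills in
        "Q: What is the capital of France?\n"
        "A: [ANSWER]"
    from
        # underlying language model (identifier chosen only for illustration)
        "gpt2"
    where
        # constraints on the generated value, checked during decoding
        len(ANSWER) < 20 and STOPS_AT(ANSWER, "\n")

In such a query, the where clause is evaluated while decoding rather than after the fact, so the runtime only requests tokens that can still satisfy the constraints; this is consistent with the efficient inference procedure and the reduction in expensive model calls that the abstract reports.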
Pages: 24
Related Papers
5 records
  • [1] Discovering the Syntax and Strategies of Natural Language Programming with Generative Language Models
    Jiang, Ellen
    Toh, Edwin
    Molina, Alejandra
    Olson, Kristen
    Kayacik, Claire
    Donsbach, Aaron
    Cai, Carrie J.
    Terry, Michael
    PROCEEDINGS OF THE 2022 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI '22), 2022,
  • [2] Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm
    Reynolds, Laria
    McDonell, Kyle
    EXTENDED ABSTRACTS OF THE 2021 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI '21), 2021,
  • [3] Ironies of Programming Automation: Exploring the Experience of Code Synthesis via Large Language Models
    McCabe, Alan T.
    Björkman, Moa
    Engström, Joel
    Kuang, Peng
    Söderberg, Emma
    Church, Luke
    PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON THE ART, SCIENCE, AND ENGINEERING OF PROGRAMMING, PROGRAMMING COMPANION 2024, 2024, : 12 - 21
  • [4] Building a hospitable and reliable dialogue system for android robots: a scenario-based approach with large language models
    Yamazaki, Takato
    Yoshikawa, Katsumasa
    Kawamoto, Toshiki
    Mizumoto, Tomoya
    Ohagi, Masaya
    Sato, Toshinori
    ADVANCED ROBOTICS, 2023, 37 (21) : 1364 - 1381
  • [5] GenLine and GenForm: Two Tools for Interacting with Generative Language Models in a Code Editor
    Jiang, Ellen
    Toh, Edwin
    Molina, Alejandra
    Donsbach, Aaron
    Cai, Carrie
    Terry, Michael
    ADJUNCT PROCEEDINGS OF THE 34TH ANNUAL ACM SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY, UIST 2021, 2021, : 145 - 147