Cook2LTL: Translating Cooking Recipes to LTL Formulae using Large Language Models

Citations: 0
Authors
Mavrogiannis, Angelos [1 ]
Mavrogiannis, Christoforos [2 ]
Aloimonos, Yiannis [1 ]
Affiliations
[1] Univ Maryland, Dept Comp Sci, 8125 Paint Branch Dr, College Pk, MD 20742 USA
[2] Univ Michigan, Dept Robot, Ann Arbor, MI 48105 USA
Keywords
DOI
10.1109/ICRA57147.2024.10611086
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Cooking recipes are challenging to translate to robot plans as they feature rich linguistic complexity, temporally-extended interconnected tasks, and an almost infinite space of possible actions. Our key insight is that combining a source of cooking domain knowledge with a formalism that captures the temporal richness of cooking recipes could enable the extraction of unambiguous, robot-executable plans. In this work, we use Linear Temporal Logic (LTL) as a formal language expressive enough to model the temporal nature of cooking recipes. Leveraging a pretrained Large Language Model (LLM), we present Cook2LTL, a system that translates instruction steps from an arbitrary cooking recipe found on the internet to a set of LTL formulae, grounding high-level cooking actions to a set of primitive actions that are executable by a manipulator in a kitchen environment. Cook2LTL makes use of a caching scheme that dynamically builds a queryable action library at runtime. We instantiate Cook2LTL in a realistic simulation environment (AI2-THOR), and evaluate its performance across a series of cooking recipes. We demonstrate that our system significantly decreases LLM API calls (-51%), latency (-59%), and cost (-42%) compared to a baseline that queries the LLM for every newly encountered action at runtime.
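The abstract's reported savings in LLM API calls, latency, and cost come from the caching scheme that builds a queryable action library at runtime: a high-level cooking action is decomposed into primitives by the LLM only the first time it is encountered, then served from the library thereafter. A minimal sketch of that idea follows; the class, the `fake_llm` stand-in, and the primitive names are hypothetical illustrations, not the paper's actual implementation.

```python
# Sketch of a runtime action library with caching: each high-level
# action is decomposed via an (expensive) LLM call at most once.
class ActionLibrary:
    def __init__(self, llm_decompose):
        self._llm_decompose = llm_decompose  # callable: action name -> primitive steps
        self._cache = {}                     # action name -> cached decomposition
        self.llm_calls = 0                   # counter to show the savings

    def ground(self, action):
        # Serve from the library if seen before; otherwise query the LLM once.
        if action not in self._cache:
            self.llm_calls += 1
            self._cache[action] = self._llm_decompose(action)
        return self._cache[action]


# Stand-in for an LLM decomposition into (hypothetical) primitives.
def fake_llm(action):
    return [f"pick({action})", f"do({action})", f"place({action})"]


lib = ActionLibrary(fake_llm)
steps = [lib.ground(a) for a in ["chop", "stir", "chop", "stir", "chop"]]
# Five grounded actions, but only two LLM queries ("chop" and "stir").
```

Repeated actions across recipe steps hit the cache instead of the LLM, which is the mechanism behind the reported reductions relative to a query-every-action baseline.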
Pages: 17679-17686
Page count: 8