Sasha: Creative Goal-Oriented Reasoning in Smart Homes with Large Language Models

Cited by: 7
Authors
King, Evan [1 ]
Yu, Haoxiang [1 ]
Lee, Sangsu [1 ]
Julien, Christine [1 ]
Affiliations
[1] Univ Texas Austin, Austin, TX 78712 USA
Source
PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT | 2024, Vol. 8, No. 1
Funding
U.S. National Science Foundation;
Keywords
smart environments; pervasive computing; ambient intelligence; large language models; USERS;
DOI
10.1145/3643505
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline code
0812 ;
Abstract
Smart home assistants function best when user commands are direct and well-specified (e.g., "turn on the kitchen light"), or when a hard-coded routine specifies the response. In more natural communication, however, human speech is unconstrained, often describing goals (e.g., "make it cozy in here" or "help me save energy") rather than indicating specific target devices and actions to take on those devices. Current systems fail to understand these under-specified commands since they cannot reason about devices and settings as they relate to human situations. We introduce large language models (LLMs) to this problem space, exploring their use for controlling devices and creating automation routines in response to under-specified user commands in smart homes. We empirically study the baseline quality and failure modes of LLM-created action plans with a survey of age-diverse users. We find that LLMs can reason creatively to achieve challenging goals, but they experience patterns of failure that diminish their usefulness. We address these gaps with Sasha, a smarter smart home assistant. Sasha responds to loosely-constrained commands like "make it cozy" or "help me sleep better" by executing plans to achieve user goals, e.g., setting a mood with available devices, or devising automation routines. We implement and evaluate Sasha in a hands-on user study, showing the capabilities and limitations of LLM-driven smart homes when faced with unconstrained user-generated scenarios.
Pages: 38