Ain't Misbehavin' - Using LLMs to Generate Expressive Robot Behavior in Conversations with the Tabletop Robot Haru

Cited by: 2
Authors
Wang, Zining [1 ]
Reisert, Paul [2 ]
Nichols, Eric [3 ]
Gomez, Randy [3 ]
Affiliations
[1] University of British Columbia, Vancouver, BC, Canada
[2] Beyond Reason, Yokohama, Kanagawa, Japan
[3] Honda Research Institute Japan, Wako, Japan
Source
COMPANION OF THE 2024 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2024 COMPANION | 2024
Keywords
Human-Robot Interaction; Social Robotics; Large Language Models; Expressive Behavior Generation; Emotion
DOI
10.1145/3610978.3640562
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Social robots aim to establish long-term bonds with humans through engaging conversation. However, traditional conversational approaches, reliant on scripted interactions, often fall short of maintaining engaging conversations. This paper addresses this limitation by integrating large language models (LLMs) into social robots to achieve more dynamic and expressive conversations. We introduce a fully automated conversation system that leverages LLMs to generate robot responses with expressive behaviors congruent with the robot's personality. We incorporate robot behavior through two modalities: 1) a text-to-speech (TTS) engine capable of various delivery styles, and 2) a library of physical actions for the robot. We develop a custom, state-of-the-art emotion recognition model to dynamically select the robot's tone of voice and use emojis from the LLM output as cues for generating robot actions. A demo of our system is available here. To illuminate design and implementation issues, we conduct a pilot study in which volunteers chat with a social robot using our proposed system; we analyze their feedback and conduct a rigorous error analysis of chat transcripts. Feedback was overwhelmingly positive, with participants commenting on the robot's empathy, helpfulness, naturalness, and entertainment value. Most negative feedback stemmed from automatic speech recognition (ASR) errors, which had limited impact on conversations. However, we observed a small class of errors, such as the LLM repeating itself or hallucinating fictitious information and human responses, that has the potential to derail conversations, raising important issues for LLM applications.
Pages: 1105-1109
Number of pages: 5
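The abstract describes a pipeline in which an LLM-generated reply is turned into expressive robot output: a recognized emotion selects the TTS delivery style, and emojis in the LLM output cue physical actions from the robot's action library. The sketch below illustrates one plausible way such a mapping could be structured; all names, mappings, and functions here (e.g., plan_robot_turn, EMOJI_TO_ACTION) are hypothetical illustrations and not the authors' actual implementation.

```python
# Illustrative sketch (not the authors' code) of the pipeline described in the
# abstract: an LLM reply plus a recognized emotion label are converted into
# speech text, a TTS delivery style, and a list of physical robot actions.
import re
from dataclasses import dataclass

# Hypothetical mapping from emojis in the LLM output to robot action names.
EMOJI_TO_ACTION = {"😀": "happy_bounce", "😢": "sad_droop", "😮": "surprised_lean"}

# Hypothetical mapping from recognized emotion labels to TTS delivery styles.
EMOTION_TO_TTS_STYLE = {"joy": "cheerful", "sadness": "gentle", "neutral": "neutral"}


@dataclass
class RobotTurn:
    speech_text: str     # text sent to the TTS engine
    tts_style: str       # delivery style selected from the emotion label
    actions: list[str]   # physical actions cued by emojis in the LLM output


def plan_robot_turn(llm_reply: str, emotion_label: str) -> RobotTurn:
    """Turn a raw LLM reply and an emotion label into a robot behavior plan."""
    # Collect actions cued by any known emojis, then strip those emojis so the
    # TTS engine only receives plain speech text.
    actions = [action for emoji, action in EMOJI_TO_ACTION.items() if emoji in llm_reply]
    emoji_pattern = "|".join(map(re.escape, EMOJI_TO_ACTION))
    speech_text = re.sub(emoji_pattern, "", llm_reply).strip()
    tts_style = EMOTION_TO_TTS_STYLE.get(emotion_label, "neutral")
    return RobotTurn(speech_text, tts_style, actions)


# Example: a cheerful LLM reply with an emoji cue for a physical action.
print(plan_robot_turn("Great to see you again! 😀", "joy"))
```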