Toward Better Understanding of Engagement in Multiparty Spoken Interaction with Children

Cited: 3
Authors
Al Moubayed, Samer [1 ]
Lehman, Jill Fain [1 ]
Institutions
[1] Disney Res, Pittsburgh, PA 15213 USA
Source
ICMI'15: PROCEEDINGS OF THE 2015 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION | 2015
Keywords
Multi-party interaction; child-computer interaction; group task engagement; dialogue systems; spoken interaction; child-robot interaction;
DOI
10.1145/2818346.2820733
CLC Number
TP301 [Theory, Methods];
Subject Classification Code
081202
Abstract
A system's ability to understand and model a human's engagement during an interactive task is important for both adapting its behavior to the moment and achieving a coherent interaction over time. Standard practice for creating such a capability requires uncovering and modeling the multimodal cues that predict engagement in a given task environment. The first step in this methodology is to have human coders produce "gold standard" judgments of sample behavior. In this paper we report results from applying this first step to the complex and varied behavior of children playing a fast-paced, speech-controlled, side-scrolling game called Mole Madness. We introduce a concrete metric for engagement (willingness to continue the interaction) that leads to better inter-coder judgments for children playing in pairs, explore how coders perceive the relative contribution of audio and visual cues, and describe engagement trends and patterns in our population. We also examine how the measures change when the same children play Mole Madness with a robot instead of a peer. We conclude by discussing the implications of the differences within and across play conditions for the automatic estimation of engagement and the extension of our autonomous robot player into a "buddy" that can individualize interaction for each player and game.
Pages: 211 - 218
Page count: 8