Constraining user response via multimodal dialog interface

Cited by: 3
Authors
Baker K. [1 ]
Mckenzie A. [2 ]
Biermann A. [2 ]
Webelhuth G. [3 ]
Affiliations
[1] Linguistics Department, Ohio State University, 222 Oxley Hall, Columbus, OH 43210-1298
[2] Department of Computer Science, Duke University, Durham, NC 27708-0129
[3] Sem. für Englische Philologie, Georg-August-Univ. Göttingen, 37073 Göttingen
Keywords
Constrain user response; Multimodal dialog interface; Speech recognition;
DOI
10.1023/B:IJST.0000037069.82313.57
Abstract
This paper presents the results of an experiment comparing two different designs of an automated dialog interface. We compare a multimodal design utilizing text displays coordinated with spoken prompts to a voice-only version of the same application. Our results show that the text-coordinated version is more efficient in terms of word recognition and number of out-of-grammar responses, and is equal to the voice-only version in terms of user satisfaction. We argue that this type of multimodal dialog interface effectively constrains user response to allow for better speech recognition without increasing cognitive load or compromising the naturalness of the interaction.
Pages: 251 - 258
Page count: 7
Related Papers
50 records
  • [1] Multimodal user interface research of CAD system
    Pu, JT
    Yue, W
    Dong, SH
    CAD/GRAPHICS '2001: PROCEEDINGS OF THE SEVENTH INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN AND COMPUTER GRAPHICS, VOLS 1 AND 2, 2001, : 405 - 410
  • [2] A Multimodal User Interface for an Assistive Robotic Shopping Cart
    Ryumin, Dmitry
    Kagirov, Ildar
    Axyonov, Alexandr
    Pavlyuk, Nikita
    Saveliev, Anton
    Kipyatkova, Irina
    Zelezny, Milos
    Mporas, Iosif
    Karpov, Alexey
    ELECTRONICS, 2020, 9 (12) : 1 - 25
  • [3] Evaluating user interface of multimodal teaching advisor implemented on a wearable personal computer
    Yanagihara, Y
    Muto, S
    Kakizaki, T
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2001, 31 (04) : 423 - 438
  • [4] Evaluating User Interface of Multimodal Teaching Advisor Implemented on a Wearable Personal Computer
    Yoshimasa Yanagihara
    Sinyo Muto
    Takao Kakizaki
    Journal of Intelligent and Robotic Systems, 2001, 31 : 423 - 438
  • [5] Speaker-independent spontaneous speech dialog system TOSBURG II using multimodal response and speech response cancellation
    Takebayashi, Yoichi
    Kanazawa, Hiroshi
    Nagata, Yoshifumi
    Seto, Shigenobu
    Electronics and Communications in Japan, Part III: Fundamental Electronic Science (English translation of Denshi Tsushin Gakkai Ronbunshi), 1994, 77 (12): 24 - 34
  • [6] A Multimodal User Interface using the Webinos Platform to Connect a Smart Input Device to the Web of Things
    Baccaglini, E.
    Gavelli, M.
    Morello, M.
    Vergori, P.
    PECCS 2015 Proceedings of the 5th International Conference on Pervasive and Embedded Computing and Communication Systems, 2015, : 104 - 108
  • [8] Multimodal Interface for Elderly People
    Awada, Imad Alex
    Mocanu, Irina
    Florea, Adina
    Cramaiuc, Bogdan
    2017 21ST INTERNATIONAL CONFERENCE ON CONTROL SYSTEMS AND COMPUTER SCIENCE (CSCS), 2017, : 536 - 541
  • [9] Multimodal Spoken Dialog System Using State Estimation by Body Motion
    Koseki, Takeru
    Kosaka, Tetsuo
    2017 IEEE 6TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE), 2017,
  • [10] Human-robot interaction via voice-controllable intelligent user interface
    Medicherla, Harsha
    Sekmen, Ali
    ROBOTICA, 2007, 25 (05) : 521 - 527