Eliciting Multimodal Gesture plus Speech Interactions in a Multi-Object Augmented Reality Environment

Cited by: 4
Authors
Zhou, Xiaoyan [1 ]
Williams, Adam S. [1 ]
Ortega, Francisco R. [1 ]
Affiliations
[1] Colorado State Univ, Ft Collins, CO USA
Source
28TH ACM SYMPOSIUM ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY, VRST 2022 | 2022
Funding
U.S. National Science Foundation
Keywords
elicitation; multimodal interaction; augmented reality; gesture and speech interaction; multi-object AR environment;
DOI
10.1145/3562939.3565637
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
As augmented reality (AR) technology and hardware become more mature and affordable, researchers have been exploring more intuitive and discoverable interaction techniques for immersive environments. This paper investigates multimodal interaction for 3D object manipulation in a multi-object AR environment. To identify user-defined gestures, we conducted an elicitation study with 24 participants and 22 referents using an augmented reality headset. The study yielded 528 proposals and, after binning and ranking all gesture proposals, produced a winning gesture set of 25 gestures. We found that for the same task, the same gesture was preferred for both one- and two-object manipulation, although both hands were used in the two-object scenario. We present the gesture and speech results, along with the differences from similar studies conducted in single-object AR environments. The study also explored the association between speech expressions and gesture strokes during object manipulation, which could improve recognizer efficiency in augmented reality headsets.
Pages: 10