Understanding the Design Space of Mouth Microgestures

Cited by: 12
Authors
Chen, Victor [1 ]
Xu, Xuhai [2 ]
Li, Richard [3 ]
Shi, Yuanchun [4 ]
Patel, Shwetak [3 ]
Wang, Yuntao [4 ]
Affiliations
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
[2] Univ Washington, Informat Sch, Seattle, WA 98195 USA
[3] Univ Washington, Paul G Allen Sch Comp Sci & Engn, Seattle, WA 98195 USA
[4] Tsinghua Univ, Dept Comp Sci & Technol, Key Lab Pervas Comp, Minist Educ, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 2021 ACM DESIGNING INTERACTIVE SYSTEMS CONFERENCE (DIS 2021) | 2021
Funding
National Key R&D Program of China;
Keywords
Mouth microgesture; interaction techniques; user-designed gestures; design space; EXPRESSION;
DOI
10.1145/3461778.3462004
CLC Classification Number
TP3 [Computing technology, computer technology];
Discipline Code
0812 ;
Abstract
As wearable devices move toward the face (e.g., smart earbuds and glasses), there is an increasing need to facilitate intuitive interactions with these devices. Current sensing techniques can already detect many mouth-based gestures; however, users' preferences for these gestures are not fully understood. In this paper, we investigate the design space and usability of mouth-based microgestures. We first conducted brainstorming sessions (N=16) and compiled an extensive set of 86 user-defined gestures. Then, with an online survey (N=50), we assessed the physical and mental demand of our gesture set and identified a subset of 14 gestures that can be performed easily and naturally. Finally, we conducted a remote Wizard-of-Oz usability study (N=11) in which gestures were mapped to common daily smartphone operations in sitting and walking contexts. From these studies, we develop a taxonomy for mouth gestures, finalize a practical gesture set for common applications, and provide design guidelines for future mouth-based gesture interactions.
Pages: 1068-1081
Page count: 14