Understanding the Design Space of Mouth Microgestures

Cited by: 12
Authors
Chen, Victor [1]
Xu, Xuhai [2]
Li, Richard [3]
Shi, Yuanchun [4]
Patel, Shwetak [3]
Wang, Yuntao [4]
Affiliations
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
[2] Univ Washington, Informat Sch, Seattle, WA 98195 USA
[3] Univ Washington, Paul G Allen Sch Comp Sci & Engn, Seattle, WA 98195 USA
[4] Tsinghua Univ, Dept Comp Sci & Technol, Key Lab Pervas Comp, Minist Educ, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 2021 ACM DESIGNING INTERACTIVE SYSTEMS CONFERENCE (DIS 2021) | 2021
Funding
National Key R&D Program of China
Keywords
Mouth microgesture; interaction techniques; user-designed gestures; design space; EXPRESSION
DOI
10.1145/3461778.3462004
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
As wearable devices move toward the face (e.g., smart earbuds and glasses), there is an increasing need for intuitive ways to interact with them. Current sensing techniques can already detect many mouth-based gestures; however, users' preferences for these gestures are not fully understood. In this paper, we investigate the design space and usability of mouth-based microgestures. We first conducted brainstorming sessions (N=16) and compiled an extensive set of 86 user-defined gestures. Then, with an online survey (N=50), we assessed the physical and mental demand of our gesture set and identified a subset of 14 gestures that can be performed easily and naturally. Finally, we conducted a remote Wizard-of-Oz usability study (N=11) that mapped gestures to common daily smartphone operations in both sitting and walking contexts. From these studies, we develop a taxonomy for mouth gestures, finalize a practical gesture set for common applications, and provide design guidelines for future mouth-based gesture interactions.
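To make the survey-based selection step concrete, below is a minimal, hypothetical Python sketch of how per-gesture demand ratings might be aggregated to choose an easy-to-perform subset. The gesture names, the 1-7 rating scale, and the "lowest combined demand" selection rule are illustrative assumptions, not the authors' actual procedure.

```python
# Hypothetical sketch of demand-based gesture selection, loosely in the
# spirit of the paper's online survey (N=50) that narrowed 86 gestures
# to 14. All data and the selection rule below are illustrative only.
from statistics import mean

# ratings[gesture] -> list of (physical_demand, mental_demand) tuples,
# one per survey respondent, each assumed to be on a 1-7 Likert scale.
ratings = {
    "tongue-swipe-left": [(2, 1), (3, 2), (2, 2)],
    "cheek-puff":        [(1, 1), (2, 1), (1, 2)],
    "teeth-click":       [(5, 4), (4, 5), (6, 4)],
}

def demand_score(responses):
    """Average combined physical + mental demand across respondents."""
    return mean(p + m for p, m in responses)

# Rank gestures from least to most demanding and keep the easiest k.
k = 2  # the paper keeps 14 of 86; toy value for the toy data above
subset = sorted(ratings, key=lambda g: demand_score(ratings[g]))[:k]
print(subset)  # -> ['cheek-puff', 'tongue-swipe-left']
```

In practice one would also weigh naturalness and social acceptability alongside raw demand, as the paper's subsequent Wizard-of-Oz study does; this sketch only covers the demand-ranking step.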
Pages: 1068-1081
Page count: 14