Exploring user-defined gestures for lingual and palatal interaction

Cited by: 3
Authors
Villarreal-Narvaez, Santiago [1 ,2 ]
Perez-Medina, Jorge Luis [2 ]
Vanderdonckt, Jean [2 ]
Affiliations
[1] Univ Namur, Namur Digital Inst, 61 Rue Bruxelles, B-5000 Namur, Belgium
[2] Catholic Univ Louvain, Louvain Res Management, 1 Pl Doyens, B-1348 Louvain la Neuve, Belgium
Keywords
Gesture interaction; Tongue interaction; Internet of Things; Gesture elicitation study; Tongue movements
DOI
10.1007/s12193-023-00408-7
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Individuals with motor disabilities can benefit from an alternative means of interacting with the world: their tongue. The tongue can move precisely within the mouth, allowing a user to designate targets on the palate. This form of interaction, known as lingual interaction, lets users trigger basic functions by using the tongue to indicate positions. The purpose of this work is to identify the lingual and palatal gestures proposed by end-users. To achieve this goal, we first reviewed the relevant literature, covering clinical studies of the motor capacity of the tongue, devices that detect tongue movement, and existing lingual interfaces (e.g., for driving a wheelchair). We then conducted a gesture elicitation study (GES) with 24 participants (N = 24), who proposed lingual and palatal gestures for 19 Internet of Things (IoT) referents, yielding a corpus of 456 gestures. These gestures were clustered into similarity classes (80 unique gestures) and analyzed by dimension, nature, complexity, thinking time, and goodness-of-fit. Using the agreement rate (AR) methodology, we present a set of 16 gestures for a lingual and palatal interface, which serves as a basis for later comparison with gestures suggested by people with motor disabilities.
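
The agreement-rate analysis mentioned in the abstract presumably follows the AR(r) formula of Vatavu and Wobbrock (2015), AR(r) = (|P| / (|P| - 1)) * sum((|P_i| / |P|)^2) - 1 / (|P| - 1), where P is the set of proposals elicited for one referent and the P_i are its similarity classes. The sketch below is a minimal illustration of that formula only; the function name and the gesture labels are hypothetical and not taken from the paper.

```python
from collections import Counter

def agreement_rate(proposals):
    """AR(r) for one referent (Vatavu & Wobbrock, 2015).

    proposals: one gesture label per participant; identical labels
    mean the gestures were grouped into the same similarity class.
    """
    n = len(proposals)
    if n < 2:
        return 0.0
    counts = Counter(proposals)
    # Sum of squared proportions of each similarity class.
    s = sum((c / n) ** 2 for c in counts.values())
    return (n / (n - 1)) * s - 1.0 / (n - 1)

# Hypothetical referent with 24 participants split into three classes.
labels = ["tap-center"] * 15 + ["swipe-left"] * 6 + ["hold"] * 3
print(round(agreement_rate(labels), 3))  # 0.446
```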
Pages: 167-185
Page count: 19