TongueBoard: An Oral Interface for Subtle Input

Cited by: 52
Authors
Li, Richard [1 ]
Wu, Jason [1 ]
Starner, Thad [1 ]
Affiliation
[1] Google Res & Machine Intelligence, Mountain View, CA 94043 USA
Source
PROCEEDINGS OF THE 10TH AUGMENTED HUMAN INTERNATIONAL CONFERENCE 2019 (AH2019) | 2019
Keywords
wearable devices; input interaction; oral sensing; subtle gestures; silent speech interface
DOI
10.1145/3311823.3311831
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
We present TongueBoard, a retainer form-factor device for recognizing non-vocalized speech. TongueBoard enables absolute position tracking of the tongue by placing capacitive touch sensors on the roof of the mouth. We collect a dataset of 21 common words from four user study participants (two native speakers of American English and two non-native speakers with severe hearing loss). We train a classifier that is able to recognize the words with 91.01% accuracy for the native speakers and 77.76% accuracy for the non-native speakers in a user-dependent, offline setting. The native English speakers then participate in a user study involving operating a calculator application with 15 non-vocalized words and two tongue gestures at a desktop and with a mobile phone while walking. TongueBoard consistently maintains an information transfer rate of 3.78 bits per decision (number of choices = 17, accuracy = 97.1%) and 2.18 bits per second across stationary and mobile contexts, which is comparable to our control conditions of mouse (desktop) and touchpad (mobile) input.
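The abstract's figure of 3.78 bits per decision at 17 choices and 97.1% accuracy is consistent with the Wolpaw information transfer rate formula commonly used for selection interfaces; the abstract does not name the formula, so treating it as Wolpaw's is an assumption. A minimal sketch of the calculation:

```python
import math

def wolpaw_itr(n_choices: int, accuracy: float) -> float:
    """Wolpaw ITR in bits per decision for an n-way selection
    task with the given per-decision accuracy (0 < accuracy < 1)."""
    n, p = n_choices, accuracy
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

bits = wolpaw_itr(17, 0.971)
print(f"{bits:.2f} bits per decision")  # → 3.78 bits per decision
```

Multiplying bits per decision by the observed decision rate (decisions per second) would then yield the reported 2.18 bits per second.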
Pages: 9