Adding Directional Context to Gestures Using Doppler Effect

Cited by: 3
Authors
Bannis, Adeola [1 ]
Pan, Shijia [2 ]
Zhang, Pei [2 ]
Affiliations
[1] Carnegie Mellon Univ, Dept Elect & Comp Engn, 5000 Forbes Ave, Pittsburgh, PA 15213 USA
[2] Carnegie Mellon Silicon Valley, Dept Elect & Comp Engn, Moffett Field, CA 94035 USA
Source
PROCEEDINGS OF THE 2014 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING (UBICOMP'14 ADJUNCT) | 2014
Keywords
gestures; ultrasound; sensor fusion;
DOI
10.1145/2638728.2638774
Chinese Library Classification (CLC)
TP3 [Computing Technology; Computer Technology];
Discipline Classification Code
0812;
Abstract
Human beings often give non-verbal instructions through motions of the hand and arm, such as pointing or waving. These motions convey not just actions, but also the direction or target of those actions. In this paper, we integrate direction into gesture definitions by detecting the frequency shifts created by relative motion between a receiver and a transmitter and combining them with inertial motion data captured by a smartphone. With the combined data, we are able to separate similar gestures with 71.7% accuracy in a typical home-use environment.
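
As a rough illustration of the approach described in the abstract (not the authors' implementation), the sketch below estimates the Doppler shift of an ultrasonic pilot tone from microphone frames and concatenates it with simple accelerometer statistics into a fused feature vector. The 18 kHz pilot frequency, 44.1 kHz sample rate, search band, and feature layout are all assumptions made for the example.

# Minimal sketch, assuming an ultrasonic pilot tone and smartphone accelerometer data.
import numpy as np

FS = 44_100        # microphone sample rate in Hz (assumed)
PILOT_HZ = 18_000  # transmitted ultrasonic tone in Hz (assumed)

def doppler_shift_hz(mic_frame: np.ndarray) -> float:
    """Return the offset (Hz) of the strongest spectral peak near the pilot tone."""
    window = np.hanning(len(mic_frame))
    spectrum = np.abs(np.fft.rfft(mic_frame * window))
    freqs = np.fft.rfftfreq(len(mic_frame), d=1.0 / FS)
    # Search a +/-200 Hz band around the pilot; hand-motion shifts stay well inside it.
    band = (freqs > PILOT_HZ - 200) & (freqs < PILOT_HZ + 200)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq - PILOT_HZ

def fused_features(mic_frames: list, accel: np.ndarray) -> np.ndarray:
    """Concatenate a Doppler-shift profile with simple inertial statistics (Nx3 accel)."""
    shifts = np.array([doppler_shift_hz(f) for f in mic_frames])
    accel_stats = np.concatenate([accel.mean(axis=0), accel.std(axis=0)])
    return np.concatenate([shifts, accel_stats])

A classifier such as an SVM or k-NN could then be trained on these fused feature vectors to separate gestures that share similar inertial signatures but differ in direction.
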
Pages: 5-8
Number of pages: 4
Related Papers (4)
[1] Aumi, Md Tanvir Islam; Gupta, Sidhant; Goel, Mayank; Larson, Eric; Patel, Shwetak. DopLink: Using the Doppler Effect for Multi-Device Interaction. UBICOMP'13: PROCEEDINGS OF THE 2013 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING, 2013, pp. 583-586.
[2] Sigalas, Markos; Baltzakis, Haris; Trahanias, Panos. Gesture recognition based on arm tracking for human-robot interaction. IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010), 2010, pp. 5424-5429.
[3] Sun, Z. Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services, 2013, p. 263.
[4] Zigelbaum, J. TEI 2010, 2010, p. 261.