Buccal: Low-Cost Cheek Sensing for Inferring Continuous Jaw Motion in Mobile Virtual Reality

Cited by: 9
Authors
Li, Richard [1]
Reyes, Gabriel [1]
Affiliations
[1] Georgia Inst Technol, Sch Interact Comp, Atlanta, GA 30332 USA
Source
ISWC'18: PROCEEDINGS OF THE 2018 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS | 2018
Keywords
Mobile; virtual reality; VR; jaw motion; proximity; sensing; machine learning;
DOI
10.1145/3267242.3267265
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Teleconferencing is touted to be one of the main and most powerful uses of virtual reality (VR). While subtle facial movements play a large role in human-to-human interactions, current work in the VR space has focused on identifying discrete emotions and expressions through coarse facial cues and gestures. By tracking and representing the fluid movements of facial elements as continuous range values, users are able to more fully express themselves. In this work, we present Buccal, a simple yet effective approach to inferring continuous lip and jaw motions by measuring deformations of the cheeks and temples with only 5 infrared proximity sensors embedded in a mobile VR headset. The signals from these sensors are mapped to facial movements through a regression model trained with ground truth labels recorded from a webcam. For a streamlined user experience, we train a user independent model that requires no setup process. Finally, we demonstrate the use of our technique to manipulate the lips and jaw of a 3D face model in real-time.
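The abstract describes a pipeline of five infrared proximity sensors whose signals are mapped to continuous lip and jaw motion by a regression model trained against webcam-derived ground truth. A minimal sketch of that idea is given below, assuming scikit-learn ridge regression, synthetic stand-in data, and hypothetical parameter names (jaw openness, lip spread); the paper does not specify its regression method, so this is an illustration rather than the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): map 5 IR proximity readings to
# continuous jaw/lip parameters with a regression model, as the abstract describes.
# The ridge regressor, sensor layout, and target names are assumptions; real
# ground-truth labels would come from webcam-tracked facial landmarks.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in training data: N frames of 5 proximity values (cheeks/temples) and
# 2 continuous targets (jaw openness, lip spread), both normalized to [0, 1].
N = 2000
X_train = rng.uniform(0.0, 1.0, size=(N, 5))            # proximity sensor readings
true_w = rng.normal(size=(5, 2))
y_train = np.clip(X_train @ true_w * 0.3 + 0.5, 0, 1)   # synthetic "webcam" labels

# User-independent model: fit once on pooled data, no per-user calibration step.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X_train, y_train)

def infer_face_params(proximity_frame):
    """Return (jaw_openness, lip_spread) in [0, 1] for one 5-sensor frame."""
    pred = model.predict(np.asarray(proximity_frame, dtype=float).reshape(1, -1))[0]
    return np.clip(pred, 0.0, 1.0)

# Example: one incoming sensor frame -> continuous parameters that could drive
# the lips and jaw of a 3D face model in real time.
print(infer_face_params([0.42, 0.51, 0.37, 0.60, 0.48]))
```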
Pages: 180-183
Number of pages: 4