Head-Coupled Kinematic Template Matching: A Prediction Model for Ray Pointing in VR

Cited by: 20
Authors
Henrikson, Rorik [1]
Grossman, Tovi [2]
Trowbridge, Sean [3]
Wigdor, Daniel [1,2]
Benko, Hrvoje [3]
Affiliations
[1] Chatham Labs, Toronto, ON, Canada
[2] Univ Toronto, Toronto, ON, Canada
[3] Facebook Reality Labs, Redmond, WA, USA
Source
PROCEEDINGS OF THE 2020 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'20) | 2020
Keywords
Endpoint prediction; target prediction; virtual reality; VR; kinematics; ray pointing; template matching; object selection
DOI
10.1145/3313831.3376489
CLC Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
This paper presents a new technique to predict the ray pointer landing position for selection movements in virtual reality (VR) environments. The technique adapts and extends a prior 2D kinematic template matching method to VR environments where ray pointers are used for selection. It builds on the insight that the kinematics of a controller and Head-Mounted Display (HMD) can be used to predict the ray's final landing position and angle. An initial study provides evidence that the motion of the head is a key input channel for improving prediction models. A second study validates this technique across a continuous range of distances, angles, and target sizes. On average, the technique's predictions were within 7.3 degrees of the true landing position when 50% of the way through the movement and within 3.4 degrees when 90% of the way through. Furthermore, compared to a direct extension of Kinematic Template Matching, which only uses controller movement, this head-coupled approach increases prediction accuracy by a factor of 1.8x when 40% of the way through the movement.
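To make the general idea concrete, the following Python sketch illustrates kinematic template matching in the spirit described in the abstract: previously recorded movements (templates) store controller and HMD angular-speed profiles together with their known landing angles, and an in-progress movement is matched against the corresponding prefix of each template to estimate where the ray will land. This is a minimal illustration under assumed names and a simplified prefix-matching, k-nearest scoring scheme (Template, predict_landing, head_weight, and so on are hypothetical); it is not the authors' implementation.

    # Illustrative sketch only; names and scoring are assumptions, not the paper's method.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Template:
        controller_speed: np.ndarray  # controller angular-speed profile over the movement
        head_speed: np.ndarray        # HMD angular-speed profile over the movement
        landing_angle: np.ndarray     # final ray direction, e.g. (yaw, pitch) in degrees

    def resample(profile, n):
        # Linearly resample a variable-length speed profile to n evenly spaced samples.
        x_old = np.linspace(0.0, 1.0, len(profile))
        x_new = np.linspace(0.0, 1.0, n)
        return np.interp(x_new, x_old, profile)

    def predict_landing(partial_controller, partial_head, completion, templates,
                        head_weight=0.5, k=5):
        # Compare the observed partial controller/head speed profiles against the
        # corresponding prefix of every stored template, then average the landing
        # angles of the k closest templates. `completion` is the assumed fraction
        # of the movement elapsed (the abstract reports predictions at 40%, 50%,
        # and 90% of the way through the movement).
        n = len(partial_controller)
        scores = []
        for t in templates:
            m = max(2, int(round(completion * len(t.controller_speed))))
            tc = resample(t.controller_speed[:m], n)
            th = resample(t.head_speed[:m], n)
            # Head-coupled distance: controller channel plus a weighted head channel.
            d = (np.linalg.norm(np.asarray(partial_controller) - tc)
                 + head_weight * np.linalg.norm(np.asarray(partial_head) - th))
            scores.append(d)
        best = np.argsort(scores)[:k]
        return np.mean([templates[i].landing_angle for i in best], axis=0)

A deployed predictor would not know the completion fraction in advance and would need to match the partial profile against templates at multiple candidate completion points, or use another alignment strategy; the sketch omits that for brevity.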
Pages: 14