GazeIntent: Adapting dwell-time selection in VR interaction with real-time intent modeling

Cited by: 0
Authors
Narkar A.S. [1 ]
Michalak J.J. [1 ]
Peacock C.E. [2 ]
David-John B. [1 ]
Affiliations
[1] Virginia Tech, Blacksburg, VA 24060
[2] Independent Researcher, Denver, CO 80005
Keywords
Algorithms; Eye movements and cognition; Gaze-controlled and hands-free interfaces; Gaze-input in augmented or mixed reality systems; Machine-learning methods; Novel systems; Predictive models; Task-specific evaluations
DOI
10.1145/3655600
Abstract
The use of ML models to predict a user's cognitive state from behavioral data has been studied for various applications, including predicting the intent to perform selections in VR. We developed a novel technique that uses gaze-based intent models to adapt dwell-time thresholds to aid gaze-only selection. A dataset of users performing selections during arithmetic tasks was used to develop intent prediction models (F1 = 0.94). We developed GazeIntent to adapt selection dwell times based on intent model outputs and conducted an end-user study with returning and new users performing additional tasks with varied selection frequencies. Personalized models for returning users effectively accounted for prior experience and were preferred by 63% of users. Our work provides the field with methods to adapt dwell-based selection to users, account for experience over time, and consider tasks that vary by selection frequency. © 2024 Copyright held by the owner/author(s).
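To make the adaptation mechanism concrete, the following is a minimal Python sketch of dwell-time selection driven by a real-time intent probability. It is an assumed design based only on the abstract: the function names (adapt_dwell_threshold, update_selection), the linear interpolation scheme, and the bounds MIN_DWELL and MAX_DWELL are illustrative assumptions, not the authors' actual GazeIntent implementation.

```python
# Sketch of intent-adaptive dwell-time selection (assumed design, not the
# paper's implementation). Idea: when the intent model outputs a high
# probability that the user wants to select, shorten the dwell threshold;
# when intent is low, lengthen it to reduce accidental selections.

MIN_DWELL = 0.15  # seconds; assumed lower bound when intent is near 1.0
MAX_DWELL = 1.00  # seconds; assumed upper bound when intent is near 0.0

def adapt_dwell_threshold(intent_prob: float) -> float:
    """Linearly interpolate the dwell threshold from the intent model's
    per-frame probability (0.0 = no intent, 1.0 = certain intent)."""
    intent_prob = max(0.0, min(1.0, intent_prob))
    return MAX_DWELL - intent_prob * (MAX_DWELL - MIN_DWELL)

def update_selection(gaze_on_target: bool, intent_prob: float,
                     dwell_accum: float, dt: float) -> tuple[float, bool]:
    """Accumulate dwell time while gaze stays on a target; return the new
    accumulator and whether the adaptive threshold was reached."""
    if not gaze_on_target:
        return 0.0, False  # reset dwell when gaze leaves the target
    dwell_accum += dt
    return dwell_accum, dwell_accum >= adapt_dwell_threshold(intent_prob)
```

Each frame, a VR application would pass in the elapsed frame time, whether gaze currently rests on a selectable target, and the intent model's output probability; high predicted intent shortens the required dwell, while low intent lengthens it, trading selection speed against accidental activations.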
  • [10] David-John B., Peacock C., Zhang T., Scott Murdison T., Benko H., Jonker T.R., Towards Gaze-Based Prediction of the Intent to Interact in Virtual Reality, ACM Symposium on Eye Tracking Research and Applications (Virtual Event, Germany) (ETRA’21 Short Papers), (2021)