Research and application of gaze direction algorithm based on head movement and eye movement data fusion

Cited: 0
Authors
Zhou, Jiarui [1 ]
Affiliations
[1] Heilongjiang Vocat Coll, Harbin 150000, Heilongjiang, Peoples R China
Source
PROCEEDINGS OF INTERNATIONAL CONFERENCE ON ALGORITHMS, SOFTWARE ENGINEERING, AND NETWORK SECURITY, ASENS 2024 | 2024
Keywords
Eye movements characteristics; Multiple eyes and multiple light sources; Head posture; Neural network; Data fusion;
DOI
10.1145/3677182.3677195
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Gaze can signify attentiveness. Estimating the line of sight is crucial in numerous fields, including computer vision, human-computer interaction, and psychology, yet it remains difficult because of head-pose changes and the demand for higher precision. This article reviews domestic and international research, develops a three-camera, eight-light-source system, and fuses head- and eye-movement data with a deep convolutional neural network. The results are as follows. The line-of-sight features analyzed are the pupil center, the eye image, and the Purkinje-spot center. Human eyes are located via face detection: the non-wearable three-camera, eight-light-source system, together with Haar features, the AdaBoost algorithm, and the ASM algorithm, is used to track the face, extract the eye image, and locate the pupil center and the Purkinje-spot center. ASM tracks facial feature points in the wide-field image and crops the human-eye region. To improve line-of-sight estimation, a convolutional neural network classifies face orientation relative to the screen; across three random screen orientations, the classification accuracy on facial images reaches 99%. For a front-facing eye, the pupil position is coarsely located by a dual Haar-like feature extractor, and the pupil center is then refined with a morphological pixel model and ellipse fitting. After thresholding the image, the Purkinje reflection is located by a Canny edge-detection operator and a search algorithm. Finally, the centroid method yields the center coordinates of the Purkinje spot, from which the line of sight is computed.
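The centroid step described in the abstract can be sketched as follows. This is an illustrative Python/NumPy sketch, not the authors' code; the threshold value and the synthetic glint image are assumptions made for the example:

```python
import numpy as np

def purkinje_center(gray, thresh=230):
    """Centroid method: threshold the grayscale eye image, then average
    the coordinates of the bright (glint) pixels to get the spot center."""
    ys, xs = np.nonzero(gray >= thresh)
    if len(xs) == 0:
        return None          # no glint found above threshold
    return float(xs.mean()), float(ys.mean())

# synthetic eye image with a bright 3x3 glint centered at (x=12, y=7)
img = np.zeros((20, 30), dtype=np.uint8)
img[6:9, 11:14] = 255
print(purkinje_center(img))  # -> (12.0, 7.0)
```

In a full pipeline this step would run only inside the eye region cropped by ASM, after Canny edge detection has confirmed the glint contour.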
The head-posture features extracted in this article comprise, in spatial coordinates, the roll, pitch, and yaw angles together with the three-axis position and velocity. Head pose is evaluated with both inertial sensors and an image-based method. The rotation quaternion is integrated using acceleration interpolation, Picard approximation, and the fourth-order Runge-Kutta method; for tracking, the integrated quaternion is converted to rotation angles. The image-based method maps three-dimensional features into two-dimensional space via a face-model transformation fitted to the facial feature points, then adjusts the model's depth parameters to account for the head-posture angle. Combining the two methods improves the quantification of head motion.
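The quaternion-integration step can be sketched as below. This is a minimal illustration of fourth-order Runge-Kutta integration of the quaternion kinematics, not the authors' implementation; the constant angular velocity, step size, and yaw-extraction formula are assumptions for the example:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qdot(q, omega):
    """Quaternion kinematics: dq/dt = 0.5 * q * (0, omega)."""
    return 0.5 * quat_mul(q, np.array([0.0, *omega]))

def rk4_step(q, omega, dt):
    """One fourth-order Runge-Kutta step, assuming omega constant over dt."""
    k1 = qdot(q, omega)
    k2 = qdot(q + 0.5 * dt * k1, omega)
    k3 = qdot(q + 0.5 * dt * k2, omega)
    k4 = qdot(q + dt * k3, omega)
    q = q + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return q / np.linalg.norm(q)   # renormalize to a unit quaternion

# rotate about the yaw (z) axis at pi/2 rad/s for 1 s
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = rk4_step(q, [0.0, 0.0, np.pi / 2], dt=0.01)
yaw = np.degrees(np.arctan2(2 * (q[0]*q[3] + q[1]*q[2]),
                            1 - 2 * (q[2]**2 + q[3]**2)))
print(round(yaw, 3))  # -> 90.0
```

Converting the integrated quaternion to roll/pitch/yaw angles, as in the last two lines, is the "quaternion to rotation angle" conversion the abstract mentions for tracking.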
Pages: 59-65
Page count: 7