Line of sight can reflect a person's attention, and estimating it is important in many fields, including computer vision, human-computer interaction, and psychology. It remains difficult, however, because of head-position changes and increasingly demanding precision requirements. This article surveys domestic and international research, develops a trinocular (three-camera), eight-light-source system, and fuses head and eye motion data with deep convolutional neural networks. The work proceeds as follows.

The line-of-sight features analyzed are the eye image, the pupil center, and the Purkinje spot center. Human eyes are first located through face detection. Face detection, eye-image extraction, and tracking of the pupil and Purkinje spot centers are carried out with the non-wearable trinocular, eight-light-source system together with Haar features, the AdaBoost algorithm, and the ASM algorithm. After a face is detected, ASM tracks the facial feature points in the wide-field image and crops the eye region. To improve line-of-sight estimation, this article uses a convolutional neural network to recognize faces at arbitrary positions on the screen; for three random screen orientations, the classification accuracy on face images reaches 99%. The pupil position in the frontal eye image is located with a dual Haar-like feature extractor, and the pupil center is then estimated with morphological processing and ellipse fitting. The Purkinje spot (the corneal reflection) is located by thresholding the image and then applying a Canny edge detection operator with a search algorithm; finally, the centroid method yields the spot's center coordinates, from which the line of sight is computed.

The head-pose features extracted in this article are the roll, pitch, and yaw angles together with the three-axis position and velocity in spatial coordinates. Head pose is estimated both from images and from inertial sensors. The rotation quaternion is integrated with acceleration interpolation, Picard approximation, and the fourth-order Runge-Kutta method; for tracking, the integrated head-pose quaternion is converted to rotation angles. The image-based approach fits a face model to the facial feature points, projects the three-dimensional features into two-dimensional space, and then adjusts the model's depth parameters to account for the head-pose angle. Combining the two methods improves the quantification of head motion.
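To make the pupil-center step concrete, the following is a minimal sketch of threshold segmentation, morphological cleanup, and ellipse fitting using OpenCV. The function name, threshold value, and kernel size are illustrative assumptions, not the article's implementation.

```python
import cv2

def pupil_center(eye_gray, thresh=40, kernel_size=5):
    """Estimate the pupil center from a grayscale eye image (illustrative sketch).

    The dark pupil is isolated by inverse thresholding, cleaned up with
    morphological opening and closing, and the largest remaining contour
    is fitted with an ellipse whose center approximates the pupil center.
    """
    # Dark pupil becomes a bright blob after inverse thresholding.
    _, mask = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)

    # Opening removes small specular noise; closing fills holes in the blob.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:  # cv2.fitEllipse needs at least 5 points
        return None
    (cx, cy), _axes, _angle = cv2.fitEllipse(largest)
    return (cx, cy)
```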
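Similarly, a hedged sketch of the Purkinje-spot step: the bright corneal reflection is isolated by thresholding, its boundary traced with a Canny operator, and the centroid of the edge pixels taken as the spot center. The threshold of 220 and the Canny limits are assumed values, and the article's search algorithm is not reproduced.

```python
import cv2
import numpy as np

def purkinje_center(eye_gray, thresh=220):
    """Locate the corneal reflection (Purkinje spot) center (illustrative sketch).

    The glint is much brighter than the surrounding eye, so a high
    threshold isolates it; Canny traces its boundary, and the centroid
    of the edge pixels gives the spot's center coordinates.
    """
    _, bright = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(bright, 50, 150)

    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return None
    # Centroid method: mean of the edge-pixel coordinates.
    return (xs.mean(), ys.mean())
```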
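For the head-pose branch, here is a minimal NumPy sketch of one fourth-order Runge-Kutta step of the quaternion kinematic equation q̇ = ½ q ⊗ (0, ω). The gyro sample at the half-step is assumed to come from interpolation, echoing the article's interpolation idea; the Picard approximation and the image-based correction are omitted.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_derivative(q, omega):
    """q_dot = 0.5 * q ⊗ (0, omega) for body-frame angular rate omega (rad/s)."""
    return 0.5 * quat_mul(q, np.concatenate(([0.0], omega)))

def rk4_step(q, omega_t, omega_mid, omega_next, dt):
    """One RK4 step; gyro rates at t, t + dt/2 (interpolated), and t + dt."""
    k1 = quat_derivative(q, omega_t)
    k2 = quat_derivative(q + 0.5*dt*k1, omega_mid)
    k3 = quat_derivative(q + 0.5*dt*k2, omega_mid)
    k4 = quat_derivative(q + dt*k3, omega_next)
    q_next = q + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return q_next / np.linalg.norm(q_next)  # renormalize to a unit quaternion
```

Converting the updated quaternion to roll, pitch, and yaw then yields the rotation angles used for tracking.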
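As for the image-based three-dimensional to two-dimensional step, the sketch below recovers the head-pose rotation and translation from detected facial feature points with cv2.solvePnP and reprojects a generic 3D face model with cv2.projectPoints under a pinhole camera model. The six-point model coordinates and camera parameters are stand-in assumptions; the article's depth-parameter adjustment is not reproduced.

```python
import cv2
import numpy as np

# Illustrative generic 3D face model (mm); values are stand-ins, not the
# article's model: nose tip, chin, eye outer corners, mouth corners.
MODEL_POINTS = np.array([
    [  0.0,   0.0,   0.0],   # nose tip
    [  0.0, -63.6, -12.5],   # chin
    [-43.3,  32.7, -26.0],   # left eye outer corner
    [ 43.3,  32.7, -26.0],   # right eye outer corner
    [-28.9, -28.9, -24.1],   # left mouth corner
    [ 28.9, -28.9, -24.1],   # right mouth corner
])

def head_pose_and_projection(image_points, focal, center):
    """Recover head pose from 2D feature points, then reproject the model.

    image_points: (6, 2) array of facial feature points (e.g. from ASM).
    Returns the rotation/translation vectors and the 2D reprojection.
    """
    image_points = np.asarray(image_points, dtype=np.float64)
    camera_matrix = np.array([[focal, 0.0, center[0]],
                              [0.0, focal, center[1]],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(4)  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                  camera_matrix, dist_coeffs)
    projected, _ = cv2.projectPoints(MODEL_POINTS, rvec, tvec,
                                     camera_matrix, dist_coeffs)
    return rvec, tvec, projected.reshape(-1, 2)
```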