Deep Learning-Driven Real-Time Facial Expression Tracking and Analysis in Virtual Reality

Author
Liu, Yinuo [1 ]
Affiliation
[1] School of Information Engineering, Northwest A&F University, Xianyang, Shaanxi
Keywords
Deep belief network; Facial expression tracking; Image processing; Multilayer perceptron; Virtual reality
DOI
10.2478/amns-2024-2283
Abstract
In this paper, we use VR equipment to collect facial expression images and normalize the collected images in angle, scale, and gray scale. Directional quantization of image features is performed through 3D gradient computation, and the oriented-gradient histograms of the individual video sub-blocks are then cascaded into the final HOG3D descriptor, completing the extraction of dynamic expression features. To address the high dimensionality of these features, a principal component analysis (PCA) algorithm is used to reduce their dimensionality, and a multilayer perceptron and a deep belief network are jointly used to construct the facial expression tracking and recognition model. Real-time facial expression tracking in virtual reality is then evaluated on two datasets. The results show that the validation accuracy on both datasets A and B peaks at the 120th iteration, while the loss value reaches equilibrium quickly, by the 40th iteration. The dynamic occlusion expression recognition rate of the deep belief network on dataset A (66.52%) is higher than that of the CNN (62.74%), demonstrating that the proposed method effectively improves real-time facial expression tracking performance in virtual reality. This study can help computers better understand human emotions through facial expressions, which is of great significance for the development of human-computer interaction. © 2024 Yinuo Liu, published by Sciendo.
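The pipeline the abstract describes (high-dimensional descriptors, PCA dimensionality reduction, then a neural classifier) can be sketched as follows. This is a minimal illustration using scikit-learn, not the paper's implementation: the feature vectors stand in for HOG3D descriptors, the labels for expression classes, and the deep-belief-network pre-training stage is omitted in favor of a plain multilayer perceptron.

```python
# Hedged sketch of the described pipeline: (pseudo) HOG3D-style features ->
# PCA dimensionality reduction -> multilayer perceptron classifier.
# All data here is synthetic; dimensions are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 300, 512, 6   # e.g. 6 basic expressions
X = rng.normal(size=(n_samples, n_features))     # stand-in for HOG3D descriptors
y = rng.integers(0, n_classes, size=n_samples)   # stand-in expression labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    PCA(n_components=50),                        # reduce feature dimensionality
    MLPClassifier(hidden_layer_sizes=(128, 64),  # MLP in place of the MLP+DBN model
                  max_iter=200, random_state=0),
)
model.fit(X_train, y_train)
preds = model.predict(X_test)   # one predicted expression label per test frame
```

On real data, `X` would be the cascaded per-sub-block gradient histograms, and the number of PCA components would be chosen from the explained-variance curve rather than fixed at 50.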