Research on real-time interaction for the emotion recognition robot based on depthwise separable convolution

Cited by: 0
Authors
Xu G. [1 ,2 ]
Zhao Y. [1 ,2 ]
Guo M. [1 ,2 ]
Jin M. [1 ,2 ]
Affiliations
[1] State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin
[2] Key Laboratory of Electromagnetic Field and Electrical Apparatus Reliability of Hebei Province, Hebei University of Technology, Tianjin
Source
Yi Qi Yi Biao Xue Bao/Chinese Journal of Scientific Instrument | 2019 / Vol. 40 / No. 10
Keywords
Depthwise separable convolution; Emotion recognition; Emotional interaction; Humanoid robots;
DOI
10.19650/j.cnki.cjsi.J1905426
Abstract
Emotion recognition is a research hotspot in artificial intelligence. If a human-robot interaction system can perceive human emotional behavior and express emotion itself, the interaction between robot and human becomes more natural. Humans acquire emotional information mainly through facial expression, semantic intonation and body language. Taking the NAO robot, with its high degree of freedom, as the application platform, a human-robot interaction system is designed for facial emotion recognition and bodily emotion expression. Firstly, a depthwise separable convolution network is introduced to extract and classify facial-expression features (angry, fear, sad, happy, surprise and neutral); the trained network model reaches a prediction accuracy of 0.711 on the FER2013 facial expression test set. Secondly, body movements of the NAO robot are designed and classified according to the six facial emotions. Finally, the real-time expression of the user's emotional state by the robot is tested; the feedback time is within 2 s, and the prediction results over 10 consecutive frames are statistically analyzed. © 2019, Science Press. All rights reserved.
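The paper's network itself is not reproduced in this record. As a minimal sketch of the depthwise separable convolution operation the abstract names (function names, shapes, and the stride-1/valid-padding choice are illustrative assumptions, not the authors' implementation), the operation factors a standard convolution into a per-channel depthwise stage followed by a 1×1 pointwise stage, which is what makes it attractive for a real-time robot platform:

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_w):
    """Depthwise separable convolution (stride 1, valid padding).

    x: input feature map, shape (H, W, C_in)
    depthwise_k: one k x k spatial kernel per input channel, shape (k, k, C_in)
    pointwise_w: 1x1 convolution weights mixing channels, shape (C_in, C_out)
    """
    H, W, c_in = x.shape
    k = depthwise_k.shape[0]
    h_out, w_out = H - k + 1, W - k + 1
    # Depthwise stage: each input channel is filtered by its own kernel,
    # with no mixing across channels.
    dw = np.zeros((h_out, w_out, c_in))
    for c in range(c_in):
        for i in range(h_out):
            for j in range(w_out):
                dw[i, j, c] = np.sum(x[i:i + k, j:j + k, c] * depthwise_k[:, :, c])
    # Pointwise stage: a 1x1 convolution combines the channels.
    return dw @ pointwise_w

def param_counts(k, c_in, c_out):
    """Parameter count of a standard conv vs. its depthwise separable factoring."""
    standard = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out
    return standard, separable
```

For a 3×3 kernel mapping 32 channels to 64, `param_counts(3, 32, 64)` gives 18432 parameters for the standard convolution against 2336 for the separable version, roughly an 8× reduction, which is the efficiency argument behind using this operation for real-time inference.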
Pages: 161-168
Number of pages: 7