Study of emotion recognition based on fusion multi-modal bio-signal with SAE and LSTM recurrent neural network

Cited by: 0
Authors
Li Y.-J. [1 ,2 ,3 ]
Huang J.-J. [1 ,2 ,3 ]
Wang H.-Y. [1 ,2 ,3 ]
Zhong N. [1 ,2 ,3 ,4 ]
Affiliations
[1] International WIC Institute, Beijing University of Technology, Beijing
[2] Beijing Key Laboratory of Magnetic Resonance Imaging and Brain Informatics, Beijing
[3] Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing
[4] Beijing Advanced Innovation Center for Future Internet Technology, Beijing
Source
Journal on Communications
Funding
International Science and Technology Cooperation Program of China; National Natural Science Foundation of China
Keywords
LSTM recurrent neural network; Multi-modal bio-signal emotion recognition; Multi-modal bio-signals fusion; Stacked auto-encoder neural network;
DOI
10.11959/j.issn.1000-436x.2017294
Abstract
In order to achieve more accurate emotion recognition from multi-modal bio-signal features, a novel method was proposed to extract and fuse the signals with stacked auto-encoder (SAE) and LSTM recurrent neural networks. The stacked auto-encoder neural network was used to compress and fuse the features, and a deep LSTM recurrent neural network was employed to classify the emotion states. The results show that the fused multi-modal features provide more useful information than single-modal features, and that the deep LSTM recurrent neural network achieves more accurate emotion classification than other methods, with a highest accuracy of 0.7926. © 2017, Editorial Board of Journal on Communications. All rights reserved.
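To make the fusion-then-classification pipeline described in the abstract concrete, the sketch below arranges an SAE fusion stage in front of a deep LSTM emotion classifier. It is a minimal illustration in PyTorch, not the authors' implementation: the per-time-step feature dimensions (250 EEG and 60 peripheral features), the layer sizes, the two-layer LSTM depth, and the three emotion classes are assumptions chosen for readability and may differ from the paper's actual configuration.

```python
# Minimal sketch (not the authors' code): fuse multi-modal bio-signal features
# with a stacked auto-encoder (SAE), then classify emotion states with a deep
# LSTM. All dimensions and class counts below are illustrative assumptions.
import torch
import torch.nn as nn

class StackedAutoEncoder(nn.Module):
    """Compresses concatenated multi-modal features into a fused representation."""
    def __init__(self, in_dim=310, hidden_dims=(128, 64)):
        super().__init__()
        dims = (in_dim,) + tuple(hidden_dims)
        self.encoder = nn.Sequential(
            *[layer for i in range(len(dims) - 1)
              for layer in (nn.Linear(dims[i], dims[i + 1]), nn.ReLU())])
        rev = dims[::-1]
        self.decoder = nn.Sequential(
            *[layer for i in range(len(rev) - 1)
              for layer in (nn.Linear(rev[i], rev[i + 1]), nn.ReLU())])

    def forward(self, x):
        z = self.encoder(x)           # fused, compressed features
        return z, self.decoder(z)     # reconstruction used for pre-training

class EmotionLSTM(nn.Module):
    """Deep (2-layer) LSTM over a sequence of fused feature vectors."""
    def __init__(self, in_dim=64, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, seq):                    # seq: (batch, time, in_dim)
        out, _ = self.lstm(seq)
        return self.fc(out[:, -1, :])          # classify from the last time step

# Toy usage: EEG and peripheral features concatenated at each time step.
eeg, peripheral = torch.randn(8, 20, 250), torch.randn(8, 20, 60)
fused_in = torch.cat([eeg, peripheral], dim=-1)            # (8, 20, 310)
sae, clf = StackedAutoEncoder(in_dim=310), EmotionLSTM(in_dim=64, n_classes=3)
fused, recon = sae(fused_in)                               # fused: (8, 20, 64)
logits = clf(fused)                                        # (8, 3) emotion scores
```

In this arrangement the SAE would typically be pre-trained on its reconstruction loss and the LSTM trained on the compressed sequences, mirroring the compress-and-fuse then classify steps described in the abstract.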
Pages: 109-120 (11 pages)