Research on Chinese Speech Emotion Recognition Based on Deep Neural Network and Acoustic Features

Cited by: 1
Authors
Lee, Ming-Che [1]
Yeh, Sheng-Cheng [1]
Chang, Jia-Wei [2]
Chen, Zhen-Yi [1]
Affiliations
[1] Ming Chuan Univ, Dept Comp & Commun Engn, Taoyuan 333, Taiwan
[2] Natl Taichung Univ Sci & Technol, Dept Comp Sci & Informat Engn, Taichung 404, Taiwan
Keywords
emotion recognition; deep neural network; acoustic features; SIGNALS; MODEL;
DOI
10.3390/s22134744
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Discipline Classification Code
070302; 081704;
Abstract
In recent years, the use of artificial intelligence for emotion recognition has attracted much attention. Emotion recognition has broad industrial applicability and strong development potential. This research applies speech emotion recognition technology to Chinese speech, with the goal of moving increasingly popular smart home voice assistants and AI service robots from touch-based interfaces to voice-driven operation. A specifically designed Deep Neural Network (DNN) model is proposed to build a Chinese speech emotion recognition system, using 29 acoustic features drawn from acoustic theory as the training attributes of the model. The research also proposes several audio adjustment methods to augment the dataset and improve training accuracy, including waveform adjustment, pitch adjustment, and pre-emphasis. The proposed approach achieved an average emotion recognition accuracy of 88.9% on the CASIA Chinese emotion corpus. The results show that the proposed deep learning model and audio adjustment methods can effectively identify the emotions of short Chinese sentences and can be applied to Chinese voice assistants or integrated with other dialogue applications.
Pages: 16
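
The abstract describes three concrete ingredients: audio adjustment for data augmentation (waveform adjustment, pitch adjustment, pre-emphasis), extraction of acoustic features as training attributes, and a DNN classifier. The sketch below illustrates that kind of pipeline in Python. It is not the authors' implementation: the feature set (MFCC statistics standing in for the paper's 29 acoustic features), the pre-emphasis coefficient, the pitch-shift amounts, the layer sizes, and the emotion label set are all assumptions made for illustration.

```python
# Hypothetical sketch of a speech-emotion pipeline in the spirit of the abstract:
# augmentation (pre-emphasis, pitch shift), acoustic feature extraction, and a small DNN.
# Feature set, layer sizes, and labels are illustrative assumptions, not the paper's exact setup.
import numpy as np
import librosa
import tensorflow as tf

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprise", "fear"]  # assumed label set

def augment(y, sr):
    """Return the original clip plus augmented copies (pre-emphasis, pitch shifts)."""
    variants = [y]
    variants.append(librosa.effects.preemphasis(y, coef=0.97))          # boost high frequencies
    for steps in (-2, 2):                                               # assumed shift amounts
        variants.append(librosa.effects.pitch_shift(y, sr=sr, n_steps=steps))
    return variants

def acoustic_features(y, sr):
    """Fixed-length feature vector: per-coefficient MFCC means and standard deviations."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)                  # shape (13, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])        # 26-dim stand-in

def build_dnn(input_dim, n_classes=len(EMOTIONS)):
    """Small fully connected classifier; layer sizes are illustrative."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch (paths and labels are placeholders):
# y, sr = librosa.load("clip.wav", sr=16000)
# X = np.stack([acoustic_features(v, sr) for v in augment(y, sr)])
# model = build_dnn(X.shape[1])
# model.fit(X, np.zeros(len(X), dtype=int), epochs=10)   # zeros = dummy labels for the demo
```

Each augmented copy keeps the original clip's emotion label, which is how such adjustments enlarge the training set without collecting new recordings.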