Inferring Emotions From Large-Scale Internet Voice Data

Cited by: 13
Authors
Jia, Jia [1 ]
Zhou, Suping [1 ]
Yin, Yufeng [1 ]
Wu, Boya [1 ]
Chen, Wei [2 ]
Meng, Fanbo [2 ]
Wang, Yanfeng [2 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Beijing Natl Res Ctr Informat Sci & Technol, Key Lab Pervas Comp, Minist Educ, Beijing 100084, Peoples R China
[2] Sogou Corp, Beijing 100084, Peoples R China
Keywords
Emotion; Internet voice data; deep sparse neural network; long short-term memory; recognition; features; mood
DOI
10.1109/TMM.2018.2887016
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Discipline Classification Code
0812
Abstract
As voice dialog applications (VDAs, e.g., Siri, Cortana, and Google Now) grow in popularity, inferring emotions from the large-scale internet voice data they generate can help produce more reasonable and humane responses. However, the tremendous number of users behind such data leads to great diversity in accents and expression patterns, so traditional speech emotion recognition methods, which mainly target acted corpora, cannot handle massive and diverse internet voice data effectively. To address this issue, we carry out a series of observations, identify emotion categories suited to large-scale internet voice data, and verify that social attributes (query time, query topic, and user location) serve as indicators for emotion inference. Based on these observations, two strategies are employed. First, we propose a deep sparse neural network that takes acoustic information, textual information, and three indicators (a temporal indicator, a descriptive indicator, and a geo-social indicator) as input. Second, to capture contextual information, we propose a hybrid emotion inference model that uses long short-term memory (LSTM) to model acoustic features and latent Dirichlet allocation (LDA) to extract text features. Experiments on 93,000 utterances collected from the Sogou Voice Assistant (a Chinese counterpart to Siri) validate the effectiveness of the proposed methodologies. Furthermore, we compare the two methodologies and discuss their respective advantages and disadvantages.
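To make the second strategy concrete, the following is a minimal sketch (not the authors' released code) of a hybrid model in the spirit the abstract describes: an LSTM summarizes frame-level acoustic features, LDA topic proportions summarize the transcribed query, and the two are fused for emotion classification. All layer sizes, feature dimensions, the emotion-class count, and the names (HybridEmotionModel, NUM_TOPICS, etc.) are illustrative assumptions, not details taken from the paper.

# Minimal sketch of the hybrid LSTM + LDA idea (illustrative assumptions only).
import torch
import torch.nn as nn
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

NUM_EMOTIONS = 6      # assumed number of emotion categories
ACOUSTIC_DIM = 40     # assumed per-frame acoustic feature size
NUM_TOPICS = 20       # assumed LDA topic count

class HybridEmotionModel(nn.Module):
    """LSTM over acoustic frames + LDA topic vector, fused by a small MLP."""
    def __init__(self, acoustic_dim=ACOUSTIC_DIM, hidden=64,
                 num_topics=NUM_TOPICS, num_classes=NUM_EMOTIONS):
        super().__init__()
        self.lstm = nn.LSTM(acoustic_dim, hidden, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden + num_topics, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, frames, topics):
        # frames: (batch, time, acoustic_dim); topics: (batch, num_topics)
        _, (h_n, _) = self.lstm(frames)
        fused = torch.cat([h_n[-1], topics], dim=1)  # last hidden state + topic vector
        return self.classifier(fused)

# LDA over the transcribed queries yields a per-utterance topic vector.
queries = ["what is the weather today", "play a sad song", "tell me a joke"]
counts = CountVectorizer().fit_transform(queries)
lda = LatentDirichletAllocation(n_components=NUM_TOPICS, random_state=0)
topic_vectors = torch.tensor(lda.fit_transform(counts), dtype=torch.float32)

# Random acoustic frames stand in for real features to show the expected shapes.
model = HybridEmotionModel()
acoustic = torch.randn(len(queries), 120, ACOUSTIC_DIM)  # 120 frames per utterance
logits = model(acoustic, topic_vectors)                  # (3, NUM_EMOTIONS) emotion scores

In the paper's setting such a fused classifier would be trained against the emotion labels of the VDA utterances; the first strategy instead feeds acoustic features, textual features, and the three social-attribute indicators jointly into a deep sparse neural network.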
Pages: 1853-1866
Number of pages: 14