Feature selection in acted speech for the creation of an emotion recognition personalization service

Cited by: 0
Authors
Anagnostopoulos, Christos-Nikolaos [1 ]
Institutions
[1] Univ Aegean, Cultural Technol & Commun Dpt, Intelligent Multimedia & Virtual Real Lab, Mitilini 81100, Lesvos Island, Greece
Source
THIRD INTERNATIONAL WORKSHOP ON SEMANTIC MEDIA ADAPTATION AND PERSONALIZATION, PROCEEDINGS | 2008
DOI
10.1109/SMAP.2008.34
CLC classification number
TP3 [Computing technology, computer technology]
Subject classification number
0812
Abstract
One hundred thirty-three (133) sound/speech features extracted from pitch, Mel Frequency Cepstral Coefficients, energy, and formants were evaluated in order to create a feature set sufficient to discriminate between seven emotions in acted speech. After the appropriate feature selection, Multilayered Perceptrons were trained for emotion recognition on the basis of a 23-input vector, which provides information about the prosody of the speaker over the entire sentence. Several experiments were performed and the results are presented analytically. Extra emphasis was placed on assessing the proposed 23-input vector in a speaker-independent framework, where speakers are not "known" to the classifier. The proposed feature vector achieved promising results (51%) for speaker-independent recognition in seven emotion classes. Moreover, for the problem of classifying high- versus low-arousal emotions, the classifier reaches 86.8% successful recognition.
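The pipeline the abstract describes (rank 133 candidate prosodic features, keep the best 23, and feed them to an MLP) can be sketched on synthetic data. The ANOVA-style F-score ranking below, along with all sample counts and feature indices, is an illustrative assumption for this sketch, not the authors' exact selection method:

```python
import numpy as np

# Hedged sketch: rank 133 candidate features (standing in for pitch, MFCC,
# energy, and formant statistics) by a one-way ANOVA F statistic across the
# 7 emotion classes, then keep the 23 highest-ranked ones. The criterion and
# the synthetic data are assumptions for illustration only.

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 210, 133, 7   # 7 emotions, synthetic data

X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
X[:, :5] += y[:, None] * 0.8   # make the first 5 features class-informative

def f_score(X, y):
    """One-way ANOVA F statistic per feature (between/within class variance)."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall) ** 2
                  for c in classes) / (len(classes) - 1)
    within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                 for c in classes) / (len(y) - len(classes))
    return between / within

scores = f_score(X, y)
selected = np.argsort(scores)[::-1][:23]   # indices of the top-23 features
X_sel = X[:, selected]                     # 23-input vector for the MLP
print(X_sel.shape)
```

The resulting 23-column matrix plays the role of the paper's 23-input vector; any standard MLP implementation (e.g. WEKA's MultilayerPerceptron, which the paper's reference list mentions) could then be trained on it.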
Pages: 116-121 (6 pages)