Mexican Emotional Speech Database Based on Semantic, Frequency, Familiarity, Concreteness, and Cultural Shaping of Affective Prosody

Cited by: 9
Authors
Duville, Mathilde Marie [1 ]
Alonso-Valerdi, Luz Maria [1 ]
Ibarra-Zarate, David I. [1 ]
Affiliations
[1] Tecnologico de Monterrey, Escuela de Ingenieria y Ciencias, Ave. Eugenio Garza Sada 2501, Monterrey 64849, Mexico
Keywords
affective computing; audio database; cross-cultural; machine learning; Mexican Spanish; emotional speech; paralinguistic information; discrete emotions; AFFECTIVE NORMS; SPANISH WORDS; RECOGNITION; FEATURES; EXPRESSION; CLASSIFICATION; VOICE; PERCEPTION; CATEGORIES; DIALECTS
DOI
10.3390/data6120130
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
This paper describes the Mexican Emotional Speech Database (MESD), which contains single-word emotional utterances for anger, disgust, fear, happiness, neutral, and sadness in adult (male and female) and child voices. To validate the emotional prosody of the uttered words, a cubic Support Vector Machine (SVM) classifier was trained on prosodic, spectral, and voice-quality features for each case study: (1) male adult, (2) female adult, and (3) child. In addition, the cultural, semantic, and linguistic shaping of emotional expression was assessed by statistical analysis. The study was registered at BioMed Central as part of the implementation of a published study protocol. Mean emotion classification accuracies reached 93.3%, 89.4%, and 83.3% for male, female, and child utterances, respectively. Statistical analysis confirmed that emotional prosody is shaped by semantic and linguistic features, and a comparison of the MESD with the INTERFACE database for Castilian Spanish highlighted cultural variation in emotional expression. The MESD thus provides reliable content for linguistic emotional prosody shaped by the Mexican cultural environment. To facilitate further investigation, two additional corpora are provided: one controlled for linguistic features and emotional semantics, and one containing words repeated across voices and emotions. The MESD is made freely available.
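The validation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the feature values below are synthetic stand-ins (the actual MESD prosodic, spectral, and voice-quality features are not reproduced here), and a degree-3 polynomial kernel in scikit-learn is assumed as the equivalent of a "cubic" SVM.

```python
# Sketch of emotion-classification validation with a cubic SVM.
# Synthetic features stand in for the real prosodic/spectral/voice-quality
# descriptors extracted from MESD utterances (an assumption, not the
# published feature set).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness"]

# Synthetic stand-in: 60 utterances x 20 acoustic features per emotion,
# with class means shifted so the classes are separable.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(60, 20)) for i in range(6)])
y = np.repeat(np.arange(6), 60)

# Cubic SVM: degree-3 polynomial kernel, with features standardized first.
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))

# 5-fold cross-validated accuracy (stratified by emotion class).
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

In practice each case study (male adult, female adult, child) would be trained and evaluated separately on its own utterances, as the per-voice accuracies in the abstract indicate.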
Pages: 34