Categorical emotion recognition from voice improves during childhood and adolescence

Cited by: 39
Authors
Grosbras, Marie-Helene [1 ]
Ross, Paddy D. [2 ]
Belin, Pascal [3 ,4 ,5 ]
Affiliations
[1] Aix Marseille Univ, CNRS, Lab Neurosci Cognit, Marseille, France
[2] Univ Durham, Dept Psychol, Durham, England
[3] CNRS, La Timone Neurosci Inst, Mixed Res Unit 7289, Marseille, France
[4] Aix Marseille Univ, Marseille, France
[5] Univ Montreal, Dept Psychol, Montreal, PQ, Canada
Source
SCIENTIFIC REPORTS | 2018 / Vol. 8
Funding
Economic and Social Research Council (UK)
Keywords
FACIAL EXPRESSION RECOGNITION; SEX-DIFFERENCES; INFANT DISCRIMINATION; DEVELOPMENTAL-CHANGES; BRAIN-DEVELOPMENT; VOCAL CUES; CHILDREN; AUTISM; FACE; SENSITIVITY;
DOI
10.1038/s41598-018-32868-3
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
Converging evidence demonstrates that emotion processing from facial expressions continues to improve throughout childhood and part of adolescence. Here we investigated whether this is also the case for emotions conveyed by non-linguistic vocal expressions, another key aspect of social interactions. We tested 225 children and adolescents (age 5-17) and 30 adults in a forced-choice labeling task using vocal bursts expressing four basic emotions (anger, fear, happiness and sadness). Mixed-model logistic regressions revealed a small but highly significant change with age, driven mainly by changes in the ability to identify anger and fear. Adult levels of performance were reached between 14 and 15 years of age. In addition, across ages, female participants obtained better scores than male participants, with no significant age-by-sex interaction. These results expand the findings showing that affective prosody understanding improves during childhood; they document, for the first time, continued improvement in vocal affect recognition from early childhood to mid-adolescence, a pivotal period for social maturation.
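The analysis described above can be illustrated with a minimal sketch. This is not the authors' analysis code (they used lme4 in R): the data below are simulated, the effect sizes are invented for illustration, and the per-participant random intercepts of a true mixed model are omitted for brevity, leaving a plain fixed-effects logistic regression of trial accuracy on age and sex.

```python
import numpy as np

# Hypothetical simulated data loosely mirroring the design:
# 255 participants, each contributing one labeling trial here.
rng = np.random.default_rng(0)
n = 255
age = rng.uniform(5, 30, n)          # years
female = rng.integers(0, 2, n)       # 0 = male, 1 = female
# Assumed true effects: accuracy rises with age, small female advantage.
logit_p = -1.0 + 0.12 * age + 0.3 * female
p = 1 / (1 + np.exp(-logit_p))
correct = rng.binomial(1, p)

# Fit the logistic regression by Newton's method (iteratively
# reweighted least squares); a mixed model would additionally
# estimate a random intercept per participant.
X = np.column_stack([np.ones(n), age, female])
beta = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-(X @ beta)))        # predicted P(correct)
    grad = X.T @ (correct - mu)               # score vector
    hess = X.T @ (X * (mu * (1 - mu))[:, None])  # observed information
    beta += np.linalg.solve(hess, grad)

print(beta)  # estimated intercept, age, and sex coefficients
```

On simulated data like this, the recovered age coefficient is positive, mirroring the direction of the developmental effect reported in the abstract.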
Pages: 11
References
85 in total