Perceptual cues in nonverbal vocal expressions of emotion

Cited: 219
Authors
Sauter, Disa A.
Eisner, Frank [1 ]
Calder, Andrew J. [2 ]
Scott, Sophie K. [1 ]
Affiliations
[1] UCL, Inst Cognit Neurosci, London WC1N 3AR, England
[2] MRC Cognit & Brain Sci Unit, Cambridge, England
Funding
UK Medical Research Council; Wellcome Trust (UK);
Keywords
Emotion; Voice; Vocalizations; Acoustics; Nonverbal behaviour; FACIAL EXPRESSIONS; IMPAIRED RECOGNITION; SPEECH; COMMUNICATION; INTENSITY; DISPLAYS; CULTURES; SYSTEM;
DOI
10.1080/17470211003721642
Chinese Library Classification
B84 [Psychology];
Discipline classification code
04 ; 0402 ;
Abstract
Work on facial expressions of emotion (Calder, Burton, Miller, Young, & Akamatsu, 2001) and emotionally inflected speech (Banse & Scherer, 1996) has successfully delineated some of the physical properties that underlie emotion recognition. To identify the acoustic cues used in the perception of nonverbal emotional expressions like laughter and screams, an investigation was conducted into vocal expressions of emotion, using nonverbal vocal analogues of the basic emotions (anger, fear, disgust, sadness, and surprise; Ekman & Friesen, 1971; Scott et al., 1997), and of positive affective states (Ekman, 1992, 2003; Sauter & Scott, 2007). First, the emotional stimuli were categorized and rated to establish that listeners could identify and rate the sounds reliably and to provide confusion matrices. A principal components analysis of the rating data yielded two underlying dimensions, correlating with the perceived valence and arousal of the sounds. Second, acoustic properties of the amplitude, pitch, and spectral profile of the stimuli were measured. A discriminant analysis procedure established that these acoustic measures provided sufficient discrimination between expressions of emotional categories to permit accurate statistical classification. Multiple linear regressions with participants' subjective ratings of the acoustic stimuli showed that all classes of emotional ratings could be predicted by some combination of acoustic measures and that most emotion ratings were predicted by different constellations of acoustic features. The results demonstrate that, similarly to affective signals in facial expressions and emotionally inflected speech, the perceived emotional character of affective vocalizations can be predicted on the basis of their physical features.
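The three analysis steps the abstract describes (principal components analysis of listener ratings, discriminant classification of emotion category from acoustic measures, and multiple linear regression of ratings on acoustic features) can be illustrated on synthetic data. This is a hedged sketch of the general statistical methods only; the data, feature names, and parameters below are invented for illustration and do not reproduce the authors' stimuli, measurements, or results:

```python
# Sketch of the abstract's analysis pipeline on synthetic data (illustrative
# only, not the authors' code or stimuli).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_stimuli, n_emotions = 90, 6

# Synthetic acoustic measures per stimulus (stand-ins for amplitude, pitch,
# and spectral-profile measures), with class-separated means.
labels = np.repeat(np.arange(n_emotions), n_stimuli // n_emotions)
acoustic = rng.normal(size=(n_stimuli, 3)) + labels[:, None]

# Synthetic listener ratings on several emotion scales, made to depend
# (noisily) on the acoustic measures.
weights = rng.normal(size=(3, n_emotions))
ratings = acoustic @ weights + rng.normal(scale=0.1, size=(n_stimuli, n_emotions))

# Step 1: PCA of the rating data. The paper reports two underlying
# dimensions, interpreted as perceived valence and arousal.
pca = PCA(n_components=2).fit(ratings)

# Step 2: discriminant analysis -- do the acoustic measures discriminate
# between emotion categories well enough for statistical classification?
lda = LinearDiscriminantAnalysis().fit(acoustic, labels)
accuracy = lda.score(acoustic, labels)  # should beat chance (1/6) here

# Step 3: multiple linear regression -- predict each rating scale from a
# combination of acoustic measures.
reg = LinearRegression().fit(acoustic, ratings)
r2 = reg.score(acoustic, ratings)
```

On this toy data the classifier in step 2 performs well above the one-in-six chance level and the regressions in step 3 account for most of the rating variance, mirroring the pattern of results the abstract reports in qualitative form only.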
Pages: 2251-2272
Page count: 22
References
49 references in total
[1]  
[Anonymous], HUMAN VOICE
[2]  
[Anonymous], 1993, P-centres in speech: an acoustic analysis
[3]  
[Anonymous], 2003, EMOTIONS REVEALED RE
[4]   Evidence for distinct contributions of form and motion information to the recognition of emotions from body gestures [J].
Atkinson, Anthony P. ;
Tunstall, Mary L. ;
Dittrich, Winand H. .
COGNITION, 2007, 104 (01) :59-72
[5]   Emotion perception from dynamic and static body expressions in point-light and full-light displays [J].
Atkinson, AP ;
Dittrich, WH ;
Gemmell, AJ ;
Young, AW .
PERCEPTION, 2004, 33 (06) :717-746
[6]  
Bänziger T, 2005, SPEECH COMMUN, V46, P252
[7]   EMOTIONAL INTENSITY - MEASUREMENT AND THEORETICAL IMPLICATIONS [J].
BACHOROWSKI, JA ;
BRAATEN, EB .
PERSONALITY AND INDIVIDUAL DIFFERENCES, 1994, 17 (02) :191-199
[8]   Vocal expression and perception of emotion [J].
Bachorowski, JA .
CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE, 1999, 8 (02) :53-57
[9]   VOCAL EXPRESSION OF EMOTION - ACOUSTIC PROPERTIES OF SPEECH ARE ASSOCIATED WITH EMOTIONAL INTENSITY AND CONTEXT [J].
BACHOROWSKI, JA ;
OWREN, MJ .
PSYCHOLOGICAL SCIENCE, 1995, 6 (04) :219-224
[10]   Acoustic profiles in vocal emotion expression [J].
Banse, R ;
Scherer, KR .
JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY, 1996, 70 (03) :614-636