Adaptation to Vocal Expressions Reveals Multistep Perception of Auditory Emotion
Cited: 73
Authors:
Bestelmeyer, Patricia E. G. [1]
Maurage, Pierre [2]
Rouger, Julien [3]
Latinus, Marianne [4]
Belin, Pascal [4,5,6]
Affiliations:
[1] Bangor Univ, Sch Psychol, Bangor LL57 2AS, Gwynedd, Wales
[2] Catholic Univ Louvain, Dept Psychol, Cognit Neurosci & Clin Psychol Res Units, B-1348 Louvain La Neuve, Belgium
[3] Univ Maastricht, Dept Cognit Neurosci, NL-6200 MD Maastricht, Netherlands
[4] Aix Marseille Univ, Inst Neurosci La Timone, Unite Mixte Rech 7289, CNRS, F-13385 Marseille, France
[5] Univ Glasgow, Inst Neurosci & Psychol, Glasgow G12 8QB, Lanark, Scotland
[6] Univ Montreal, McGill Univ, Int Lab Brain Mus & Sound Res, Montreal, PQ H3C 3J7, Canada
Funding:
UK Economic and Social Research Council (ESRC)
Keywords:
fMRI; vocal emotion; voice perception
Keywords Plus:
RIGHT-HEMISPHERE; ACOUSTIC PARAMETERS; AFFECTIVE PROSODY; VOICE IDENTITY; BRAIN NETWORKS; CARRY-OVER; FMRI; IDENTIFICATION; CORTEX; RECOGNITION
DOI: 10.1523/JNEUROSCI.4820-13.2014
Chinese Library Classification:
Q189 [Neuroscience]
Discipline code:
071006
Abstract:
The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information, while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect.
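The continuous carry-over logic summarized in the abstract can be illustrated with a minimal sketch in Python. This is not the authors' code; all variable names, morph levels, and rating values below are hypothetical. The idea is that the response to each stimulus is modeled by its distance from the preceding stimulus, computed once on the physical (morph) scale and once on the perceptual (rating) scale, yielding two adaptation regressors for a GLM:

import numpy as np

# Hypothetical morph positions of a stimulus sequence on the anger-fear
# continuum (0.0 = pure anger, 1.0 = pure fear), in presentation order.
morph_level = np.array([0.0, 0.5, 1.0, 0.25, 0.75, 0.5, 0.0, 1.0])

# Hypothetical mean perceptual ratings of the same stimuli (e.g., perceived
# "fearfulness"); perception is typically a nonlinear function of morph level.
percept = np.array([0.05, 0.40, 0.95, 0.15, 0.80, 0.45, 0.05, 0.95])

# Carry-over regressors: absolute distance between each stimulus and its
# predecessor. A larger distance means less neural adaptation and hence a
# larger predicted BOLD response in regions coding that dimension.
acoustic_dist = np.abs(np.diff(morph_level))  # physical (morph) similarity
percept_dist = np.abs(np.diff(percept))       # perceived-emotion similarity

# Decorrelate the two regressors: remove the variance in perceptual distance
# explained by acoustic distance, so each regressor contributes unique
# variance to the design matrix.
slope_intercept = np.polyfit(acoustic_dist, percept_dist, 1)
percept_unique = percept_dist - np.polyval(slope_intercept, acoustic_dist)

print("acoustic distance:", acoustic_dist)
print("unique perceptual distance:", np.round(percept_unique, 3))

Under this scheme, regions whose activity tracks the acoustic distance correspond to the lower-level stage the authors locate in voice-sensitive auditory cortex and right amygdala, whereas regions tracking the unique perceptual distance correspond to the higher-level stage in anterior insulae and prefrontal cortex.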
Pages: 8098-8105
Page count: 8