Automatic Brain Categorization of Discrete Auditory Emotion Expressions

Cited by: 3
Authors
Talwar, Siddharth [1 ]
Barbero, Francesca M. [1 ]
Calce, Roberta P. [1 ]
Collignon, Olivier [1 ,2 ]
Affiliations
[1] University of Louvain (UCLouvain), Louvain Bionics, Institute of Research in Psychology (IPSY) & Institute of Neuroscience (IoNS), Louvain, Belgium
[2] The Sense Innovation & Research Center, School of Health Sciences, HES-SO Valais-Wallis, Lausanne, Switzerland
Keywords
Emotion; Voice; Categories; EEG; Frequency tagging; Reported hearing loss; Facial expression; Spatiotemporal dynamics; Vocal expressions; Neural responses; Basic emotions; Recognition; Voices; Amygdala; Communication
DOI
10.1007/s10548-023-00983-8
Chinese Library Classification
R74 [Neurology and Psychiatry]
Abstract
Seamlessly extracting emotional information from voices is crucial for efficient interpersonal communication. However, it remains unclear how the brain categorizes vocal expressions of emotion beyond the processing of their acoustic features. In our study, we developed a new approach combining electroencephalographic (EEG) recordings in humans with a frequency-tagging paradigm to 'tag' automatic neural responses to specific categories of emotion expressions. Participants were presented with a periodic stream of heterogeneous non-verbal emotional vocalizations belonging to five emotion categories (anger, disgust, fear, happiness and sadness) at a rate of 2.5 Hz (350 ms stimuli separated by 50 ms silent gaps). Importantly, unbeknownst to the participants, a specific emotion category appeared at a target presentation rate of 0.83 Hz, which would elicit an additional response in the EEG spectrum only if the brain discriminates the target emotion category from the other emotion categories and generalizes across heterogeneous exemplars of that category. Stimuli were matched across emotion categories for harmonicity-to-noise ratio, spectral center of gravity and pitch. Additionally, participants were presented with a scrambled version of the stimuli with identical spectral content and periodicity but disrupted intelligibility. Both types of sequences had comparable envelopes and comparable early auditory peripheral processing, as computed via a simulation of the cochlear response. We observed that, in addition to the responses at the general presentation frequency (2.5 Hz) in both the intact and scrambled sequences, a greater peak at the target emotion presentation rate (0.83 Hz) and its harmonics emerged in the EEG spectrum for the intact sequence compared to the scrambled sequence. This greater response at the target frequency in the intact sequence, together with our stimulus-matching procedure, suggests that the categorical brain response elicited by a specific emotion is at least partially independent of the low-level acoustic features of the sounds. Moreover, responses at the presentation rates of fearful and happy vocalizations showed different topographies and temporal dynamics, suggesting that different discrete emotions are represented differently in the brain. Our paradigm reveals the brain's ability to automatically categorize non-verbal vocal emotion expressions objectively (at a predefined frequency of interest), without requiring a behavioral task, rapidly (within a few minutes of recording time) and robustly (with a high signal-to-noise ratio), making it a useful tool for studying vocal emotion processing and auditory categorization in general, including in populations where behavioral assessments are more challenging.
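To make the timing of the paradigm concrete, below is a minimal sketch in Python of the frequency-tagging logic described in the abstract. The every-third-slot placement of the target emotion follows from the numbers given (2.5 Hz / 3 ≈ 0.83 Hz), but the parameter names, the toy simulated EEG signal, and the neighbouring-bin SNR measure are illustrative assumptions of a standard frequency-tagging analysis, not the authors' actual code or pipeline.

import numpy as np

# Timing from the abstract: 350 ms vocalization + 50 ms silence = 400 ms SOA.
SOA = 0.350 + 0.050
base_rate = 1.0 / SOA                    # general presentation rate: 2.5 Hz
target_every = 3                         # assumption: target in every 3rd slot
target_rate = base_rate / target_every   # 2.5 / 3 = 0.833... Hz ("0.83 Hz")

# Symbolic sequence: the target category ("fear" here, as an example) recurs
# periodically among heterogeneous exemplars of the other four categories.
rng = np.random.default_rng(0)
others = ["anger", "disgust", "happiness", "sadness"]
n_stim = 300                             # 300 stimuli x 0.4 s = 120 s sequence
seq = ["fear" if i % target_every == 0 else str(rng.choice(others))
       for i in range(n_stim)]

def snr_at(freqs, amp, f0, n_neighbours=10, skip=1):
    # Amplitude at the bin nearest f0, divided by the mean amplitude of the
    # surrounding noise bins (skipping the bins immediately adjacent to f0).
    i0 = int(np.argmin(np.abs(freqs - f0)))
    noise = np.concatenate([amp[i0 - skip - n_neighbours:i0 - skip],
                            amp[i0 + skip + 1:i0 + skip + 1 + n_neighbours]])
    return amp[i0] / noise.mean()

# Toy "EEG": broadband noise plus small oscillations at the two tagged rates.
fs = 512
t = np.arange(int(fs * n_stim * SOA)) / fs
eeg = (rng.normal(0.0, 1.0, t.size)
       + 0.3 * np.sin(2 * np.pi * base_rate * t)
       + 0.2 * np.sin(2 * np.pi * target_rate * t))
amp = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

print(seq[:6])                           # fear, x, x, fear, x, x, ...
print(f"SNR at {base_rate:.2f} Hz:  {snr_at(freqs, amp, base_rate):.1f}")
print(f"SNR at {target_rate:.3f} Hz: {snr_at(freqs, amp, target_rate):.1f}")

With a 120 s sequence, the spectrum has a 1/120 Hz resolution, so both tagged rates fall exactly on frequency bins. In the actual study the peaks (and the harmonics of the target rate) are measured in the recorded EEG rather than in a simulated signal; the sketch only illustrates why energy appears at 0.83 Hz exclusively when the brain both discriminates the target category and generalizes across its exemplars.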
Pages: 854-869 (16 pages)