Selective Attention Modulates Early Human Evoked Potentials during Emotional Face-Voice Processing

Cited by: 39
Authors
Ho, Hao Tam [1 ]
Schröger, Erich [2 ]
Kotz, Sonja A. [3 ]
Affiliations
[1] Max Planck Inst Human Cognit & Brain Sci, D-04103 Leipzig, Germany
[2] Univ Leipzig, D-04109 Leipzig, Germany
[3] Univ Manchester, Manchester M13 9PL, Lancs, England
Keywords
CROSS-MODAL INTEGRATION; MULTISENSORY INTEGRATION; AUDIOVISUAL INTEGRATION; FACIAL EXPRESSION; SPATIAL ATTENTION; TIME-COURSE; SPEECH; PERCEPTION; MECHANISMS; PROSODY;
DOI
10.1162/jocn_a_00734
Chinese Library Classification (CLC)
Q189 [Neuroscience];
Discipline Code
071006 ;
Abstract
Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face-voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face-voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 time window showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face-voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective: one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.
Pages: 798-818
Page count: 21