Cerebral Correlates and Statistical Criteria of Cross-Modal Face and Voice Integration

Cited by: 23
Authors
Love, Scott A. [1 ]
Pollick, Frank E. [1 ]
Latinus, Marianne [1 ,2 ]
Affiliations
[1] Univ Glasgow, Sch Psychol, Glasgow G12 8QB, Lanark, Scotland
[2] Univ Glasgow, Ctr Cognit Neuroimaging CCNi, Inst Neurosci & Psychol, Glasgow G12 8QB, Lanark, Scotland
Source
SEEING AND PERCEIVING | 2011, Vol. 24, Issue 4
Funding
UK Economic and Social Research Council;
Keywords
Multisensory; audiovisual; fMRI; speech; connectivity; super-additive; SUPERIOR TEMPORAL SULCUS; AUDITORY-CORTEX ACTIVATION; MULTISENSORY INTEGRATION; SPEECH-PERCEPTION; AUDIOVISUAL INTEGRATION; VISUAL SPEECH; INVERSE EFFECTIVENESS; HUMAN BRAIN; FMRI; RECOGNITION;
DOI
10.1163/187847511X584452
Chinese Library Classification (CLC)
Q6 [Biophysics];
Discipline Code
071011;
Abstract
Perception of faces and voices plays a prominent role in human social interaction, making multisensory integration of cross-modal speech a topic of great interest in cognitive neuroscience. How best to define potential sites of multisensory integration using functional magnetic resonance imaging (fMRI) is currently under debate, with three statistical criteria in frequent use: the super-additive, max and mean criteria. In the present fMRI study, 20 participants were scanned in a block design under three stimulus conditions: dynamic unimodal face, unimodal voice and bimodal face-voice. Using this single dataset, we examined all three statistical criteria in an attempt to define loci of face-voice integration. While the super-additive and mean criteria essentially revealed regions in which one of the unimodal responses was a deactivation, the max criterion proved more stringent and highlighted only the left hippocampus as a potential site of face-voice integration. Psychophysiological interaction analysis showed that connectivity between occipital and temporal cortices increased during bimodal compared to unimodal conditions. We conclude that, when investigating multisensory integration with fMRI, these criteria should be used in conjunction with manipulation of stimulus signal-to-noise ratio and/or cross-modal congruency. (C) Koninklijke Brill NV, Leiden, 2011
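The three criteria named in the abstract compare a voxel's bimodal (AV) response against its unimodal auditory (A) and visual (V) responses in different ways. As a minimal sketch (the function name and the example beta values are illustrative, not taken from the study), the criteria could be checked per voxel like this:

```python
def classify_multisensory(av, a, v):
    """Evaluate three fMRI multisensory-integration criteria for one voxel.

    av, a, v: response estimates (e.g., GLM betas) for the bimodal,
    unimodal-auditory and unimodal-visual conditions (hypothetical values).
    Returns a dict mapping each criterion to whether the voxel passes it.
    """
    return {
        "super_additive": av > a + v,    # AV exceeds the sum of unimodal responses
        "max": av > max(a, v),           # AV exceeds the larger unimodal response
        "mean": av > (a + v) / 2.0,      # AV exceeds the average unimodal response
    }

# A deactivation in one unimodal condition (negative beta) lets the
# super-additive and mean criteria pass even when AV is below the larger
# unimodal response, illustrating why the max criterion is more stringent:
print(classify_multisensory(av=1.0, a=1.2, v=-0.5))
# → {'super_additive': True, 'max': False, 'mean': True}
```

This toy case mirrors the abstract's observation: with one unimodal deactivation, the super-additive and mean criteria are satisfied while the max criterion is not.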
Pages: 351-367
Page count: 17