Neural processes underlying perceptual enhancement by visual speech gestures

Cited by: 139
Authors
Callan, DE
Jones, JA
Munhall, K
Callan, AM
Kroos, C
Vatikiotis-Bateson, E
Affiliations
[1] ATR Int, Human Informat Sci Labs, Seika, Kyoto 6190288, Japan
[2] ATR Int, Brain Act Imaging Ctr, Seika, Kyoto 6190288, Japan
[3] Wilfrid Laurier Univ, Dept Psychol, Waterloo, ON N2L 3C5, Canada
[4] Queens Univ, Dept Psychol, Kingston, ON K7L 3N6, Canada
[5] Univ Munich, Inst Phonet & Sprachliche Kommunicat, D-80539 Munich, Germany
[6] Univ British Columbia, Dept Linguist, Vancouver, BC V5Z 1M9, Canada
Keywords
audiovisual speech perception; crossmodal; mirror neuron; multisensory integration; speechreading;
DOI
10.1097/00001756-200312020-00016
Chinese Library Classification
Q189 [Neuroscience];
Discipline Code
071006;
Abstract
This fMRI study explores the brain regions involved in the perceptual enhancement afforded by observation of visual speech gestures. Subjects passively identified words presented in five conditions: audio-only, audiovisual, audio-only with noise, audiovisual with noise, and visual-only. The brain may use concordant auditory and visual information to enhance perception by integrating the two signals at a converging multisensory site. Consistent with the response properties of multisensory integration sites, enhanced activity in the middle and superior temporal gyrus/sulcus was greatest when concordant audiovisual stimuli were presented with acoustic noise. Activity in brain regions involved in the planning and execution of speech production, observed in response to visual speech presented with degraded or absent auditory stimulation, is consistent with an additional pathway through which speech perception is facilitated by internally simulating the intended speech act of the observed speaker.
Pages: 2213-2218
Page count: 6