The Impact and Status of Carol Fowler's Supramodal Theory of Multisensory Speech Perception

Cited by: 12
Authors
Rosenblum, Lawrence D. [1 ]
Dorsi, Josh [1 ]
Dias, James W. [1 ]
Affiliations
[1] Univ Calif Riverside, Dept Psychol, 900 Univ Ave, Riverside, CA 92521 USA
Keywords
VISUAL SPEECH; SELECTIVE ADAPTATION; PREMOTOR CORTEX; MOTOR CORTEX; AUDITORY SPEECH; LIP MOVEMENTS; BRAIN-REGIONS; TALKING FACES; BROCAS AREA; PHONETIC CONVERGENCE;
DOI
10.1080/10407413.2016.1230373
CLC Classification Number
B84 [Psychology]
Subject Classification Code
04; 0402
Abstract
One important contribution of Carol Fowler's direct approach to speech perception is its account of multisensory perception. This supramodal account proposes a speech function that detects supramodal information available across audition, vision, and touch. This detection allows for the recovery of articulatory primitives that provide the basis of a common currency shared between modalities as well as between perception and production. Common currency allows for perceptual experience to be shared between modalities and supports perceptually guided speaking as well as production-guided perception. In this report, we discuss the contribution and status of the supramodal approach relative to recent research in multisensory speech perception. We argue that the approach has helped motivate a multisensory revolution in perceptual psychology. We then review the new behavioral and neurophysiological research on (a) supramodal information, (b) cross-sensory sharing of experience, and (c) perceptually guided speaking as well as production-guided speech perception. We conclude that Fowler's supramodal theory has fared quite well in light of this research.
Pages: 262-294 (33 pages)
References (228 total)
[11] Bernstein, L. E. (2004). Handbook of Multisensory Processes, p. 203.
[12] Bernstein, L. E., Eberhardt, S. P., & Auer, E. T., Jr. (2014). Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults. Frontiers in Psychology, 5.
[13] Bernstein, L. E., Auer, E. T., Jr., Eberhardt, S. P., & Jiang, J. (2013). Auditory perceptual learning for speech perception can be enhanced by audiovisual training. Frontiers in Neuroscience, 7.
[14] Besle, J., Fort, A., Delpuech, C., & Giard, M. H. (2004). Bimodal speech: early suppressive visual effects in human auditory cortex. European Journal of Neuroscience, 20(8), 2225-2234.
[15] Binder, J. R., Liebenthal, E., Possing, E. T., Medler, D. A., & Ward, B. D. (2004). Neural correlates of sensory and decision processes in auditory object identification. Nature Neuroscience, 7(3), 295-301.
[16] Brancazio, L., Best, C. T., & Fowler, C. A. (2006). Visual influences on perception of speech and nonspeech vocal-tract events. Language and Speech, 49, 21-53.
[17] Callan, D. E., Tsytsarev, V., Hanakawa, T., Callan, A. M., Katsuhara, M., Fukuyama, H., & Turner, R. (2006). Song and speech: Brain regions involved with perception and covert production. NeuroImage, 31(3), 1327-1342.
[18] Callan, D. E., Jones, J. A., & Callan, A. (2014). Multisensory and modality specific processing of visual speech in different regions of the premotor cortex. Frontiers in Psychology, 5.
[19] Callan, D. E., Jones, J. A., Callan, A. M., & Akahane-Yamada, R. (2004). Phonetic perceptual identification by native- and second-language speakers differentially activates brain regions involved with acoustic phonetic processing and those involved with articulatory-auditory/orosensory internal models. NeuroImage, 22(3), 1182-1194.
[20] Callan, D. E., Jones, J. A., Munhall, K., Callan, A. M., Kroos, C., & Vatikiotis-Bateson, E. (2003). Neural processes underlying perceptual enhancement by visual speech gestures. NeuroReport, 14(17), 2213-2218.