Attentional resources contribute to the perceptual learning of talker idiosyncrasies in audiovisual speech

Cited by: 4
Authors
Jesse, Alexandra [1 ]
Kaplan, Elina [1 ]
Affiliations
[1] Univ Massachusetts, Dept Psychol & Brain Sci, 135 Hicks Way, Amherst, MA 01003 USA
Keywords
Speech perception; Perceptual learning; Multisensory processing; SELECTIVE ADAPTATION; PHONETIC RECALIBRATION; LIPREAD SPEECH; VISUAL RECALIBRATION; AUDITORY SPEECH; WORKING-MEMORY; COGNITIVE LOAD; INFORMATION; REPRESENTATIONS; AUTOMATICITY;
DOI
10.3758/s13414-018-01651-x
Chinese Library Classification (CLC)
B84 [Psychology];
Discipline classification code
04; 0402;
Abstract
To recognize audiovisual speech, listeners evaluate and combine information obtained from the auditory and visual modalities. Listeners also use information from one modality to adjust their phonetic categories to a talker's idiosyncrasy encountered in the other modality. In this study, we examined whether the outcome of this cross-modal recalibration relies on attentional resources. In a standard recalibration experiment in Experiment 1, participants heard an ambiguous sound, disambiguated by the accompanying visual speech as either /p/ or /t/. Participants' primary task was to attend to the audiovisual speech while either monitoring a tone sequence for a target tone or ignoring the tones. Listeners subsequently categorized the steps of an auditory /p/-/t/ continuum more often in line with their exposure. The aftereffect of phonetic recalibration was reduced, but not eliminated, by attentional load during exposure. In Experiment 2, participants saw an ambiguous visual speech gesture that was disambiguated auditorily as either /p/ or /t/. At test, listeners categorized the steps of a visual /p/-/t/ continuum more often in line with the prior exposure. Imposing load in the auditory modality during exposure did not reduce the aftereffect of this type of cross-modal phonetic recalibration. Together, these results suggest that auditory attentional resources are needed for the processing of auditory speech and/or for the shifting of auditory phonetic category boundaries. Listeners thus need to dedicate attentional resources in order to accommodate talker idiosyncrasies in audiovisual speech.
Pages: 1006-1019
Page count: 14