The Effect of Visual Articulatory Information on the Neural Correlates of Non-native Speech Sound Discrimination

Cited by: 0
Authors
Plumridge, James M. A. [1]
Barham, Michael P. [1]
Foley, Denise L. [1]
Ware, Anna T. [1]
Clark, Gillian M. [1]
Albein-Urios, Natalia [1]
Hayden, Melissa J. [1]
Lum, Jarrad A. G. [1]
Affiliations
[1] Deakin Univ, Sch Psychol, Cognit Neurosci Unit, Geelong, Vic, Australia
Source
FRONTIERS IN HUMAN NEUROSCIENCE | 2020, Vol. 14
Keywords
audio-visual training; speech processing; speech discrimination; mismatch negativity (MMN); event related potential (ERP); non-native speech sounds; MISMATCH NEGATIVITY MMN; 2ND-LANGUAGE SPEECH; BASIC RESEARCH; PERCEPTION; CONTRASTS; MEMORY; PLASTICITY; ENGLISH; TIME; LIPS;
DOI
10.3389/fnhum.2020.00025
Chinese Library Classification (CLC)
Q189 [Neuroscience];
Subject Classification Code
071006;
Abstract
Behavioral studies have shown that the ability to discriminate between non-native speech sounds improves after seeing how the sounds are articulated. This study examined the influence of visual articulatory information on the neural correlates of non-native speech sound discrimination. English speakers' discrimination of the Hindi dental and retroflex sounds was measured using the mismatch negativity (MMN) event-related potential, before and after they completed one of three 8-min training conditions. In an audio-visual speech training condition (n = 14), each sound was presented with its corresponding visual articulation. In one control condition (n = 14), both sounds were presented with the same visual articulation, resulting in one congruent and one incongruent audio-visual pairing. In another control condition (n = 14), both sounds were presented with the same image of a still face. The control conditions aimed to rule out the possibility that the MMN is influenced by non-specific audio-visual pairings, or by general exposure to the dental and retroflex sounds over the course of the study. The results showed that audio-visual speech training reduced the latency of the MMN but did not affect MMN amplitude. No change in MMN amplitude or latency was observed for the two control conditions. The pattern of results suggests that a relatively short audio-visual speech training session (i.e., 8 min) may increase the speed with which the brain processes non-native speech sound contrasts. The absence of a training effect on MMN amplitude suggests a single session of audio-visual speech training does not lead to the formation of more discrete memory traces for non-native speech sounds. Longer and/or multiple sessions might be needed to influence the MMN amplitude.
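The MMN measure referred to in the abstract is conventionally obtained as a difference wave: the averaged ERP to the frequently repeated (standard) sound is subtracted from the averaged ERP to the rare (deviant) sound, and the amplitude and latency of the resulting negative peak at a fronto-central site are then compared across sessions. The sketch below illustrates that computation with NumPy only; it is not the authors' analysis pipeline, and the simulated epoch arrays, sampling rate, single channel, and 100-250 ms search window are illustrative assumptions.

import numpy as np

# --- Illustrative assumptions (not the authors' pipeline) ---
# Baseline-corrected epochs at one fronto-central channel (e.g., Fz),
# shape (n_trials, n_samples); sampling rate and epoch window are made up.
fs = 500                                 # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / fs)     # -100 ms to +500 ms around sound onset

rng = np.random.default_rng(0)
standard_epochs = rng.normal(0.0, 2.0, size=(400, times.size))  # frequent sound
deviant_epochs = rng.normal(0.0, 2.0, size=(80, times.size))    # rare sound

# Average across trials to obtain the ERP for each sound type (microvolts).
erp_standard = standard_epochs.mean(axis=0)
erp_deviant = deviant_epochs.mean(axis=0)

# The MMN is the deviant-minus-standard difference wave.
mmn_wave = erp_deviant - erp_standard

# Find the most negative point in a typical MMN search window (100-250 ms).
window = (times >= 0.100) & (times <= 0.250)
peak_idx = np.argmin(mmn_wave[window])
mmn_amplitude = mmn_wave[window][peak_idx]        # microvolts (negative peak)
mmn_latency_ms = times[window][peak_idx] * 1000   # milliseconds after onset

print(f"MMN amplitude: {mmn_amplitude:.2f} uV at {mmn_latency_ms:.0f} ms")
# Pre- vs. post-training comparisons would contrast these two values
# (amplitude and latency) across sessions and training groups.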
Pages: 13