The Audiovisual Mismatch Negativity in Predictive and Non-Predictive Speech Stimuli in Older Adults With and Without Hearing Loss

Cited by: 0
Authors
Randazzo, Melissa [1 ]
Smith, Paul J. [2 ]
Priefer, Ryan [1 ,3 ]
Senzer, Deborah R. [1 ]
Froud, Karen [2 ]
Affiliations
[1] Adelphi Univ, Dept Commun Sci & Disorders, Garden City, NY 11530 USA
[2] Columbia Univ, Dept Neurosci & Educ, Teachers Coll, New York, NY 10027 USA
[3] Magstim Inc, Roseville, MN 55113 USA
Keywords
aging-related hearing loss; audiovisual integration; mismatch negativity; speech perception; VISUAL SPEECH; FUNCTIONAL CONNECTIVITY; AGE-DIFFERENCES; SEEING-VOICES; NEURAL BASIS; INTEGRATION; MEMORY; LIPS; MMN; PERCEPTION;
DOI
10.1163/22134808-bja10106
Chinese Library Classification
Q6 [Biophysics];
Discipline Code
071011;
Abstract
Adults with aging-related hearing loss (ARHL) undergo adaptive neural changes that optimize their sensory experience, for example enhanced audiovisual (AV) and predictive processing during speech perception. The mismatch negativity (MMN) event-related potential is an index of central auditory processing; however, it has not been explored as an index of AV and predictive processing in adults with ARHL. In a pilot study we examined the AV MMN in two conditions of a passive oddball paradigm: one AV condition in which the visual component of the stimulus predicts the auditory percept, and an AV control condition in which it does not. In adults with ARHL, evoked responses in the AV conditions occurred in the early MMN time window, whereas older adults with normal hearing showed a later MMN. The findings suggest that adults with ARHL are sensitive to AV incongruity even when the visual signal is not predictive of the auditory signal, and that predictive coding for AV speech processing may be heightened in this population. This paradigm can be used in future studies to measure treatment-related changes, for example via aural rehabilitation, in older adults with ARHL.
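As an illustrative aside (not part of the original record or the authors' analysis pipeline), the sketch below shows in plain NumPy how an MMN difference wave is conventionally derived as the deviant-minus-standard ERP, and how its peak latency can be compared across an early versus a later time window, as the abstract contrasts. The sampling rate, epoch length, window boundaries, and randomly generated "EEG" arrays are placeholder assumptions, not values from the study.

```python
import numpy as np

# Minimal sketch, assuming conventional MMN derivation: average the epochs
# for each stimulus type, then take the deviant-minus-standard difference.
fs = 500                                 # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / fs)     # epoch from -100 ms to 500 ms (assumed)

rng = np.random.default_rng(0)
n_trials, n_samples = 100, times.size
standard_epochs = rng.normal(0, 1, (n_trials, n_samples))  # placeholder EEG (uV)
deviant_epochs = rng.normal(0, 1, (n_trials, n_samples))   # placeholder EEG (uV)

# Difference wave: deviant ERP minus standard ERP.
mmn_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def peak_latency(wave, times, tmin, tmax):
    """Return the latency (s) of the most negative point within [tmin, tmax]."""
    mask = (times >= tmin) & (times <= tmax)
    return times[mask][np.argmin(wave[mask])]

# Early vs. later MMN windows; these boundaries are assumptions for illustration.
early_peak = peak_latency(mmn_wave, times, 0.10, 0.20)
late_peak = peak_latency(mmn_wave, times, 0.20, 0.30)
print(f"Early-window peak: {early_peak * 1000:.0f} ms; "
      f"later-window peak: {late_peak * 1000:.0f} ms")
```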
Pages: 631-659
Page count: 29