Detection and Attention for Auditory, Visual, and Audiovisual Speech in Children with Hearing Loss

Cited by: 6
Authors
Jerger, Susan [1 ,2 ]
Damian, Markus F. [3 ]
Karl, Cassandra [1 ,2 ]
Abdi, Herve [1 ]
Affiliations
[1] Univ Texas Dallas, Sch Behav Brain Sci, GR4-1,800 W Campbell Rd, Richardson, TX 75080 USA
[2] Univ Texas Dallas, Callier Ctr Commun Disorders, Richardson, TX 75083 USA
[3] Univ Bristol, Sch Psychol Sci, Bristol, Avon, England
Keywords
Attention; Audiovisual speech; Children; Hearing loss; Lipreading; Multisensory speech; Speech detection; Visual speech; Response-time distributions; Multisensory integration; Sustained attention; Stimulus intensity; Moving faces; Perception; Discrimination; Identification; Language; Performance
DOI
10.1097/AUD.0000000000000798
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject Classification Codes
100104; 100213
Abstract
Objectives: Efficient multisensory speech detection is critical for children, who must quickly detect and encode a rapid stream of speech to participate in conversations and to access the audiovisual cues that underpin speech and language development; yet multisensory speech detection remains understudied in children with hearing loss (CHL). This research assessed detection, along with vigilant/goal-directed attention, for multisensory versus unisensory speech in CHL versus children with normal hearing (CNH).

Design: Participants were 60 CHL who used hearing aids and communicated successfully aurally/orally and 60 age-matched CNH. Simple response times determined how quickly children could detect a preidentified, easy-to-hear stimulus (the utterance "buh" at 70 dB SPL, presented in auditory-only [A], visual-only [V], or audiovisual [AV] modes). The V mode comprised two facial conditions: static versus dynamic face. Faster detection for multisensory (AV) than unisensory (A or V) input indicates multisensory facilitation. We assessed mean responses as well as faster versus slower responses (defined by the first versus third quartiles of the response-time distributions), conceptualized as follows: faster responses (first quartile) reflect efficient detection coupled with efficient vigilant/goal-directed attention, whereas slower responses (third quartile) reflect less efficient detection associated with attentional lapses. Finally, we studied associations between these results and personal characteristics of the CHL.

Results: Unisensory A versus V modes: Both groups showed better detection and attention for A than V input. The A input more readily captured children's attention and minimized attentional lapses, which supports A-bound processing even in CHL who were processing low-fidelity A input. CNH and CHL did not differ in their ability to detect A input at a conversational speech level. Multisensory AV versus A modes: Both groups showed better detection and attention for AV than A input. The advantage for AV input was a general facial effect (it held for both static and dynamic faces), a pattern suggesting that communication is a social interaction that is more than just words. Attention did not differ between groups; detection was faster in CHL than CNH for AV input, but not for A input. Associations between personal characteristics/degree of hearing loss of the CHL and results: CHL with the greatest deficits in detection of V input had the poorest word recognition skills, and CHL with the greatest reduction of attentional lapses from AV input had the poorest vocabulary skills. Both outcomes are consistent with the idea that CHL who are processing low-fidelity A input depend disproportionately on V and AV input to learn to identify words and associate them with concepts. As CHL aged, attention to V input improved. Degree of hearing loss did not influence results.

Conclusions: Understanding speech, a daily challenge for CHL, is a complex task that demands efficient detection of and attention to AV speech cues. Our results support the clinical importance of multisensory approaches for understanding and advancing spoken communication by CHL.
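Because the design hinges on comparing mean response times and first/third quartiles of response-time distributions across the A, V, and AV modes, a minimal sketch of that analysis may help make it concrete. The sketch below is an illustration on synthetic data, not the authors' analysis code; all variable names, distribution parameters, and the facilitation measure are assumptions introduced for demonstration.

```python
# Illustrative sketch (not the authors' code) of the quartile-based
# response-time (RT) analysis described in the abstract.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulate simple-RT trials (in ms) for one child in three modes:
# auditory only (A), visual only (V), audiovisual (AV).
# Gamma-shaped RT distributions and their parameters are assumptions.
trials = pd.DataFrame({
    "mode": np.repeat(["A", "V", "AV"], 100),
    "rt_ms": np.concatenate([
        rng.gamma(8.0, 40.0, 100) + 150,  # A
        rng.gamma(8.0, 50.0, 100) + 180,  # V: slowest mode
        rng.gamma(8.0, 35.0, 100) + 140,  # AV: fastest mode
    ]),
})

# Mean RT plus first and third quartiles per mode:
# Q1 indexes efficient detection / goal-directed attention,
# Q3 indexes less efficient detection / attentional lapses.
summary = trials.groupby("mode")["rt_ms"].agg(
    mean_rt="mean",
    q1=lambda x: x.quantile(0.25),
    q3=lambda x: x.quantile(0.75),
)

# Multisensory facilitation: AV detection faster than the
# best (fastest) unisensory mode.
facilitation = summary.loc[["A", "V"], "mean_rt"].min() - summary.loc["AV", "mean_rt"]
print(summary.round(1))
print(f"AV facilitation vs. best unisensory mode: {facilitation:.1f} ms")
```

In practice such summaries would be computed per child and condition and then compared across the CHL and CNH groups; the synthetic parameters here merely reproduce the qualitative pattern (AV fastest, V slowest) reported in the abstract.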
Pages: 508-520
Page count: 13